WorldWideScience

Sample records for optimisation equilibrium model

  1. Medium-term generation programming in competitive environments: a new optimisation approach for market equilibrium computing

    International Nuclear Information System (INIS)

    Barquin, J.; Centeno, E.; Reneses, J.

    2004-01-01

    The paper proposes a model to represent medium-term hydro-thermal operation of electrical power systems in deregulated frameworks. The model objective is to compute the oligopolistic market equilibrium point in which each utility maximises its profit, based on other firms' behaviour. This problem is not an optimisation one. The main contribution of the paper is to demonstrate that, nevertheless, under some reasonable assumptions, it can be formulated as an equivalent minimisation problem. A computer program has been coded by using the proposed approach. It is used to compute the market equilibrium of a real-size system. (author)
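
    The abstract states, but does not reproduce, the equivalence between the market-equilibrium conditions and a single minimisation. As a generic illustration of the idea only (not the authors' formulation), consider a Cournot-type market with linear inverse demand p(Q) = a - bQ and convex cost functions C_i; all symbols below are assumptions, not the paper's notation:

    ```latex
    % Each firm i maximises its own profit, taking the others' output as given:
    \max_{q_i \ge 0} \; \pi_i = q_i\,p(Q) - C_i(q_i), \qquad Q = \sum_j q_j ,
    % which gives the first-order (equilibrium) conditions
    a - bQ - b\,q_i - C_i'(q_i) = 0 \quad \forall i .
    % These are exactly the stationarity conditions of one concave programme,
    \max_{q \ge 0} \; F(q) = aQ - \tfrac{b}{2}Q^2 - \tfrac{b}{2}\sum_i q_i^2 - \sum_i C_i(q_i),
    % so the oligopolistic equilibrium can be computed by minimising -F once;
    % this is the kind of equivalent-minimisation reformulation the abstract refers to.
    ```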

  2. Comprehensive optimisation of China’s energy prices, taxes and subsidy policies based on the dynamic computable general equilibrium model

    International Nuclear Information System (INIS)

    He, Y.X.; Liu, Y.Y.; Du, M.; Zhang, J.X.; Pang, Y.X.

    2015-01-01

    Highlights: • Energy policy is defined as a compilation of energy price, tax and subsidy policies. • The maximisation of total social benefit is the optimisation objective. • A more rational carbon tax ranges from 10 to 20 Yuan/ton under the current situation. • The optimal coefficient pricing is more conducive to maximising total social benefit. - Abstract: Under conditions of increasingly serious environmental pollution, rational energy policy plays an important role in energy conservation and emission reduction. This paper defines energy policies as the compilation of energy prices, taxes and subsidy policies. Moreover, it establishes an optimisation model of China’s energy policy based on a dynamic computable general equilibrium model, which maximises the total social benefit, in order to explore the comprehensive influences of a carbon tax, the sales pricing mechanism and the renewable energy fund policy. The results show that when the change rates of gross domestic product and consumer price index are ±2%, ±5% and the renewable energy supply structure ratio is 7%, the more reasonable carbon tax ranges from 10 to 20 Yuan/ton, and the optimal coefficient pricing mechanism is more conducive to the objective of maximising the total social benefit. From the perspective of optimising the overall energy policies, if the upper limit of change rate in consumer price index is 2.2%, the existing renewable energy fund should be improved

  3. Optimisation of timetable-based, stochastic transit assignment models based on MSA

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Frederiksen, Rasmus Dyhr

    2006-01-01

    (CRM), such a large-scale transit assignment model was developed and estimated. The Stochastic User Equilibrium problem was solved by the Method of Successive Averages (MSA). However, the model suffered from very large calculation times. The paper focuses on how to optimise transit assignment models...
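
    The abstract is truncated, but the Method of Successive Averages it names is a standard fixed-point scheme. A minimal, generic sketch follows (not the authors' implementation; `loaded_flows` is a hypothetical stand-in for one stochastic network loading of the timetable-based model):

    ```python
    import numpy as np

    def msa(loaded_flows, x0, n_iter=100):
        """Method of Successive Averages for a stochastic user equilibrium:
        x_{k+1} = x_k + (y_k - x_k) / (k + 1), where y_k are the auxiliary
        flows obtained by a stochastic network loading of the current flows x_k."""
        x = np.asarray(x0, dtype=float)
        for k in range(1, n_iter + 1):
            y = loaded_flows(x)          # auxiliary flows from the loading model
            x = x + (y - x) / (k + 1)    # decreasing step: sum of steps diverges, sum of squares converges
        return x
    ```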

  4. Turbulence optimisation in stellarator experiments

    Energy Technology Data Exchange (ETDEWEB)

    Proll, Josefine H.E. [Max-Planck/Princeton Center for Plasma Physics (Germany); Max-Planck-Institut fuer Plasmaphysik, Wendelsteinstr. 1, 17491 Greifswald (Germany); Faber, Benjamin J. [HSX Plasma Laboratory, University of Wisconsin-Madison, Madison, WI 53706 (United States); Helander, Per; Xanthopoulos, Pavlos [Max-Planck/Princeton Center for Plasma Physics (Germany); Lazerson, Samuel A.; Mynick, Harry E. [Plasma Physics Laboratory, Princeton University, P.O. Box 451 Princeton, New Jersey 08543-0451 (United States)

    2015-05-01

    Stellarators, the twisted siblings of the axisymmetric fusion experiments called tokamaks, have historically suffered from confining the heat of the plasma insufficiently compared with tokamaks and were therefore considered to be less promising candidates for a fusion reactor. This has changed, however, with the advent of stellarators in which the laminar transport is reduced to levels below that of tokamaks by shaping the magnetic field accordingly. As in tokamaks, the turbulent transport remains as the now dominant transport channel. Recent analytical theory suggests that the large configuration space of stellarators allows for an additional optimisation of the magnetic field to also reduce the turbulent transport. In this talk, the idea behind the turbulence optimisation is explained. We also present how an optimised equilibrium is obtained and how it might differ from the equilibrium field of an already existing device, and we compare experimental turbulence measurements in different configurations of the HSX stellarator in order to test the optimisation procedure.

  5. Optimisation of BPMN Business Models via Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for the optimisation of business processes modelled in the business process modelling language BPMN, which builds upon earlier work, where we developed a model checking based method for the analysis of BPMN models. We define a structure for expressing optimisation goals...... for synthesized BPMN components, based on probabilistic computation tree logic and real-valued reward structures of the BPMN model, allowing for the specification of complex quantitative goals. We here present a simple algorithm, inspired by concepts from evolutionary algorithms, which iteratively generates...

  6. Navigating catastrophes: Local but not global optimisation allows for macro-economic navigation of crises

    Science.gov (United States)

    Harré, Michael S.

    2013-02-01

    Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.

  7. Equilibrium models and variational inequalities

    CERN Document Server

    Konnov, Igor

    2007-01-01

    The concept of equilibrium plays a central role in various applied sciences, such as physics (especially mechanics), economics, engineering, transportation, sociology, chemistry, biology and other fields. If one can formulate the equilibrium problem in the form of a mathematical model, solutions of the corresponding problem can be used for forecasting the future behavior of very complex systems and, also, for correcting the current state of the system under control. This book presents a unifying look on different equilibrium concepts in economics, including several models from related sciences. • Presents a unifying look on different equilibrium concepts and the present state of investigations in this field • Describes static and dynamic input-output models, Walras, Cassel-Wald, spatial price, auction market and oligopolistic equilibrium models, and transportation and migration equilibrium models • Covers the basics of theory and solution methods both for the complementarity and variational inequality probl...

  8. Optimisation of NMR dynamic models II. A new methodology for the dual optimisation of the model-free parameters and the Brownian rotational diffusion tensor

    International Nuclear Information System (INIS)

    D'Auvergne, Edward J.; Gooley, Paul R.

    2008-01-01

    Finding the dynamics of an entire macromolecule is a complex problem as the model-free parameter values are intricately linked to the Brownian rotational diffusion of the molecule, mathematically through the autocorrelation function of the motion and statistically through model selection. The solution to this problem was formulated using set theory as an element of the universal set U, the union of all model-free spaces (d'Auvergne EJ and Gooley PR (2007) Mol BioSyst 3(7), 483-494). The current procedure commonly used to find the universal solution is to initially estimate the diffusion tensor parameters, to optimise the model-free parameters of numerous models, and then to choose the best model via model selection. The global model is then optimised and the procedure repeated until convergence. In this paper a new methodology is presented which takes a different approach to this diffusion-seeded model-free paradigm. Rather than starting with the diffusion tensor, this iterative protocol begins by optimising the model-free parameters in the absence of any global model parameters, selecting between all the model-free models, and finally optimising the diffusion tensor. The new model-free optimisation protocol will be validated using synthetic data from Schurr JM et al. (1994) J Magn Reson B 105(3), 211-224 and the relaxation data of the bacteriorhodopsin (1-36)BR fragment from Orekhov VY (1999) J Biomol NMR 14(4), 345-356. To demonstrate the importance of this new procedure the NMR relaxation data of the Olfactory Marker Protein (OMP) of Gitti R et al. (2005) Biochem 44(28), 9673-9679 is reanalysed. The result is that the dynamics of certain secondary structural elements are very different from those originally reported.

  9. FISHRENT; Bio-economic simulation and optimisation model

    NARCIS (Netherlands)

    Salz, P.; Buisman, F.C.; Soma, K.; Frost, H.; Accadia, P.; Prellezo, R.

    2011-01-01

    Key findings: The FISHRENT model is a major step forward in bio-economic modelling, combining features that have not been fully integrated in earlier models: 1- Incorporation of any number of species (or stock) and/or fleets 2- Integration of simulation and optimisation over a period of 25 years 3-

  10. Crystal structure optimisation using an auxiliary equation of state

    Science.gov (United States)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron

    2015-11-01

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
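
    As an illustrative sketch only (not the authors' code), the third-order Birch–Murnaghan form is a common choice of equation of state for such procedures, and a conventional fit to a handful of single-point energies already yields the equilibrium volume directly; all numbers below are invented:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def birch_murnaghan(V, E0, V0, B0, B0p):
        """Third-order Birch-Murnaghan equation of state E(V)."""
        eta = (V0 / V) ** (2.0 / 3.0)
        return E0 + 9.0 * V0 * B0 / 16.0 * (
            (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
        )

    # cell volumes (A^3) and total energies (eV) from a few single-point
    # calculations; illustrative values only
    volumes = np.array([48.0, 50.0, 52.0, 54.0, 56.0])
    energies = np.array([-10.10, -10.22, -10.26, -10.24, -10.17])

    p0 = [energies.min(), volumes[np.argmin(energies)], 0.5, 4.0]
    (E0, V0, B0, B0p), _ = curve_fit(birch_murnaghan, volumes, energies, p0=p0)
    print(f"predicted equilibrium volume: {V0:.2f} A^3")
    ```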

  11. Crystal structure optimisation using an auxiliary equation of state

    International Nuclear Information System (INIS)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron (Centre for Sustainable Chemical Technologies and Department of Chemistry, University of Bath, Claverton Down, Bath BA2 7AY (United Kingdom); Global E3 Institute and Department of Materials Science and Engineering, Yonsei University, Seoul 120-749 (Korea, Republic of))

    2015-01-01

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy–volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other “beyond” density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.

  12. Crystal structure optimisation using an auxiliary equation of state

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T. [Centre for Sustainable Chemical Technologies and Department of Chemistry, University of Bath, Claverton Down, Bath BA2 7AY (United Kingdom); Walsh, Aron, E-mail: a.walsh@bath.ac.uk [Centre for Sustainable Chemical Technologies and Department of Chemistry, University of Bath, Claverton Down, Bath BA2 7AY (United Kingdom); Global E3 Institute and Department of Materials Science and Engineering, Yonsei University, Seoul 120-749 (Korea, Republic of)

    2015-11-14

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy–volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other “beyond” density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.

  13. A Multiperiod Equilibrium Pricing Model

    Directory of Open Access Journals (Sweden)

    Minsuk Kwak

    2014-01-01

    Full Text Available We propose an equilibrium pricing model in a dynamic multiperiod stochastic framework with uncertain income. There are one tradable risky asset (stock/commodity), one nontradable underlying (temperature), and also a contingent claim (weather derivative) written on the tradable risky asset and the nontradable underlying in the market. The contingent claim is priced in equilibrium via the optimal strategies of a representative agent and the market clearing condition. The risk preferences are of exponential type with a stochastic coefficient of risk aversion. Both subgame perfect and naive strategies are considered and the corresponding equilibrium prices are derived. From the numerical results we examine how the equilibrium prices vary in response to changes in model parameters and highlight the importance of our equilibrium pricing principle.

  14. Non-equilibrium dog-flea model

    Science.gov (United States)

    Ackerson, Bruce J.

    2017-11-01

    We develop the open dog-flea model to serve as a check of proposed non-equilibrium theories of statistical mechanics. The model is developed in detail. Then it is applied to four recent models for non-equilibrium statistical mechanics. Comparison of the dog-flea solution with these different models allows checking claims and giving a concrete example of the theoretical models.

  15. Non-equilibrium modelling of distillation

    NARCIS (Netherlands)

    Wesselingh, JA; Darton, R

    1997-01-01

    There are nasty conceptual problems in the classical way of describing distillation columns via equilibrium stages, and efficiencies or HETP's. We can nowadays avoid these problems by simulating the behaviour of a complete column in one go using a non-equilibrium model. Such a model has phase

  16. A national optimisation model for energy wood streams; Energiapuuvirtojen valtakunnallinen optimointimalli

    Energy Technology Data Exchange (ETDEWEB)

    Iikkanen, P.; Keskinen, S.; Korpilahti, A.; Raesaenen, T.; Sirkiae, A.

    2011-07-01

    In 2010 a total of 12.5 terawatt hours of forest energy was used in Finland's heat and power plants. According to studies by Metsaeteho and Poeyry, use of energy wood will nearly double to 21.6 terawatt hours by 2020. There are also plans to use energy wood as a raw material for biofuel plants. The techno-ecological supply potential of energy wood in 2020 is estimated at 42.9 terawatt hours. Energy wood has been transported almost entirely by road. The situation is changing, however, because growing demand for energy wood will expand raw wood procurement areas and lengthen transport distances. A cost-effective transport system therefore also requires the use of rail and waterway transports. In Finland, however, there is an almost complete absence of the terminals required for rail and waterway transports, where energy wood is chipped, temporarily stored and loaded onto railway wagons and vessels for further transport. A national optimisation model for energy wood has been developed to serve transport system planning in particular. The linear optimisation model optimises, on a national level, goods streams between supply points and usage points based on forest energy procurement costs. The model simultaneously covers deliveries of forest chips, stumps and small-sized thinning wood. The procurement costs used in the optimisation include the costs of the energy wood's roadside price, chipping, transport and terminal handling. The transport system described in the optimisation model consists of wood supply points (2007 municipality precision), wood usage points, railway terminals and the connections between them along the main road and rail network. Elements required for the examination of waterway transports can also be easily added to the model. The optimisation model can be used to examine, for example, the effects of changes of energy wood demand and supply as well as transport costs on energy wood goods streams, the relative use of different
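
    As a toy illustration of the kind of linear optimisation of goods streams described above (not the actual national model; the two-by-two network and all numbers are invented), the core is a standard transportation problem:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # cost[i, j]: procurement cost (roadside price + chipping + transport +
    # terminal handling) per unit moved from supply point i to usage point j
    cost = np.array([[12.0, 18.0],
                     [15.0, 11.0]])
    supply = np.array([100.0, 80.0])   # energy wood available at each supply point
    demand = np.array([90.0, 70.0])    # requirement at each usage point

    c = cost.ravel()                               # decision variables x[i, j], row-wise
    A_ub = np.kron(np.eye(2), np.ones(2))          # sum_j x[i, j] <= supply[i]
    A_eq = np.kron(np.ones(2), np.eye(2))          # sum_i x[i, j] == demand[j]

    res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                  bounds=[(0, None)] * 4)
    print(res.x.reshape(2, 2), res.fun)            # optimal goods streams and cost
    ```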

  17. Energy efficiency optimisation for distillation column using artificial neural network models

    International Nuclear Information System (INIS)

    Osuolale, Funmilayo N.; Zhang, Jie

    2016-01-01

    This paper presents a neural network based strategy for the modelling and optimisation of energy efficiency in distillation columns incorporating the second law of thermodynamics. Real-time optimisation of distillation columns based on mechanistic models is often infeasible due to the effort in model development and the large computation effort associated with mechanistic model computation. This issue can be addressed by using neural network models which can be quickly developed from process operation data. The computation time in neural network model evaluation is very short making them ideal for real-time optimisation. Bootstrap aggregated neural networks are used in this study for enhanced model accuracy and reliability. Aspen HYSYS is used for the simulation of the distillation systems. Neural network models for exergy efficiency and product compositions are developed from simulated process operation data and are used to maximise exergy efficiency while satisfying products qualities constraints. Applications to binary systems of methanol-water and benzene-toluene separations culminate in a reduction of utility consumption of 8.2% and 28.2% respectively. Application to multi-component separation columns also demonstrate the effectiveness of the proposed method with a 32.4% improvement in the exergy efficiency. - Highlights: • Neural networks can accurately model exergy efficiency in distillation columns. • Bootstrap aggregated neural network offers improved model prediction accuracy. • Improved exergy efficiency is obtained through model based optimisation. • Reductions of utility consumption by 8.2% and 28.2% were achieved for binary systems. • The exergy efficiency for multi-component distillation is increased by 32.4%.
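
    A minimal sketch of what bootstrap aggregated neural networks generally mean in this setting, assuming scikit-learn as the implementation vehicle (the authors' actual network structure and software are not stated in the abstract; the data below are synthetic):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.utils import resample

    def fit_bagged_nets(X, y, n_models=10, random_state=0):
        """Fit several small networks on bootstrap resamples of the process data."""
        rng = np.random.RandomState(random_state)
        models = []
        for _ in range(n_models):
            Xb, yb = resample(X, y, random_state=rng.randint(1_000_000))
            net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                               random_state=rng.randint(1_000_000))
            models.append(net.fit(Xb, yb))
        return models

    def predict_bagged(models, X):
        """Average the individual predictions; the spread is a rough reliability measure."""
        preds = np.stack([m.predict(X) for m in models])
        return preds.mean(axis=0), preds.std(axis=0)

    X = np.random.rand(200, 3)                                   # e.g. operating conditions
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * np.random.randn(200)  # e.g. exergy efficiency
    models = fit_bagged_nets(X, y)
    mean, spread = predict_bagged(models, X[:5])
    ```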

  18. Simulation and optimisation modelling approach for operation of the Hoa Binh Reservoir, Vietnam

    DEFF Research Database (Denmark)

    Ngo, Long le; Madsen, Henrik; Rosbjerg, Dan

    2007-01-01

    Hoa Binh, the largest reservoir in Vietnam, plays an important role in flood control for the Red River delta and hydropower generation. Due to its multi-purpose character, conflicts and disputes in operating the reservoir have been ongoing since its construction, particularly in the flood season....... This paper proposes to optimise the control strategies for the Hoa Binh reservoir operation by applying a combination of simulation and optimisation models. The control strategies are set up in the MIKE 11 simulation model to guide the releases of the reservoir system according to the current storage level......, the hydro-meteorological conditions, and the time of the year. A heuristic global optimisation tool, the shuffled complex evolution (SCE) algorithm, is adopted for optimising the reservoir operation. The optimisation puts focus on the trade-off between flood control and hydropower generation for the Hoa...

  19. A knowledge representation model for the optimisation of electricity generation mixes

    International Nuclear Information System (INIS)

    Chee Tahir, Aidid; Bañares-Alcántara, René

    2012-01-01

    Highlights: ► Prototype energy model which uses semantic representation (ontologies). ► Model accepts both quantitative and qualitative based energy policy goals. ► Uses logic inference to formulate equations for linear optimisation. ► Proposes electricity generation mix based on energy policy goals. -- Abstract: Energy models such as MARKAL, MESSAGE and DNE-21 are optimisation tools which aid in the formulation of energy policies. The strength of these models lie in their solid theoretical foundations built on rigorous mathematical equations designed to process numerical (quantitative) data related to economics and the environment. Nevertheless, a complete consideration of energy policy issues also requires the consideration of the political and social aspects of energy. These political and social issues are often associated with non-numerical (qualitative) information. To enable the evaluation of these aspects in a computer model, we hypothesise that a different approach to energy model optimisation design is required. A prototype energy model that is based on a semantic representation using ontologies and is integrated to engineering models implemented in Java has been developed. The model provides both quantitative and qualitative evaluation capabilities through the use of logical inference. The semantic representation of energy policy goals is used (i) to translate a set of energy policy goals into a set of logic queries which is then used to determine the preferred electricity generation mix and (ii) to assist in the formulation of a set of equations which is then solved in order to obtain a proposed electricity generation mix. Scenario case studies have been developed and tested on the prototype energy model to determine its capabilities. Knowledge queries were made on the semantic representation to determine an electricity generation mix which fulfilled a set of energy policy goals (e.g. CO2 emissions reduction, water conservation, energy supply

  20. Helical axis stellarator equilibrium model

    International Nuclear Information System (INIS)

    Koniges, A.E.; Johnson, J.L.

    1985-02-01

    An asymptotic model is developed to study MHD equilibria in toroidal systems with a helical magnetic axis. Using a characteristic coordinate system based on the vacuum field lines, the equilibrium problem is reduced to a two-dimensional generalized partial differential equation of the Grad-Shafranov type. A stellarator-expansion free-boundary equilibrium code is modified to solve the helical-axis equations. The expansion model is used to predict the equilibrium properties of Asperators NP-3 and NP-4. Numerically determined flux surfaces, magnetic well, transform, and shear are presented. The equilibria show a toroidal Shafranov shift

  1. Simulation optimisation

    International Nuclear Information System (INIS)

    Anon

    2010-01-01

    Over the past decade there has been a significant advance in flotation circuit optimisation through performance benchmarking using metallurgical modelling and steady-state computer simulation. This benchmarking includes traditional measures, such as grade and recovery, as well as new flotation measures, such as ore floatability, bubble surface area flux and froth recovery. To further this optimisation, Outotec has released its HSC Chemistry software with simulation modules. The flotation model developed by the AMIRA P9 Project, of which Outotec is a sponsor, is regarded by industry as the most suitable flotation model to use for circuit optimisation. This model incorporates ore floatability with flotation cell pulp and froth parameters, residence time, entrainment and water recovery. Outotec's HSC Sim enables you to simulate mineral processes at different levels of detail, from comminution circuits with sizes and no composition, through to flotation processes with minerals by size by floatability components, to full processes with true particles with MLA data.

  2. MERGE-ETL: An Optimisation Equilibrium Model with Two Different Endogeneous Technological Learning Formulations

    Energy Technology Data Exchange (ETDEWEB)

    Bahn, O.; Kypreos, S.

    2002-07-01

    In MERGE-ETL, endogenous technological progress is applied to eight energy technologies: six power plants (integrated coal gasification with combined cycle, gas turbine with combined cycle, gas fuel cell, new nuclear designs, wind turbine and solar photovoltaic) and two plants producing hydrogen (from biomass and solar photovoltaic). Furthermore, compared to the original MERGE model, we have introduced two new power plants (using coal and gas) with CO2 capture and disposal into depleted oil and gas reservoirs. The difficulty with incorporating endogenous technological progress in MERGE comes from the resulting formulation of the MERGE-ETL model. Indeed, technological learning is related to increasing returns to adoption, and the mathematical formulation of MERGE-ETL corresponds then to a (non-linear and) non-convex optimisation problem. To solve MERGE-ETL, we have devised a three-step heuristic approach, where we search for the global optimum in an iterative way. We use in particular for this a linearisation, following mixed integer programming techniques, of the bottom-up part of MERGE-ETL. To study the impacts of modelling endogenous technological change in MERGE, we have considered several scenarios related to technological learning and carbon control. The latter corresponds to a 'soft landing' of world energy related CO2 emissions to a level of 10 Gt C by 2050, and takes into account the recent (2001) Marrakech Agreements for CO2 emission limits by 2010. Notice that our baseline scenario (without emission control and endogenous technological change) is consistent, in particular in terms of population and CO2 emissions, with the IPCC B2 scenario. Our numerical application with MERGE-ETL shows that technological learning yields an increase of primary energy use and of electricity generation. Indeed, energy production, and in particular electricity generation, become less expensive over-time. Energy (electricity, but also non
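
    The abstract does not reproduce the learning-curve formulation; the standard one-factor experience curve typically used for endogenous technological learning in such bottom-up models is shown below (generic symbols, not MERGE-ETL's notation):

    ```latex
    % specific investment cost SC as a function of cumulative installed capacity C
    SC(C) = a\,C^{-b}, \qquad \text{progress ratio } pr = 2^{-b}, \qquad \text{learning rate } LR = 1 - pr,
    % cumulative investment cost; its concavity in C is what makes the optimisation
    % non-convex and motivates a piecewise-linear (mixed-integer) approximation
    TC(C) = \int_0^{C} SC(s)\,ds = \frac{a}{1-b}\,C^{\,1-b} \qquad (0 < b < 1).
    ```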

  3. Micro Data and General Equilibrium Models

    DEFF Research Database (Denmark)

    Browning, Martin; Hansen, Lars Peter; Heckman, James J.

    1999-01-01

    Dynamic general equilibrium models are required to evaluate policies applied at the national level. To use these models to make quantitative forecasts requires knowledge of an extensive array of parameter values for the economy at large. This essay describes the parameters required for different...... economic models, assesses the discordance between the macromodels used in policy evaluation and the microeconomic models used to generate the empirical evidence. For concreteness, we focus on two general equilibrium models: the stochastic growth model extended to include some forms of heterogeneity...

  4. TEM turbulence optimisation in stellarators

    Science.gov (United States)

    Proll, J. H. E.; Mynick, H. E.; Xanthopoulos, P.; Lazerson, S. A.; Faber, B. J.

    2016-01-01

    With the advent of neoclassically optimised stellarators, optimising stellarators for turbulent transport is an important next step. The reduction of ion-temperature-gradient-driven turbulence has been achieved via shaping of the magnetic field, and the reduction of trapped-electron mode (TEM) turbulence is addressed in the present paper. Recent analytical and numerical findings suggest TEMs are stabilised when a large fraction of trapped particles experiences favourable bounce-averaged curvature. This is the case for example in Wendelstein 7-X (Beidler et al 1990 Fusion Technol. 17 148) and other Helias-type stellarators. Using this knowledge, a proxy function was designed to estimate the TEM dynamics, allowing optimal configurations for TEM stability to be determined with the STELLOPT (Spong et al 2001 Nucl. Fusion 41 711) code without extensive turbulence simulations. A first proof-of-principle optimised equilibrium stemming from the TEM-dominated stellarator experiment HSX (Anderson et al 1995 Fusion Technol. 27 273) is presented for which a reduction of the linear growth rates is achieved over a broad range of the operational parameter space. As an important consequence of this property, the turbulent heat flux levels are reduced compared with the initial configuration.

  5. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    Science.gov (United States)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

    The growing interest in technicians' workloads research is probably associated with the recent surge in competition. This was prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability that complements factory information obtained. The information used emerged from technicians' productivity and earned-values using the concept of multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned-values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.
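
    For reference, the generic weighted goal-programming form that such multi-objective models typically build on is shown below; the authors' exact objectives and constraints are not given in the abstract, so this is only a template:

    ```latex
    % one goal constraint per objective k (e.g. reliability, productivity, earned value)
    \min_{x,\,d^{+},\,d^{-}} \; \sum_k \left( w_k^{+} d_k^{+} + w_k^{-} d_k^{-} \right)
    \quad \text{s.t.} \quad f_k(x) + d_k^{-} - d_k^{+} = t_k, \qquad d_k^{+}, d_k^{-} \ge 0, \qquad x \in X,
    % where t_k is the target for goal k, d_k^{+}, d_k^{-} measure over- and
    % under-achievement, and workload constraints enter through the feasible set X.
    ```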

  6. Comparative evaluation of kinetic, equilibrium and semi-equilibrium models for biomass gasification

    Energy Technology Data Exchange (ETDEWEB)

    Buragohain, Buljit [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Chakma, Sankar; Kumar, Peeush [Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Mahanta, Pinakeswar [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Mechanical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Moholkar, Vijayanand S. [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India)

    2013-07-01

    Modeling of biomass gasification has been an active area of research for the past two decades. In the published literature, three approaches have been adopted for the modeling of this process, viz. thermodynamic equilibrium, semi-equilibrium and kinetic. In this paper, we have attempted to present a comparative assessment of these three types of models for predicting the outcome of the gasification process in a circulating fluidized bed gasifier. Two model biomasses, viz. rice husk and wood particles, have been chosen for analysis, with the gasification medium being air. Although the trends in molar composition, net yield and LHV of the producer gas predicted by the three models are in concurrence, a significant quantitative difference is seen in the results. Due to the rather slow kinetics of char gasification and tar oxidation, the carbon conversion achieved in a single pass of biomass through the gasifier, calculated using the kinetic model, is quite low, which adversely affects the yield and LHV of the producer gas. Although the equilibrium and semi-equilibrium models reveal relative insensitivity of producer gas characteristics towards temperature, the kinetic model shows a significant effect of temperature on the LHV of the gas at low air ratios. Kinetic models also reveal the volume of the gasifier to be an insignificant parameter, as the net yield and LHV of the gas resulting from 6 m and 10 m risers are the same. On the whole, the analysis presented in this paper indicates that thermodynamic models are useful tools for quantitative assessment of the gasification process, while kinetic models provide a physically more realistic picture.

  7. Parameter Optimisation for the Behaviour of Elastic Models over Time

    DEFF Research Database (Denmark)

    Mosegaard, Jesper

    2004-01-01

    Optimisation of parameters for elastic models is essential for comparison or finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method tha...

  8. Non-equilibrium Quasi-Chemical Nucleation Model

    Science.gov (United States)

    Gorbachev, Yuriy E.

    2018-04-01

    The quasi-chemical model, which is widely used for nucleation description, is revised on the basis of recent results on non-equilibrium effects in reacting gas mixtures (Kolesnichenko and Gorbachev in Appl Math Model 34:3778-3790, 2010; Shock Waves 23:635-648, 2013; Shock Waves 27:333-374, 2017). Non-equilibrium effects in chemical reactions are caused by the chemical reactions themselves, and therefore these contributions should be taken into account in the corresponding expressions for the reaction rates. Corrections to quasi-equilibrium reaction rates are of two types: (a) spatially homogeneous (caused by physical-chemical processes) and (b) spatially inhomogeneous (caused by gas expansion/compression processes and proportional to the velocity divergence). Both of these processes play an important role during nucleation and are included in the proposed model. The method developed for solving the generalized Boltzmann equation for chemically reactive gases is applied to solving the set of equations of the revised quasi-chemical model. It is shown that non-equilibrium processes lead to an essential deviation of the quasi-stationary distribution, and therefore of the nucleation rate, from its traditional form.

  9. Statistical meandering wake model and its application to yaw-angle optimisation of wind farms

    DEFF Research Database (Denmark)

    Thøgersen, Emil; Tranberg, Bo; Herp, Jürgen

    2017-01-01

    deterministic models to a statistical meandering wake model (SMWM), where a random directional deflection is assigned to a narrow wake in such a way that on average it resembles a broad Jensen wake. In a second step, the model is further generalised to wind-farm level, where the deflections of the multiple...... wakes are treated as independently and identically distributed random variables. When carefully calibrated to the Nysted wind farm, the ensemble average of the statistical model produces the same wind-direction dependence of the power efficiency as obtained from the standard Jensen model. Upon using...... the JWM to perform a yaw-angle optimisation of wind-farm power output, we find an optimisation gain of 6.7% for the Nysted wind farm when compared to zero yaw angles and averaged over all wind directions. When applying the obtained JWM-based optimised yaw angles to the SMWM, the ensemble-averaged gain...

  10. Statistical meandering wake model and its application to yaw-angle optimisation of wind farms

    International Nuclear Information System (INIS)

    Thøgersen, E; Tranberg, B; Greiner, M; Herp, J

    2017-01-01

    The wake produced by a wind turbine is dynamically meandering and of rather narrow nature. Only when looking at large time averages, the wake appears to be static and rather broad, and is then well described by simple engineering models like the Jensen wake model (JWM). We generalise the latter deterministic models to a statistical meandering wake model (SMWM), where a random directional deflection is assigned to a narrow wake in such a way that on average it resembles a broad Jensen wake. In a second step, the model is further generalised to wind-farm level, where the deflections of the multiple wakes are treated as independently and identically distributed random variables. When carefully calibrated to the Nysted wind farm, the ensemble average of the statistical model produces the same wind-direction dependence of the power efficiency as obtained from the standard Jensen model. Upon using the JWM to perform a yaw-angle optimisation of wind-farm power output, we find an optimisation gain of 6.7% for the Nysted wind farm when compared to zero yaw angles and averaged over all wind directions. When applying the obtained JWM-based optimised yaw angles to the SMWM, the ensemble-averaged gain is calculated to be 7.5%. This outcome indicates the possible operational robustness of an optimised yaw control for real-life wind farms. (paper)

  11. Statistical meandering wake model and its application to yaw-angle optimisation of wind farms

    Science.gov (United States)

    Thøgersen, E.; Tranberg, B.; Herp, J.; Greiner, M.

    2017-05-01

    The wake produced by a wind turbine is dynamically meandering and of rather narrow nature. Only when looking at large time averages, the wake appears to be static and rather broad, and is then well described by simple engineering models like the Jensen wake model (JWM). We generalise the latter deterministic models to a statistical meandering wake model (SMWM), where a random directional deflection is assigned to a narrow wake in such a way that on average it resembles a broad Jensen wake. In a second step, the model is further generalised to wind-farm level, where the deflections of the multiple wakes are treated as independently and identically distributed random variables. When carefully calibrated to the Nysted wind farm, the ensemble average of the statistical model produces the same wind-direction dependence of the power efficiency as obtained from the standard Jensen model. Upon using the JWM to perform a yaw-angle optimisation of wind-farm power output, we find an optimisation gain of 6.7% for the Nysted wind farm when compared to zero yaw angles and averaged over all wind directions. When applying the obtained JWM-based optimised yaw angles to the SMWM, the ensemble-averaged gain is calculated to be 7.5%. This outcome indicates the possible operational robustness of an optimised yaw control for real-life wind farms.
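
    A minimal sketch of the ingredients described above, for illustration only: the top-hat Jensen deficit and one random realisation of a narrow, deflected wake. The deflection distribution, its width and the narrow-wake parameters below are assumptions, not the calibrated values used for the Nysted wind farm:

    ```python
    import numpy as np

    def jensen_deficit(x, r, ct=0.8, k=0.05, r0=40.0):
        """Fractional velocity deficit of the classic (top-hat) Jensen wake at
        downstream distance x and radial offset r from the wake centreline."""
        rw = r0 + k * x                                  # wake radius grows linearly
        deficit = (1.0 - np.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2
        return np.where(np.abs(r) <= rw, deficit, 0.0)

    def meandering_deficit(x, r, rng, sigma_deg=5.0):
        """One random realisation of a narrow, deflected wake (the SMWM idea):
        the centreline is tilted by a random angle; after calibrating the
        narrow-wake parameters, averaging many realisations reproduces a
        broad Jensen-like mean wake."""
        theta = np.deg2rad(rng.normal(0.0, sigma_deg))   # random directional deflection
        return jensen_deficit(x, r - x * np.tan(theta), k=0.02)

    rng = np.random.default_rng(1)
    x, r = 400.0, 10.0
    samples = [meandering_deficit(x, r, rng) for _ in range(2000)]
    print(jensen_deficit(x, r), np.mean(samples))
    ```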

  12. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    Science.gov (United States)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.

  13. Optimisation of Hidden Markov Model using Baum–Welch algorithm

    Indian Academy of Sciences (India)

    Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava, Divya Sachdeva. Journal of Earth System Science, Volume 126, Issue 1, February 2017 ...

  14. Economic and Mathematical Modelling of Optimisation of Transaction Expenses of Engineering Enterprises

    OpenAIRE

    Makaliuk Iryna V.

    2014-01-01

    The article identifies the stages of the process of optimising transaction expenses. It develops an economic and mathematical model for optimising the transaction expenses of engineering enterprises, with maximisation of income from product sales as the criterion and a system of restrictions that requires the income growth rate to exceed the expenses growth rate. The article proposes using expense types, grouped by accounting accounts, as indicators of transaction expenses. In the result o...

  15. Particle swarm optimisation classical and quantum perspectives

    CERN Document Server

    Sun, Jun; Wu, Xiao-Jun

    2016-01-01

    Introduction; Optimisation Problems and Optimisation Methods; Random Search Techniques; Metaheuristic Methods; Swarm Intelligence; Particle Swarm Optimisation; Overview; Motivations; PSO Algorithm: Basic Concepts and the Procedure; Paradigm: How to Use PSO to Solve Optimisation Problems; Some Harder Examples; Some Variants of Particle Swarm Optimisation; Why Does the PSO Algorithm Need to Be Improved?; Inertia and Constriction-Acceleration Techniques for PSO; Local Best Model; Probabilistic Algorithms; Other Variants of PSO; Quantum-Behaved Particle Swarm Optimisation; Overview; Motivation: From Classical Dynamics to Quantum Mechanics; Quantum Model: Fundamentals of QPSO; QPSO Algorithm; Some Essential Applications; Some Variants of QPSO; Summary; Advanced Topics; Behaviour Analysis of Individual Particles; Convergence Analysis of the Algorithm; Time Complexity and Rate of Convergence; Parameter Selection and Performance; Summary; Industrial Applications; Inverse Problems for Partial Differential Equations; Inverse Problems for Non-Linear Dynamical Systems; Optimal De...
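
    For orientation, the canonical inertia-weight particle swarm update that the basic PSO chapters refer to is reproduced below in standard notation (not tied to this book's exact presentation):

    ```latex
    v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \left( p_i - x_i^{t} \right) + c_2 r_2 \left( g - x_i^{t} \right), \qquad
    x_i^{t+1} = x_i^{t} + v_i^{t+1},
    % with inertia weight w, acceleration coefficients c_1 and c_2, random numbers
    % r_1, r_2 drawn uniformly from (0,1), personal best p_i and global (or local) best g.
    ```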

  16. Optimisation of technical specifications using probabilistic methods

    International Nuclear Information System (INIS)

    Ericsson, G.; Knochenhauer, M.; Hultqvist, G.

    1986-01-01

    During the last few years the development of methods for modifying and optimising nuclear power plant Technical Specifications (TS) for plant operations has received increased attention. Probabilistic methods in general, and the plant and system models of probabilistic safety assessment (PSA) in particular, seem to provide the most powerful tools for optimisation. This paper first gives some general comments on optimisation, identifying important parameters, and then gives a description of recent Swedish experiences from the use of nuclear power plant PSA models and results for TS optimisation

  17. On the impact of optimisation models in maintenance decision making: the state of the art

    International Nuclear Information System (INIS)

    Dekker, Rommert; Scarf, Philip A.

    1998-01-01

    In this paper we discuss the state of the art in applications of maintenance optimisation models. After giving a short introduction to the area, we consider several ways in which models may be used to optimise maintenance, such as case studies, operational and strategic decision support systems, and give examples of each of them. Next we discuss several areas where the models have been applied successfully. These include civil structure and aeroplane maintenance. From a comparative point of view, we discuss future prospects

  18. The DART general equilibrium model: A technical description

    OpenAIRE

    Springer, Katrin

    1998-01-01

    This paper provides a technical description of the Dynamic Applied Regional Trade (DART) General Equilibrium Model. The DART model is a recursive dynamic, multi-region, multi-sector computable general equilibrium model. All regions are fully specified and linked by bilateral trade flows. The DART model can be used to project economic activities, energy use and trade flows for each of the specified regions to simulate various trade policy as well as environmental policy scenarios, and to analy...

  19. Optimising Shovel-Truck Fuel Consumption using Stochastic ...

    African Journals Online (AJOL)

    Optimising the fuel consumption and truck waiting time can result in significant fuel savings. The paper demonstrates that stochastic simulation is an effective tool for optimising the utilisation of fossil-based fuels in mining and related industries. Keywords: Stochastic, Simulation Modelling, Mining, Optimisation, Shovel-Truck ...

  20. Water quality modelling and optimisation of wastewater treatment network using mixed integer programming

    CSIR Research Space (South Africa)

    Mahlathi, Christopher

    2016-10-01

    Full Text Available Instream water quality management encompasses field monitoring and utilisation of mathematical models. These models can be coupled with optimisation techniques to determine more efficient water quality management alternatives. Among these activities...

  1. Non-Equilibrium Turbulence and Two-Equation Modeling

    Science.gov (United States)

    Rubinstein, Robert

    2011-01-01

    Two-equation turbulence models are analyzed from the perspective of spectral closure theories. Kolmogorov theory provides useful information for models, but it is limited to equilibrium conditions in which the energy spectrum has relaxed to a steady state consistent with the forcing at large scales; it does not describe transient evolution between such states. Transient evolution is necessarily through nonequilibrium states, which can only be found from a theory of turbulence evolution, such as one provided by a spectral closure. When the departure from equilibrium is small, perturbation theory can be used to approximate the evolution by a two-equation model. The perturbation theory also gives explicit conditions under which this model can be valid, and when it will fail. Implications of the non-equilibrium corrections for the classic Tennekes-Lumley balance in the dissipation rate equation are drawn: it is possible to establish both the cancellation of the leading-order Re^(1/2) divergent contributions to vortex stretching and enstrophy destruction, and the existence of a nonzero difference which is finite in the limit of infinite Reynolds number.
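
    For context only (this is the textbook form, not the non-equilibrium corrections derived in the paper), a standard k–ε two-equation model reads:

    ```latex
    \frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
      = P_k - \varepsilon
      + \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right],
    \qquad
    \frac{\partial \varepsilon}{\partial t} + U_j \frac{\partial \varepsilon}{\partial x_j}
      = C_{\varepsilon 1}\frac{\varepsilon}{k} P_k - C_{\varepsilon 2}\frac{\varepsilon^2}{k}
      + \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right],
    \qquad
    \nu_t = C_\mu \frac{k^2}{\varepsilon}.
    % The Tennekes-Lumley balance mentioned in the abstract concerns the source
    % terms of the dissipation-rate (epsilon) equation.
    ```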

  2. Phylogenies support out-of-equilibrium models of biodiversity.

    Science.gov (United States)

    Manceau, Marc; Lambert, Amaury; Morlon, Hélène

    2015-04-01

    There is a long tradition in ecology of studying models of biodiversity at equilibrium. These models, including the influential Neutral Theory of Biodiversity, have been successful at predicting major macroecological patterns, such as species abundance distributions. But they have failed to predict macroevolutionary patterns, such as those captured in phylogenetic trees. Here, we develop a model of biodiversity in which all individuals have identical demographic rates, metacommunity size is allowed to vary stochastically according to population dynamics, and speciation arises naturally from the accumulation of point mutations. We show that this model generates phylogenies matching those observed in nature if the metacommunity is out of equilibrium. We develop a likelihood inference framework that allows fitting our model to empirical phylogenies, and apply this framework to various mammalian families. Our results corroborate the hypothesis that biodiversity dynamics are out of equilibrium. © 2015 John Wiley & Sons Ltd/CNRS.

  3. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  4. Self-optimisation and model-based design of experiments for developing a C–H activation flow process

    Directory of Open Access Journals (Sweden)

    Alexander Echtermeyer

    2017-01-01

    Full Text Available A recently described C(sp3)–H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.

  5. On non-equilibrium states in QFT model with boundary interaction

    International Nuclear Information System (INIS)

    Bazhanov, Vladimir V.; Lukyanov, Sergei L.; Zamolodchikov, Alexander B.

    1999-01-01

    We prove that certain non-equilibrium expectation values in the boundary sine-Gordon model coincide with associated equilibrium-state expectation values in the systems which differ from the boundary sine-Gordon in that certain extra boundary degrees of freedom (q-oscillators) are added. Applications of this result to actual calculation of non-equilibrium characteristics of the boundary sine-Gordon model are also discussed

  6. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  7. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  8. Share-of-Surplus Product Line Optimisation with Price Levels

    Directory of Open Access Journals (Sweden)

    X. G. Luo

    2014-01-01

    Full Text Available Kraus and Yano (2003) established the share-of-surplus product line optimisation model and developed a heuristic procedure for this nonlinear mixed-integer optimisation model. In their model, the price of a product is defined as a continuous decision variable. However, because product line optimisation is a planning process in the early stage of product development, pricing decisions usually are not very precise. In this research, a nonlinear integer programming share-of-surplus product line optimisation model that allows the selection of candidate price levels for products is established. The model is further transformed into an equivalent linear mixed-integer optimisation model by applying linearisation techniques. Experimental results in different market scenarios show that the computation time of the transformed model is much less than that of the original model.
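
    The abstract does not give the transformation itself. A generic device that such reformulations typically rely on is the exact linearisation of a product between a binary price-level choice z and a bounded continuous term s (for example a share-of-surplus variable with known upper bound, written \bar{S} below); this is only an illustration of the standard technique, not the authors' specific model:

    ```latex
    % replace w = z \cdot s (z binary, 0 \le s \le \bar{S}) by the linear constraints
    w \le \bar{S}\,z, \qquad w \le s, \qquad w \ge s - \bar{S}\,(1 - z), \qquad w \ge 0,
    % together with \sum_{\ell} z_{j\ell} = 1 so that exactly one candidate price
    % level is selected for each product j.
    ```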

  9. Design of optimised backstepping controller for the synchronisation ...

    Indian Academy of Sciences (India)

    Ehsan Fouladi

    2017-12-18

    ... for the proposed optimised method compared to a PSO-optimised controller or any non-optimised backstepping controller. Keywords: Colpitts oscillator; backstepping controller; chaos synchronisation; shark smell algorithm; particle ... The velocity model is based on the gradient of the objective function, tilting ...

  10. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...
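
    For reference, the set partitioning model referred to above has the standard form below, with each column j representing one feasible work schedule and each row i one task or duty to be covered:

    ```latex
    \min \; \sum_{j} c_j x_j
    \quad \text{s.t.} \quad \sum_{j} a_{ij} x_j = 1 \;\; \forall i, \qquad x_j \in \{0, 1\},
    % where a_{ij} = 1 if schedule j covers task i and c_j is the cost of schedule j,
    % so every task is covered by exactly one selected schedule.
    ```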

  11. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance...... optimisation and customisable source-code generation tool (TUNE). The concept is depicted in automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...

  12. Parallel unstructured mesh optimisation for 3D radiation transport and fluids modelling

    International Nuclear Information System (INIS)

    Gorman, G.J.; Pain, Ch. C.; Oliveira, C.R.E. de; Umpleby, A.P.; Goddard, A.J.H.

    2003-01-01

    In this paper we describe the theory and application of a parallel mesh optimisation procedure to obtain self-adapting finite element solutions on unstructured tetrahedral grids. The optimisation procedure adapts the tetrahedral mesh to the solution of a radiation transport or fluid flow problem without sacrificing the integrity of the boundary (geometry), or internal boundaries (regions) of the domain. The objective is to obtain a mesh which has a uniform interpolation error in any direction and elements of good shape quality. This is accomplished with use of a non-Euclidean (anisotropic) metric which is related to the Hessian of the solution field. Appropriate scaling of the metric enables the resolution of multi-scale phenomena as encountered in transient incompressible fluids and multigroup transport calculations. The resulting metric is used to calculate element size and shape quality. The mesh optimisation method is based on a series of mesh connectivity and node position searches of the landscape defining mesh quality which is gauged by a functional. The mesh modification thus fits the solution field(s) in an optimal manner. The parallel mesh optimisation/adaptivity procedure presented in this paper is of general applicability. We illustrate this by applying it to a transient CFD (computational fluid dynamics) problem. Incompressible flow past a cylinder at moderate Reynolds numbers is modelled to demonstrate that the mesh can follow transient flow features. (authors)
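    A common way to build such a Hessian-based anisotropic metric is from the eigen-decomposition of the solution Hessian; the sketch below uses an illustrative error tolerance and edge-length bounds, not the authors' exact scaling.

```python
# Sketch of a Hessian-based anisotropic metric (illustrative scaling only):
# eigenvalues of the Hessian are rescaled and bounded so that the desired
# interpolation error is roughly uniform in every direction.
import numpy as np

def hessian_metric(H, eps=0.01, h_min=1e-3, h_max=1.0):
    """Return a symmetric positive-definite metric tensor built from Hessian H."""
    H = 0.5 * (H + H.T)                                  # symmetrise
    lam, V = np.linalg.eigh(H)
    lam = np.abs(lam) / eps                              # target interpolation error eps
    lam = np.clip(lam, 1.0 / h_max**2, 1.0 / h_min**2)   # bound admissible edge lengths
    return V @ np.diag(lam) @ V.T

# The desired edge length along a unit direction d is 1 / sqrt(d^T M d).
H = np.array([[40.0, 5.0, 0.0], [5.0, 2.0, 0.0], [0.0, 0.0, 0.1]])
M = hessian_metric(H)
d = np.array([1.0, 0.0, 0.0])
print("edge length along x:", 1.0 / np.sqrt(d @ M @ d))
```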

  13. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    1997-10-01

    Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  14. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  15. Finite element model updating in structural dynamics using design sensitivity and optimisation

    OpenAIRE

    Calvi, Adriano

    1998-01-01

    Model updating is an important issue in engineering. In fact a well-correlated model provides for accurate evaluation of the structure loads and responses. The main objectives of the study were to exploit available optimisation programs to create an error localisation and updating procedure of finite element models that minimises the "error" between experimental and analytical modal data, addressing in particular the updating of large scale finite element models with se...

  16. Electricity market equilibrium model with resource constraint and transmission congestion

    Energy Technology Data Exchange (ETDEWEB)

    Gao, F. [ABB, Inc., Santa Clara, CA 95050 (United States); Sheble, G.B. [Portland State University, Portland, OR 97207 (United States)

    2010-01-15

    An electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also gives Market Participants the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models and much previous work has been devoted to it. However, most past research focused on a single-period, single-market model and did not address the fact that GENCOs hold a portfolio of assets in both electricity and fuel markets. This paper first identifies a proper SFE model, which can be applied to a multiple-period situation. Then the paper develops the equilibrium condition using discrete-time optimal control considering fuel resource constraints. Finally, the paper discusses the issue of multiple equilibria caused by the transmission network and shows that a transmission-constrained equilibrium may exist, although the shadow price may not be zero. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. (author)
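    For readers unfamiliar with SFE, the underlying first-order condition is usually written in the Klemperer–Meyer form shown below (single period, no fuel constraints; the paper's multi-period, resource-constrained conditions extend this):

$$
S_i(p) \;=\; \bigl(p - C_i'(S_i(p))\bigr)\Bigl(\sum_{j\ne i} S_j'(p) \;-\; D'(p)\Bigr), \qquad i = 1,\dots,n,
$$

    where $S_i(p)$ is firm $i$'s supply function, $C_i$ its cost function and $D(p)$ the demand. For $n > 2$ symmetric firms with linear marginal cost $C'(q) = a + bq$ and price-insensitive demand, the linear solution $S(p) = \beta(p - a)$ has slope $\beta = (n-2)/\bigl((n-1)\,b\bigr)$.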

  17. Electricity market equilibrium model with resource constraint and transmission congestion

    International Nuclear Information System (INIS)

    Gao, F.; Sheble, G.B.

    2010-01-01

    An electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also gives Market Participants the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models and much previous work has been devoted to it. However, most past research focused on a single-period, single-market model and did not address the fact that GENCOs hold a portfolio of assets in both electricity and fuel markets. This paper first identifies a proper SFE model, which can be applied to a multiple-period situation. Then the paper develops the equilibrium condition using discrete-time optimal control considering fuel resource constraints. Finally, the paper discusses the issue of multiple equilibria caused by the transmission network and shows that a transmission-constrained equilibrium may exist, although the shadow price may not be zero. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. (author)

  18. Optimisation of load control

    International Nuclear Information System (INIS)

    Koponen, P.

    1998-01-01

    Electricity cannot be stored in large quantities. That is why the electricity supply and consumption are always almost equal in large power supply systems. If this balance were disturbed beyond stability, the system or a part of it would collapse until a new stable equilibrium is reached. The balance between supply and consumption is mainly maintained by controlling the power production, but the electricity consumption or, in other words, the load is also controlled. Controlling the load of the power supply system is important if easily controllable power production capacity is limited. A temporary shortage of capacity causes high peaks in the energy price in the electricity market. Load control either reduces the electricity consumption during peak consumption and peak price or moves electricity consumption to some other time. The project Optimisation of Load Control is a part of the EDISON research program for distribution automation. The following areas were studied: optimisation of space heating and ventilation when the electricity price is time-variable; a load control model in power purchase optimisation; optimisation of direct load control sequences; the interaction between load control optimisation and power purchase optimisation; the literature on load control, optimisation methods, and field tests and response models of direct load control; and the effects of the electricity market deregulation on load control. An overview of the main results is given in this chapter
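    As a toy illustration of price-responsive load control of the kind studied in the project (hypothetical prices and load, not the project's optimisation models), a deferrable block of consumption can simply be scheduled into the cheapest admissible window:

```python
# Shift a deferrable block of consumption to the cheapest contiguous window of
# the day, subject to a deadline (illustrative data only).
prices = [30, 28, 25, 22, 21, 24, 35, 55, 60, 58, 50, 45,
          42, 40, 41, 44, 52, 65, 70, 62, 48, 40, 34, 31]   # price per hour (hypothetical)

block_hours = 3        # the deferrable load runs for 3 consecutive hours
deadline = 18          # it must finish by hour 18

def best_start(prices, block_hours, deadline):
    candidates = range(0, deadline - block_hours + 1)
    return min(candidates, key=lambda s: sum(prices[s:s + block_hours]))

start = best_start(prices, block_hours, deadline)
cost = sum(prices[start:start + block_hours])
print(f"run load from hour {start} to {start + block_hours - 1}, cost index {cost}")
```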

  19. Optimisation of load control

    Energy Technology Data Exchange (ETDEWEB)

    Koponen, P [VTT Energy, Espoo (Finland)

    1998-08-01

    Electricity cannot be stored in large quantities. That is why the electricity supply and consumption are always almost equal in large power supply systems. If this balance were disturbed beyond stability, the system or a part of it would collapse until a new stable equilibrium is reached. The balance between supply and consumption is mainly maintained by controlling the power production, but the electricity consumption or, in other words, the load is also controlled. Controlling the load of the power supply system is important if easily controllable power production capacity is limited. A temporary shortage of capacity causes high peaks in the energy price in the electricity market. Load control either reduces the electricity consumption during peak consumption and peak price or moves electricity consumption to some other time. The project Optimisation of Load Control is a part of the EDISON research program for distribution automation. The following areas were studied: optimisation of space heating and ventilation when the electricity price is time-variable; a load control model in power purchase optimisation; optimisation of direct load control sequences; the interaction between load control optimisation and power purchase optimisation; the literature on load control, optimisation methods, and field tests and response models of direct load control; and the effects of the electricity market deregulation on load control. An overview of the main results is given in this chapter

  20. The lagRST Model: A Turbulence Model for Non-Equilibrium Flows

    Science.gov (United States)

    Lillard, Randolph P.; Oliver, A. Brandon; Olsen, Michael E.; Blaisdell, Gregory A.; Lyrintzis, Anastasios S.

    2011-01-01

    This study presents a new class of turbulence model designed for wall bounded, high Reynolds number flows with separation. The model addresses deficiencies seen in the modeling of nonequilibrium turbulent flows. These flows generally have variable adverse pressure gradients which cause the turbulent quantities to react at a finite rate to changes in the mean flow quantities. This "lag" in the response of the turbulent quantities cannot be modeled by most standard turbulence models, which are designed to model equilibrium turbulent boundary layers. The model presented uses a standard 2-equation model as the baseline for turbulent equilibrium calculations, but adds transport equations to account directly for non-equilibrium effects in the Reynolds Stress Tensor (RST) that are seen in large pressure gradients involving shock waves and separation. Comparisons are made to several standard turbulence modeling validation cases, including an incompressible boundary layer (both neutral and adverse pressure gradients), an incompressible mixing layer and a transonic bump flow. In addition, a hypersonic Shock Wave Turbulent Boundary Layer Interaction with separation is assessed along with a transonic capsule flow. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI flows assessed. Separation predictions are not as good as the baseline models, but the over-prediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.
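    Schematically, lag-type models of this kind relax a modelled quantity toward its equilibrium (baseline two-equation) value at a finite rate; a generic form, shown only for orientation and not the paper's exact closure, is

$$
\frac{D R_{ij}}{D t} \;=\; a_0\,\omega\,\bigl(R_{ij}^{\mathrm{eq}} - R_{ij}\bigr),
\qquad
R_{ij}^{\mathrm{eq}} \;=\; \tfrac{2}{3}\,k\,\delta_{ij} \;-\; 2\,\nu_t\,\bar{S}_{ij},
$$

    where $R_{ij}$ is the lagged Reynolds stress tensor, $k$ and $\omega$ come from the baseline two-equation model, $\bar{S}_{ij}$ is the mean strain rate and $a_0$ is a lag constant controlling how quickly the stresses follow the mean flow.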

  1. Feasibility of the use of optimisation techniques to calibrate the models used in a post-closure radiological assessment

    International Nuclear Information System (INIS)

    Laundy, R.S.

    1991-01-01

    This report addresses the feasibility of the use of optimisation techniques to calibrate the models developed for the impact assessment of a radioactive waste repository. The maximum likelihood method for improving parameter estimates is considered in detail, and non-linear optimisation techniques for finding solutions are reviewed. Applications are described for the calibration of groundwater flow, radionuclide transport and biosphere models. (author)
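    As a generic illustration of maximum-likelihood calibration with a non-linear optimiser (placeholder model and synthetic data, not the assessment codes discussed in the report):

```python
# Fit parameters of a simple exponential attenuation model to noisy
# observations by minimising the negative log-likelihood (Gaussian errors).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 30)
true_theta = (5.0, 0.4)                        # amplitude, decay rate (synthetic truth)
sigma = 0.2
y_obs = true_theta[0] * np.exp(-true_theta[1] * x) + rng.normal(0.0, sigma, x.size)

def neg_log_likelihood(theta):
    amp, k = theta
    resid = y_obs - amp * np.exp(-k * x)
    return 0.5 * np.sum(resid**2) / sigma**2    # up to an additive constant

result = minimize(neg_log_likelihood, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print("calibrated parameters:", result.x)
```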

  2. A Comparison of the Computation Times of Thermal Equilibrium and Non-equilibrium Models of Droplet Field in a Two-Fluid Three-Field Model

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik Kyu; Cho, Heong Kyu; Kim, Jong Tae; Yoon, Han Young; Jeong, Jae Jun

    2007-12-15

    A computational model for transient, 3 dimensional 2 phase flows was developed by using an 'unstructured-FVM-based, non-staggered, semi-implicit numerical scheme' considering thermally non-equilibrium droplets. The assumption of thermal equilibrium between liquid and droplets made in previous studies was no longer used, and three energy conservation equations, for vapor, liquid and liquid droplets, were set up. Thus, 9 conservation equations for mass, momentum, and energy were established to simulate 2 phase flows. In this report, the governing equations and a semi-implicit numerical scheme for transient 1 dimensional 2 phase flows are described, considering the thermal non-equilibrium between liquid and liquid droplets. A comparison with the previous model, which assumed thermal equilibrium between liquid and liquid droplets, is also reported.

  3. Techno-Economic Models for Optimised Utilisation of Jatropha curcas Linnaeus under an Out-Grower Farming Scheme in Ghana

    Directory of Open Access Journals (Sweden)

    Isaac Osei

    2016-11-01

    Full Text Available Techno-economic models for optimised utilisation of jatropha oil under an out-grower farming scheme were developed based on different considerations for oil and by-product utilisation. Model 1: Out-grower scheme where oil is exported and press cake utilised for compost. Model 2: Out-grower scheme with six scenarios considered for the utilisation of oil and by-products. Linear programming models were developed based on outcomes of the models to optimise the use of the oil through profit maximisation. The findings revealed that Model 1 was financially viable from the processors' perspective but not for the farmer at a seed price of $0.07/kg. All scenarios considered under Model 2 were financially viable from the processors' perspective but not for the farmer at a seed price of $0.07/kg; however, at a seed price of $0.085/kg, financial viability was achieved for both parties. Optimising the utilisation of the oil resulted in an annual maximum profit of $123,300.

  4. Feeder Type Optimisation for the Plain Flow Discharge Process of an Underground Hopper by Discrete Element Modelling

    OpenAIRE

    Jan Nečas; Jakub Hlosta; David Žurovec; Martin Žídek; Jiří Rozbroj; Jiří Zegzulka

    2017-01-01

    This paper describes the optimisation of a conveyor from an underground hopper intended for a coal transfer station. The original solution, designed with a chain conveyor, encountered operational problems that limited its continuous operation. Discrete Element Modelling (DEM) was chosen to optimise the transport. DEM simulations allow device design modifications directly in the 3D CAD model, and then the simulation makes it possible to evaluate whether the adjustment was successful. By...

  5. An exergy-based multi-objective optimisation model for energy retrofit strategies in non-domestic buildings

    International Nuclear Information System (INIS)

    García Kerdan, Iván; Raslan, Rokia; Ruyssevelt, Paul

    2016-01-01

    While the building sector has a significant thermodynamic improvement potential, exergy analysis has been shown to provide new insight for the optimisation of building energy systems. This paper presents an exergy-based multi-objective optimisation tool that aims to assess the impact of a diverse range of retrofit measures with a focus on non-domestic buildings. EnergyPlus was used as a dynamic calculation engine for first law analysis, while a Python add-on was developed to link dynamic exergy analysis and a Genetic Algorithm optimisation process with the aforementioned software. Two UK archetype case studies (an office and a primary school) were used to test the feasibility of the proposed framework. Different combinations of measures based on retrofitting the envelope insulation levels and the application of different HVAC configurations were assessed. The objective functions in this study are annual energy use, occupants' thermal comfort, and total building exergy destructions. A large range of optimal solutions was achieved, highlighting the framework's capabilities. The model achieved improvements of 53% in annual energy use, 51% in exergy destructions and 66% in thermal comfort for the school building, and 50%, 33%, and 80% for the office building. This approach can be extended by using exergoeconomic optimisation. - Highlights: • Integration of dynamic exergy analysis into a retrofit-oriented simulation tool. • Two UK non-domestic building archetypes are used as case studies. • The model delivers non-dominated solutions based on energy, exergy and comfort. • Exergy destructions of ERMs are optimised using GA algorithms. • Strengths and limitations of the proposed exergy-based framework are discussed.

  6. Equilibrium Price Dispersion in a Matching Model with Divisible Money

    NARCIS (Netherlands)

    Kamiya, K.; Sato, T.

    2002-01-01

    The main purpose of this paper is to show that, for any given parameter values, an equilibrium with dispersed prices (two-price equilibrium) exists in a simple matching model with divisible money presented by Green and Zhou (1998). We also show that our two-price equilibrium is unique in certain

  7. Risk based test interval and maintenance optimisation - Application and uses

    International Nuclear Information System (INIS)

    Sparre, E.

    1999-10-01

    The project is part of an IAEA co-ordinated Research Project (CRP) on 'Development of Methodologies for Optimisation of Surveillance Testing and Maintenance of Safety Related Equipment at NPPs'. The purpose of the project is to investigate the sensitivity of the results obtained when performing risk based optimisation of the technical specifications. Previous projects have shown that complete LPSA models can be created and that these models allow optimisation of technical specifications. However, these optimisations did not include any in-depth check of the result sensitivity with regard to methods, model completeness etc. Four different test intervals have been investigated in this study. Aside from an original, nominal optimisation, a set of sensitivity analyses has been performed and the results from these analyses have been compared to the original optimisation. The analyses indicate that the result of an optimisation is rather stable. However, it is not possible to draw any certain conclusions without performing a number of sensitivity analyses. Significant differences in the optimisation result were discovered when analysing an alternative configuration. Also, deterministic uncertainties seem to affect the result of an optimisation significantly. The sensitivity of failure data uncertainties is important to investigate in detail since the methodology is based on the assumption that the unavailability of a component is dependent on the length of the test interval

  8. Topology optimisation of natural convection problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe

    2014-01-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations...... coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences...... in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach...
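    For orientation, the Brinkman approach mentioned above typically enters the momentum balance as a design-dependent friction term, with a convex interpolation of the inverse permeability; a commonly used form (not necessarily the exact one used in the paper) is

$$
\rho\,(\mathbf{u}\cdot\nabla)\mathbf{u}
= -\nabla p
+ \nabla\!\cdot\!\bigl[\mu\bigl(\nabla\mathbf{u} + \nabla\mathbf{u}^{\top}\bigr)\bigr]
- \alpha(\gamma)\,\mathbf{u}
+ \rho\,\beta\,(T - T_0)\,\mathbf{g},
\qquad
\alpha(\gamma) = \alpha_{\max} + (\alpha_{\min}-\alpha_{\max})\,\gamma\,\frac{1+q}{\gamma+q},
$$

    where $\gamma\in[0,1]$ is the design field ($\gamma=1$ fluid, $\gamma=0$ solid), $\alpha_{\max}$ is a large friction coefficient that suppresses flow in solid regions, and $q$ is a convexity parameter; the effective thermal conductivity is interpolated analogously between the solid and fluid values.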

  9. Selecting a climate model subset to optimise key ensemble properties

    Directory of Open Access Journals (Sweden)

    N. Herger

    2018-02-01

    Full Text Available End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
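    A minimal sketch of the sub-selection idea, with synthetic data and a deliberately crude brute force (the published tool supports richer cost functions such as spread and model independence, and smarter solvers):

```python
# Pick the subset of k models whose equally weighted mean has the smallest
# RMSE against an observational reference (illustrative only).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_models, n_gridpoints, k = 8, 200, 3
obs = rng.normal(0.0, 1.0, n_gridpoints)
models = obs + rng.normal(0.3, 0.8, (n_models, n_gridpoints))   # biased ensemble members

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

best = min(combinations(range(n_models), k),
           key=lambda idx: rmse(models[list(idx)].mean(axis=0), obs))
print("selected members:", best,
      "subset-mean RMSE:", round(rmse(models[list(best)].mean(axis=0), obs), 3),
      "full-ensemble RMSE:", round(rmse(models.mean(axis=0), obs), 3))
```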

  10. Selecting a climate model subset to optimise key ensemble properties

    Science.gov (United States)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.

  11. Optimisation in radiotherapy II: Programmed and inversion optimisation algorithms

    International Nuclear Information System (INIS)

    Ebert, M.

    1997-01-01

    This is the second article in a three-part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy for searching for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered - those associated with mathematical programming, which employ specific search techniques, linear programming type searches or artificial intelligence - and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. (author)

  12. Modeling of two-phase flow with thermal and mechanical non-equilibrium

    International Nuclear Information System (INIS)

    Houdayer, G.; Pinet, B.; Le Coq, G.; Reocreux, M.; Rousseau, J.C.

    1977-01-01

    To improve two-phase flow modeling by taking into account thermal and mechanical non-equilibrium, a joint effort on analytical experiments and physical modeling has been undertaken. A model describing thermal non-equilibrium effects is first presented. A correlation of mass transfer has been developed using steam-water critical flow tests. This model has been used to predict blowdown tests in a satisfactory manner. It has been incorporated in the CLYSTERE system code. To take into account mechanical non-equilibrium, a six-equation model is written. To get information on the momentum transfers, special nitrogen-water tests have been undertaken. The first results of these studies are presented

  13. Multicriteria Optimisation in Logistics Forwarder Activities

    Directory of Open Access Journals (Sweden)

    Tanja Poletan Jugović

    2007-05-01

    Full Text Available Logistics forwarder, as organizer and planner of the coordination and integration of all the transport and logistics chain elements, uses adequate ways and methods in the process of planning and decision-making. One of these methods, analysed in this paper, which could be used in the optimisation of transport and logistics processes and activities of the logistics forwarder, is the multicriteria optimisation method. Using that method, this paper suggests a model of multicriteria optimisation of logistics forwarder activities. The suggested model of optimisation is justified in keeping with the method principles of multicriteria optimisation, which belongs to the operations research methods and represents the process of multicriteria optimisation of variants. Among many different processes of multicriteria optimisation, PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) and Promcalc & Gaia V. 3.2, a computer program of multicriteria programming based on the mentioned process, were used.
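    A compact sketch of the PROMETHEE II calculation, with hypothetical forwarder alternatives, criteria and weights (the "usual" preference function is used for brevity; Promcalc/GAIA support several):

```python
# PROMETHEE II outranking sketch: pairwise preferences, positive/negative
# flows, and ranking by net flow (illustrative data only).
import numpy as np

alternatives = ["route A", "route B", "route C"]
# columns: cost (minimise), transit time (minimise), reliability (maximise)
scores = np.array([[100.0, 48.0, 0.90],
                   [ 80.0, 60.0, 0.85],
                   [ 90.0, 52.0, 0.95]])
weights = np.array([0.4, 0.3, 0.3])
maximise = np.array([False, False, True])

# Orient all criteria so that "larger is better".
oriented = np.where(maximise, scores, -scores)

n = len(alternatives)
pi = np.zeros((n, n))                         # aggregated preference of a over b
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        pref = (oriented[a] > oriented[b]).astype(float)   # "usual" preference function
        pi[a, b] = np.dot(weights, pref)

phi_plus = pi.sum(axis=1) / (n - 1)           # positive outranking flow
phi_minus = pi.sum(axis=0) / (n - 1)          # negative outranking flow
phi_net = phi_plus - phi_minus

for name, phi in sorted(zip(alternatives, phi_net), key=lambda t: -t[1]):
    print(f"{name}: net flow {phi:+.3f}")
```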

  14. Models of supply function equilibrium with applications to the electricity industry

    Science.gov (United States)

    Aromi, J. Daniel

    Electricity market design requires tools that result in a better understanding of the incentives of generators and consumers. Chapters 1 and 2 provide tools, and applications of these tools, to analyze incentive problems in electricity markets. In chapter 1, models of supply function equilibrium (SFE) with asymmetric bidders are studied. I prove the existence and uniqueness of equilibrium in an asymmetric SFE model. In addition, I propose a simple algorithm to calculate the unique equilibrium numerically. As an application, a model of investment decisions is considered that uses the asymmetric SFE as an input. In this model, firms can invest in different technologies, each characterized by distinct variable and fixed costs. In chapter 2, option contracts are introduced to a supply function equilibrium (SFE) model. The uniqueness of the equilibrium in the spot market is established. Comparative statics results on the effect of option contracts on the equilibrium price are presented. A multi-stage game where option contracts are traded before the spot market stage is considered. When contracts are optimally procured by a central authority, the selected profile of option contracts is such that the spot market price equals marginal cost for any load level, resulting in a significant reduction in cost. If load serving entities (LSEs) are price takers, in equilibrium, there is no trade of option contracts. Even when LSEs have market power, the central authority's solution cannot be implemented in equilibrium. In chapter 3, we consider a game in which a buyer must repeatedly procure an input from a set of firms. In our model, the buyer is able to sign long-term contracts that establish the likelihood with which the next period contract is awarded to an entrant or the incumbent. We find that the buyer finds it optimal to favor the incumbent; this generates more intense competition between suppliers. In a two period model we are able to completely characterize the optimal mechanism.

  15. The rational expectations equilibrium inventory model theory and applications

    CERN Document Server

    1989-01-01

    This volume consists of six essays that develop and/or apply "rational expectations equilibrium inventory models" to study the time series behavior of production, sales, prices, and inventories at the industry level. By "rational expectations equilibrium inventory model" I mean the extension of the inventory model of Holt, Modigliani, Muth, and Simon (1960) to account for: (i) discounting, (ii) infinite horizon planning, (iii) observed and unobserved by the "econometrician" stochastic shocks in the production, factor adjustment, storage, and backorders management processes of firms, as well as in the demand they face for their products; and (iv) rational expectations. As is well known according to the Holt et al. model firms hold inventories in order to: (a) smooth production, (b) smooth production changes, and (c) avoid stockouts. Following the work of Zabel (1972), Maccini (1976), Reagan (1982), and Reagan and Weitzman (1982), Blinder (1982) laid the foundations of the rational expectations equilibrium inve...

  16. Topology Optimisation for Coupled Convection Problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    This thesis deals with topology optimisation for coupled convection problems. The aim is to extend and apply topology optimisation to steady-state conjugate heat transfer problems, where the heat conduction equation governs the heat transfer in a solid and is coupled to thermal transport...... in a surrounding fluid, governed by a convection-diffusion equation, where the convective velocity field is found from solving the isothermal incompressible steady-state Navier-Stokes equations. Topology optimisation is also applied to steady-state natural convection problems. The modelling is done using stabilised...... finite elements, the formulation and implementation of which was done partly during a special course as preparatory work for this thesis. The formulation is extended with a Brinkman friction term in order to facilitate the topology optimisation of fluid flow and convective cooling problems. The derived

  17. Generation of safe optimised execution strategies for uml models

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Herbert-Hansen, Zaza Nadja Lee

    When designing safety critical systems there is a need for verification of safety properties while ensuring system operations have a specific performance profile. We present a novel application of model checking to derive execution strategies, sequences of decisions at workflow branch points...... which optimise a set of reward variables, while simultaneously observing constraints which encode any required safety properties and accounting for the underlying stochastic nature of the system. By evaluating quantitative properties of the generated adversaries we are able to construct an execution...

  18. Thermochemical equilibrium modelling of a gasifying process

    International Nuclear Information System (INIS)

    Melgar, Andres; Perez, Juan F.; Laget, Hannes; Horillo, Alfonso

    2007-01-01

    This article discusses a mathematical model for the thermochemical processes in a downdraft biomass gasifier. The model combines the chemical equilibrium and the thermodynamic equilibrium of the global reaction, predicting the final composition of the producer gas as well as its reaction temperature. Once the composition of the producer gas is obtained, a range of parameters can be derived, such as the cold gas efficiency of the gasifier, the amount of dissociated water in the process and the heating value and engine fuel quality of the gas. The model has been validated experimentally. This work includes a parametric study of the influence of the gasifying relative fuel/air ratio and the moisture content of the biomass on the characteristics of the process and the producer gas composition. The model helps to predict the behaviour of different biomass types and is a useful tool for optimizing the design and operation of downdraft biomass gasifiers
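    Equilibrium gasifier models of this type are commonly closed with a single global reaction plus two equilibrium constants; a typical formulation, shown here for orientation only (details may differ from the paper), is

$$
\mathrm{CH}_a\mathrm{O}_b\mathrm{N}_c \;+\; w\,\mathrm{H_2O} \;+\; m\,(\mathrm{O_2} + 3.76\,\mathrm{N_2})
\;\longrightarrow\;
x_1\,\mathrm{H_2} + x_2\,\mathrm{CO} + x_3\,\mathrm{CO_2} + x_4\,\mathrm{H_2O} + x_5\,\mathrm{CH_4} + x_6\,\mathrm{N_2},
$$

    closed by the C, H, O and N balances together with the equilibrium constants of the water-gas shift and methane formation reactions,

$$
K_{\mathrm{wgs}}(T) = \frac{y_{\mathrm{CO_2}}\,y_{\mathrm{H_2}}}{y_{\mathrm{CO}}\,y_{\mathrm{H_2O}}},
\qquad
K_{\mathrm{meth}}(T) = \frac{y_{\mathrm{CH_4}}}{y_{\mathrm{H_2}}^{2}}\left(\frac{P}{P_0}\right)^{-1},
$$

    with the reaction temperature fixed by an overall energy balance on the gasifier; the resulting mole fractions $y_i$ then give the producer gas composition, its heating value and the cold gas efficiency.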

  19. Non-equilibrium scaling analysis of the Kondo model with voltage bias

    International Nuclear Information System (INIS)

    Fritsch, Peter; Kehrein, Stefan

    2009-01-01

    The quintessential description of Kondo physics in equilibrium is obtained within a scaling picture that shows the buildup of Kondo screening at low temperature. For the non-equilibrium Kondo model with a voltage bias, the key new feature is the decoherence caused by the current across the impurity. In the present paper, we show how one can develop a consistent framework for studying the non-equilibrium Kondo model within a scaling picture of infinitesimal unitary transformations (flow equations). Decoherence effects appear naturally in third order of the β-function and dominate the Hamiltonian flow for sufficiently large voltage bias. We work out the spin dynamics in non-equilibrium and compare it with finite temperature equilibrium results. In particular, we report on the behavior of the static spin susceptibility including leading logarithmic corrections and compare it with the celebrated equilibrium result as a function of temperature.

  20. An Optimisation Approach for Room Acoustics Design

    DEFF Research Database (Denmark)

    Holm-Jørgensen, Kristian; Kirkegaard, Poul Henning; Andersen, Lars

    2005-01-01

    This paper discusses, on a conceptual level, the value of optimisation techniques in architectural acoustics room design from a practical point of view. A single room acoustics design criterion, estimated from the sound field inside the room, is chosen as the optimisation objective. The sound field is modeled...... using the boundary element method where absorption is incorporated. An example is given where the geometry of a room is defined by four design modes. The room geometry is optimised to get a uniform sound pressure....

  1. A joint spare part and maintenance inspection optimisation model using the Delay-Time concept

    International Nuclear Information System (INIS)

    Wang Wenbin

    2011-01-01

    Spare parts and maintenance are closely related logistics activities where maintenance generates the need for spare parts. When preventive maintenance is present, it may need more spare parts at one time because of the planned preventive maintenance activities. This paper considers the joint optimisation of three decision variables, i.e., the ordering quantity, ordering interval and inspection interval. The model is constructed using the well-known Delay-Time concept where the failure process is divided into a two-stage process. The objective function is the long run expected cost per unit time in terms of the three decision variables to be optimised. Here we use a block-based inspection policy where all components are inspected at the same time regardless of the ages of the components. This creates a situation in which the time to failure since the immediate previous inspection is random and has to be modelled by a distribution. This time is called the forward time, and a limiting but closed form of this distribution is obtained. We develop an algorithm for the optimal solution of the decision process using a combination of analytical and enumeration approaches. The model is demonstrated by a numerical example. - Highlights: → Joint optimisation of maintenance and spare part inventory. → The use of the Delay-Time concept. → Block-based inspection. → Fixed order interval but variable order quantity.
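    A structure-only sketch of the enumeration side of such a joint optimisation is given below; the cost function is a crude placeholder standing in for the paper's delay-time derived long-run expected cost, not the actual model.

```python
# Enumerate candidate order intervals, order quantities and inspection
# intervals; keep the combination with the lowest long-run cost rate.
from itertools import product

def expected_cost_rate(order_interval, order_qty, inspect_interval,
                       holding=0.5, order_cost=50.0, inspect_cost=20.0,
                       shortage=400.0, demand_rate=0.8, defect_rate=0.05):
    # Placeholder cost terms (NOT the delay-time cost model):
    holding_cost = holding * order_qty / 2.0
    ordering_cost = order_cost / order_interval
    inspection_cost = inspect_cost / inspect_interval
    # Stock-out risk grows with the order interval, undetected-defect risk
    # grows with the inspection interval.
    shortage_risk = shortage * max(0.0, demand_rate * order_interval - order_qty) \
        / order_interval
    failure_risk = shortage * defect_rate * inspect_interval
    return holding_cost + ordering_cost + inspection_cost + shortage_risk + failure_risk

grid = product(range(5, 61, 5),      # order interval (days)
               range(5, 41, 5),      # order quantity (units)
               range(1, 15))         # inspection interval (days)
best = min(grid, key=lambda v: expected_cost_rate(*v))
print("order interval, quantity, inspection interval:", best,
      "cost rate:", round(expected_cost_rate(*best), 2))
```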

  2. Choking flow modeling with mechanical and thermal non-equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, H.J.; Ishii, M.; Revankar, S.T. [School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2006-01-15

    A mechanistic model for two-phase choking flow, which considers mechanical and thermal non-equilibrium, is described. The choking mass flux is obtained from the momentum equation with the definition of choking. The key parameter for the mechanical non-equilibrium is the slip ratio, and the parameters on which it depends are identified. In this research, the slip ratio as defined in the drift flux model is used to identify the parameters that affect it. Because the slip ratio in the drift flux model is related to the distribution parameter and the drift velocity, adequate correlations depending on the flow regime are introduced in this study. For the thermal non-equilibrium, the model is developed with a bubble conduction time and a Bernoulli choking model. In the case of water highly subcooled relative to the inlet pressure, the Bernoulli choking model using the pressure undershoot is used because there is no bubble generation in the test section. When a phase change happens inside the test section, the two-phase choking model with relaxation time calculates the choking mass flux. The comparison of model predictions with experimental data shows good agreement. The developed model gives good predictions in both low and high pressure ranges. (author)

  3. A methodological approach to the design of optimising control strategies for sewer systems

    DEFF Research Database (Denmark)

    Mollerup, Ane Loft; Mikkelsen, Peter Steen; Sin, Gürkan

    2016-01-01

    This study focuses on designing an optimisation-based control for a sewer system in a methodological way and linking it to a regulatory control. Optimisation-based design is found to depend on a proper choice of model, the formulation of the objective function and the tuning of optimisation parameters. Accordingly, two novel optimisation configurations are developed, where the optimisation either acts on the actuators or acts on the regulatory control layer. These two optimisation designs are evaluated on a sub-catchment of the sewer system in Copenhagen, and found to perform better than the existing...

  4. Plasma equilibrium response modelling and validation on JT-60U

    International Nuclear Information System (INIS)

    Lister, J.B.; Sharma, A.; Limebeer, D.J.N.; Wainwright, J.P.; Nakamura, Y.; Yoshino, R.

    2002-01-01

    A systematic procedure to identify the plasma equilibrium response to the poloidal field coil voltages has been applied to the JT-60U tokamak. The required response was predicted with a high accuracy by a state-space model derived from first principles. The ab initio derivation of linearized plasma equilibrium response models is re-examined using an approach standard in analytical mechanics. A symmetric formulation is naturally obtained, removing a previous weakness in such models. RZIP, a rigid current distribution model, is re-derived using this approach and is compared with the new experimental plasma equilibrium response data obtained from Ohmic and neutral beam injection discharges in the JT-60U tokamak. In order to remove any bias from the comparison between modelled and measured plasma responses, the electromagnetic response model without plasma was first carefully tuned against experimental data, using a parametric approach, for which different cost functions for quantifying model agreement were explored. This approach additionally provides new indications of the accuracy to which various plasma parameters are known, and to the ordering of physical effects. Having taken these precautions when tuning the plasmaless model, an empirical estimate of the plasma self-inductance, the plasma resistance and its radial derivative could be established and compared with initial assumptions. Off-line tuning of the JT-60U controller is presented as an example of the improvements which might be obtained by using such a model of the plasma equilibrium response. (author)

  5. Two-temperature chemically non-equilibrium modelling of transferred arcs

    International Nuclear Information System (INIS)

    Baeva, M; Kozakov, R; Gorchakov, S; Uhrlandt, D

    2012-01-01

    A two-temperature chemically non-equilibrium model describing in a self-consistent manner the heat transfer, the plasma chemistry, the electric and magnetic field in a high-current free-burning arc in argon has been developed. The model is aimed at unifying the description of a thermionic tungsten cathode, a flat copper anode, and the arc plasma including the electrode sheath regions. The heat transfer in the electrodes is coupled to the plasma heat transfer considering the energy fluxes onto the electrode boundaries with the plasma. The results of the non-equilibrium model for an arc current of 200 A and an argon flow rate of 12 slpm are presented along with results obtained from a model based on the assumption of local thermodynamic equilibrium (LTE) and from optical emission spectroscopy. The plasma shows a near-LTE behaviour along the arc axis and in a region surrounding the axis which becomes wider towards the anode. In the near-electrode regions, a large deviation from LTE is observed. The results are in good agreement with experimental findings from optical emission spectroscopy. (paper)

  6. Learning of Chemical Equilibrium through Modelling-Based Teaching

    Science.gov (United States)

    Maia, Poliana Flavia; Justi, Rosaria

    2009-01-01

    This paper presents and discusses students' learning process of chemical equilibrium from a modelling-based approach developed from the use of the "Model of Modelling" diagram. The investigation was conducted in a regular classroom (students 14-15 years old) and aimed at discussing how modelling-based teaching can contribute to students…

  7. Two-temperature chemically non-equilibrium modelling of an air supersonic ICP

    Energy Technology Data Exchange (ETDEWEB)

    El Morsli, Mbark; Proulx, Pierre [Laboratoire de Modelisation de Procedes Chimiques par Ordinateur Oppus, Departement de Genie Chimique, Universite de Sherbrooke (Ciheam) J1K 2R1 (Canada)

    2007-08-21

    In this work, a non-equilibrium mathematical model for an air inductively coupled plasma torch with a supersonic nozzle is developed without making thermal and chemical equilibrium assumptions. Reaction rate equations are written, and two coupled energy equations are used, one for the calculation of the translational-rotational temperature T_hr and one for the calculation of the electro-vibrational temperature T_ev. The viscous dissipation is taken into account in the translational-rotational energy equation. The electro-vibrational energy equation also includes the pressure work of the electrons, the Ohmic heating power and the exchange due to elastic collisions. Higher order approximations of the Chapman-Enskog method are used to obtain better accuracy for the transport properties, taking advantage of the most recent sets of collision integrals available in the literature. The results obtained are compared with those obtained using a chemical equilibrium model and a one-temperature chemical non-equilibrium model. The influence of the power and the chamber pressure on the chemical and thermal non-equilibrium is investigated.

  8. A dissipative model of plasma equilibrium in toroidal systems

    International Nuclear Information System (INIS)

    Wobig, H.

    1985-10-01

    In order to describe a steady-state plasma equilibrium in tokamaks, stellarators or other non-axisymmetric configurations, the model of ideal MHD with isotropic plasma pressure is widely used. The ideal MHD model of a toroidal plasma equilibrium requires the existence of closed magnetic surfaces. Several numerical codes have been developed in the past to solve the three-dimensional equilibrium problem, but so far no existence theorem for a solution has been proved. Another difficulty is the formation of magnetic islands and field line ergodisation, which can only be described in terms of ideal MHD if the plasma pressure is constant in the ergodic region. In order to describe the formation of magnetic islands and ergodisation of surfaces properly, additional dissipative terms have to be incorporated to allow decoupling of the plasma and magnetic field. In a collisional plasma, viscosity and inelastic collisions introduce such dissipative processes. In the model used, a friction term proportional to the plasma velocity v is included. Such a term originates from the charge exchange interaction of the plasma with a neutral background. With these modifications, the equilibrium problem reduces to a set of quasilinear elliptic equations for the pressure, the electric potential and the magnetic field. The paper deals with an existence theorem based on the Fixed-Point method of Schauder. It can be shown that a self-consistent and unique equilibrium exists if the friction term is large and the plasma pressure is sufficiently low. The essential role of the dissipative terms is to remove the singularities of the ideal MHD model on rational magnetic surfaces. The problem has a strong similarity to Benard cell convection, and consequently similar behaviour such as bifurcation and exchange of stability is expected. (orig./GG)

  9. Improving firm performance in out-of-equilibrium, deregulated markets using feedback simulation models

    International Nuclear Information System (INIS)

    Gary, S.; Larsen, E.R.

    2000-01-01

    Deregulation has reshaped the utility sector in many countries around the world. Organisations in these deregulated industries must adopt new policies which guide strategic decisions, in an uncertain and unfamiliar environment, that determine the short- and long-term fate of their companies. Traditional economic equilibrium models do not adequately address the issues facing these organisations in the shift towards deregulated market competition. Equilibrium assumptions break down in the out-of-equilibrium transition to competitive markets, and therefore different underpinning assumptions must be adopted in order to guide management in these periods. Simulation models incorporating information feedback through behavioural policies fill the void left by equilibrium models and support strategic policy analysis in out-of-equilibrium markets. As an example, we present a feedback simulation model developed to examine firm and industry level performance consequences of new generation capacity investment policies in the deregulated UK electricity sector. The model explicitly captures behavioural decision policies of boundedly rational managers and avoids equilibrium assumptions. Such models are essential to help managers evaluate the performance impact of various strategic policies in environments in which disequilibrium behaviour dominates. (Author)

  10. Power law-based local search in spider monkey optimisation for lower order system modelling

    Science.gov (United States)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    Nature-inspired algorithms (NIAs) have proved efficient at solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation for lower order systems that reflects almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.

  11. Soils apart from equilibrium – consequences for soil carbon balance modelling

    Directory of Open Access Journals (Sweden)

    T. Wutzler

    2007-01-01

    Full Text Available Many projections of the soil carbon sink or source are based on kinetically defined carbon pool models. Parameters of these models are often determined in a way that the steady state of the model matches observed carbon stocks. The underlying simplifying assumption is that observed carbon stocks are near equilibrium. This assumption is challenged by observations of very old soils that do still accumulate carbon. In this modelling study we explored the consequences of the case where soils are apart from equilibrium. Calculations of equilibrium states of soils that are currently accumulating small amounts of carbon were performed using the Yasso model. It was found that already very small current accumulation rates cause big changes in theoretical equilibrium stocks, which can virtually approach infinity. We conclude that soils that were disturbed several centuries ago are not in equilibrium but in a transient state because of the slowly ongoing accumulation of the slowest pool. A first consequence is that model calibrations to current carbon stocks that assume an equilibrium state overestimate the decay rate of the slowest pool. A second consequence is that spin-up runs (simulations until equilibrium) overestimate stocks of recently disturbed sites. In order to account for these consequences, we propose a transient correction. This correction prescribes a lower decay rate of the slowest pool and accounts for disturbances in the past by decreasing the spin-up-run predicted stocks to match an independent estimate of current soil carbon stocks. Application of this transient correction at a Central European beech forest site with a typical disturbance history resulted in an additional carbon fixation of 5.7±1.5 tC/ha within 100 years. Carbon storage capacity of disturbed forest soils is potentially much higher than currently assumed. Simulations that do not adequately account for the transient state of soil carbon stocks neglect a considerable
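    The mechanism can be caricatured with a single pool (the Yasso model has several, but the argument is the same): for a carbon input $I$ and decay rate $k$,

$$
\frac{dC}{dt} = I - kC
\quad\Longrightarrow\quad
C_{\mathrm{eq}} = \frac{I}{k}
= C_{\mathrm{obs}} + \frac{1}{k}\left.\frac{dC}{dt}\right|_{\mathrm{obs}}.
$$

    Because the slowest pool has a very small $k$, even a modest present-day accumulation rate implies an equilibrium stock far above the observed stock; conversely, forcing the model's steady state to equal $C_{\mathrm{obs}}$ forces $k$ to be too large, which is the calibration bias described above.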

  12. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Cobbs Gary

    2012-08-01

    Full Text Available Abstract Background Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for the initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the

  13. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    Science.gov (United States)

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for the initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of
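    An illustrative stepwise simulation in the spirit of these models is sketched below. The constants and the competition rule are made up (a crude saturation in place of the authors' analytic equilibrium formulae); it only shows how a per-cycle, equilibrium-informed efficiency produces the familiar exponential-then-plateau curve.

```python
# Per-cycle qPCR sketch: efficiency falls as primers are consumed and target
# re-annealing starts to win the annealing-phase competition (illustrative only).
def simulate_qpcr(target0, primer0=0.5e-6, k_half=1e-8, cycles=40):
    """Return per-cycle target concentrations (mol/L) for one reaction."""
    target, primer = target0, primer0
    history = [target]
    for _ in range(cycles):
        # Fraction of denatured target strands captured by primers during
        # annealing: primer binding competes with target-target re-annealing.
        efficiency = primer / (primer + k_half + target)
        new_strands = efficiency * target
        primer = max(primer - new_strands, 0.0)
        target += new_strands
        history.append(target)
    return history

curve = simulate_qpcr(target0=1e-15)
# Quantification cycle (Cq): first cycle where the target exceeds 1e-9 mol/L.
cq = next(i for i, c in enumerate(curve) if c > 1e-9)
print("Cq ≈", cq)
```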

  14. Numerical equilibrium analysis for structured consumer resource models

    NARCIS (Netherlands)

    de Roos, A.M.; Diekmann, O.; Getto, P.; Kirkilionis, M.A.

    2010-01-01

    In this paper, we present methods for a numerical equilibrium and stability analysis for models of a size structured population competing for an unstructured resource. We concentrate on cases where two model parameters are free, and thus existence boundaries for equilibria and stability boundaries

  16. Spectral non-equilibrium property in homogeneous isotropic turbulence and its implication in subgrid-scale modeling

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Le [Laboratory of Mathematics and Physics, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Zhu, Ying [Laboratory of Mathematics and Physics, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Liu, Yangwei, E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Lu, Lipeng [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China)

    2015-10-09

    The non-equilibrium property in turbulence is a non-negligible problem in large-eddy simulation but has not yet been systematically considered. The generalization from equilibrium turbulence to non-equilibrium turbulence requires a clear recognition of the non-equilibrium property. As a preliminary step of this recognition, the present letter defines a typical non-equilibrium process, that is, the spectral non-equilibrium process, in homogeneous isotropic turbulence. It is then theoretically investigated by employing the skewness of grid-scale velocity gradient, which permits the decomposition of resolved velocity field into an equilibrium one and a time-reversed one. Based on this decomposition, an improved Smagorinsky model is proposed to correct the non-equilibrium behavior of the traditional Smagorinsky model. The present study is expected to shed light on the future studies of more generalized non-equilibrium turbulent flows. - Highlights: • A spectral non-equilibrium process in isotropic turbulence is defined theoretically. • A decomposition method is proposed to divide a non-equilibrium turbulence field. • An improved Smagorinsky model is proposed to correct the non-equilibrium behavior.

  17. Modelling of an homogeneous equilibrium mixture model

    International Nuclear Information System (INIS)

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the 6-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard mixture models with 4 equations (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture Model (HEM) that we present deals with both mixture and relative quantities, allowing, in particular, both a mixture velocity and a relative velocity to be followed. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)
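    For orientation, the relations below give the generic mixture-model notation such a formulation builds on; these are standard drift-flux definitions rather than the authors' exact closure, and the functional form of the drift relation is left unspecified.

```latex
% Standard mixture-model (drift-flux) notation, for gas volume fraction \alpha:
\rho_m = \alpha \rho_g + (1-\alpha)\rho_\ell, \qquad
\rho_m \mathbf{u}_m = \alpha \rho_g \mathbf{u}_g + (1-\alpha)\rho_\ell \mathbf{u}_\ell,
\qquad \mathbf{u}_r = \mathbf{u}_g - \mathbf{u}_\ell .
% In a mixture model with drift, the relative velocity \mathbf{u}_r has no
% conservation law of its own; it is closed algebraically (drift relation),
% schematically \mathbf{u}_r = g(\alpha, \rho_g, \rho_\ell, \text{drag}, \dots).
```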

  18. BGK-type models in strong reaction and kinetic chemical equilibrium regimes

    International Nuclear Information System (INIS)

    Monaco, R; Bianchi, M Pandolfi; Soares, A J

    2005-01-01

    A BGK-type procedure is applied to multi-component gases undergoing chemical reactions of bimolecular type. The relaxation process towards local Maxwellians, depending on mass and numerical densities of each species as well as common velocity and temperature, is investigated in two different cases with respect to chemical regimes. These cases are related to the strong reaction regime characterized by slow reactions, and to the kinetic chemical equilibrium regime where fast reactions take place. The consistency properties of both models are stated in detail. The trend to equilibrium is numerically tested and comparisons for the two regimes are performed within the hydrogen-air and carbon-oxygen reaction mechanism. In the spatial homogeneous case, it is also shown that the thermodynamical equilibrium of the models recovers satisfactorily the asymptotic equilibrium solutions to the reactive Euler equations

  19. Equilibrium and kinetic models for colloid release under transient solution chemistry conditions

    Science.gov (United States)

    We present continuum models to describe colloid release in the subsurface during transient physicochemical conditions. Our modeling approach relates the amount of colloid release to changes in the fraction of the solid surface area that contributes to retention. Equilibrium, kinetic, equilibrium and...

  20. Optimising resolution for a preparative separation of Chinese herbal medicine using a surrogate model sample system.

    Science.gov (United States)

    Ye, Haoyu; Ignatova, Svetlana; Peng, Aihua; Chen, Lijuan; Sutherland, Ian

    2009-06-26

    This paper builds on previous modelling research with short single layer columns to develop rapid methods for optimising high-performance counter-current chromatography at constant stationary phase retention. Benzyl alcohol and p-cresol are used as model compounds to rapidly optimise first flow and then rotational speed operating conditions at a preparative scale with long columns for a given phase system using a Dynamic Extractions Midi-DE centrifuge. The transfer to a high value extract such as the crude ethanol extract of Chinese herbal medicine Millettia pachycarpa Benth. is then demonstrated and validated using the same phase system. The results show that constant stationary phase modelling of flow and speed with long multilayer columns works well as a cheap, quick and effective method of optimising operating conditions for the chosen phase system-hexane-ethyl acetate-methanol-water (1:0.8:1:0.6, v/v). Optimum conditions for resolution were a flow of 20 ml/min and speed of 1200 rpm, but for throughput were 80 ml/min at the same speed. The results show that 80 ml/min gave the best throughputs for tephrosin (518 mg/h), pyranoisoflavone (47.2 mg/h) and dehydrodeguelin (10.4 mg/h), whereas for deguelin (100.5 mg/h), the best flow rate was 40 ml/min.

  1. MODELLING AND OPTIMISATION OF A BIMORPH PIEZOELECTRIC CANTILEVER BEAM IN AN ENERGY HARVESTING APPLICATION

    Directory of Open Access Journals (Sweden)

    CHUNG KET THEIN

    2016-02-01

    Full Text Available Piezoelectric materials are excellent transducers in converting vibrational energy into electrical energy, and vibration-based piezoelectric generators are seen as an enabling technology for wireless sensor networks, especially in self-powered devices. This paper proposes an alternative method for predicting the power output of a bimorph cantilever beam using a finite element method for both static and dynamic frequency analyses. Experiments are performed to validate the model and the simulation results. In addition, a novel approach is presented for optimising the structure of the bimorph cantilever beam, by which the power output is maximised and the structural volume is minimised simultaneously. Finally, the results of the optimised design are presented and compared with other designs.

  2. Advanced CANDU reactors fuel analysis through optimal fuel management at approach to refuelling equilibrium

    International Nuclear Information System (INIS)

    Tingle, C.P.; Bonin, H.W.

    1999-01-01

    The analysis of alternate CANDU fuels along with natural uranium-based fuel was carried out from the viewpoint of optimal in-core fuel management at approach to refuelling equilibrium. The alternate fuels considered in the present work include thorium-containing oxide mixtures (MOX), plutonium-based MOX, and Pressurised Water Reactor (PWR) spent fuel recycled in CANDU reactors (Direct Use of spent PWR fuel in CANDU (DUPIC)); these are compared with the usual natural UO2 fuel. The focus of the study is on the 'Approach to Refuelling Equilibrium' period which immediately follows the initial commissioning of the reactor. The in-core fuel management problem for this period is treated as an optimization problem in which the objective function is the refuelling frequency to be minimized by adjusting the following decision variables: the channel to be refuelled next, the time of the refuelling and the number of fresh fuel bundles to be inserted in the channel. Several constraints are also included in the optimisation problem which is solved using Perturbation Theory. Both the present 37-rod CANDU fuel bundle and the proposed CANFLEX bundle designs are part of this study. The results include the time to reach refuelling equilibrium from initial start-up of the reactor, the average discharge burnup, the average refuelling frequency and the average channel and bundle powers relative to natural UO2. The model was initially tested and the average discharge burnup for natural UO2 came within 2% of the industry-accepted 199 MWh/kgHE. For this type of fuel, the optimization exercise predicted savings of 43 bundles per full power year. In addition to producing average discharge burnups and other parameters for the advanced fuels investigated, the optimisation model also evidenced some problem areas like high power densities for fuels such as the DUPIC. Perturbation Theory has proven itself to be an accurate and valuable optimization tool in predicting the time between

  3. hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models

    Science.gov (United States)

    Zambrano-Bigiarini, M.; Rojas, R.

    2012-04-01

    Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as particle in PSO terminology, adjusts its flying trajectory on the multi-dimensional search-space according to its own experience (best-known personal position) and the one of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped into sub-optimal solutions, suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting into a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customizing PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions and many others. Additionally, hydroPSO implements recent PSO variants such as: Improved Particle Swarm
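    For readers unfamiliar with the technique, the sketch below shows the canonical PSO update rule that packages such as hydroPSO extend with topologies, time-variant coefficients and regrouping; it is written in Python rather than R and is not the package's own implementation. All parameter values are generic defaults.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Canonical particle swarm optimisation (minimisation)."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = lo + (hi - lo) * np.random.rand(n_particles, dim)      # positions
    v = np.zeros_like(x)                                        # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                          # global best

    for _ in range(n_iter):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                              # boundary handling
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# toy usage: calibrate two parameters against a synthetic "model-data misfit"
best, best_f = pso(lambda p: np.sum((p - np.array([2.0, -1.0])) ** 2),
                   bounds=([-5, -5], [5, 5]))
print(best, best_f)
```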

  4. Optimisation of Software-Defined Networks Performance Using a Hybrid Intelligent System

    Directory of Open Access Journals (Sweden)

    Ann Sabih

    2017-06-01

    Full Text Available This paper proposes a novel intelligent technique that has been designed to optimise the performance of Software Defined Networks (SDN). The proposed hybrid intelligent system has employed integration of intelligence-based optimisation approaches with the artificial neural network. These heuristic optimisation methods include Genetic Algorithms (GA) and Particle Swarm Optimisation (PSO). These methods were utilised separately in order to select the best inputs to maximise SDN performance. In order to identify SDN behaviour, the neural network model is trained and applied. The maximal optimisation approach has been identified using an analytical approach that considered SDN performance and the computational time as objective functions. Initially, the general model of the neural network was tested with unseen data before implementing the model using GA and PSO to determine the optimal performance of SDN. The results showed that the SDN, represented by an artificial neural network (ANN) and optimised by PSO, generated a better configuration with regard to computational efficiency and performance index.

  5. Geochemical modelling of groundwater evolution using chemical equilibrium codes

    International Nuclear Information System (INIS)

    Pitkaenen, P.; Pirhonen, V.

    1991-01-01

    Geochemical equilibrium codes are a modern tool for studying the interaction between groundwater and solid phases. The most commonly used programs and their application areas are briefly presented in this article. The main emphasis is on how calculated results are used to evaluate groundwater evolution in a hydrogeological system. At present, geochemical equilibrium modelling also takes kinetic as well as hydrologic constraints along a flow path into consideration

  6. Agent-Based Decision Control—How to Appreciate Multivariate Optimisation in Architecture

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer; Perkov, Thomas Holmer; Kolarik, Jakub

    2015-01-01

    … in the early design stage. The main focus is to demonstrate the optimisation method, which is done in two ways. Firstly, the newly developed agent-based optimisation algorithm named Moth is tested on three different single objective search spaces. Here Moth is compared to two evolutionary algorithms. Secondly, the method is applied to a multivariate optimisation problem. The aim is specifically to demonstrate optimisation for entire building energy consumption, daylight distribution and capital cost. Based on the demonstrations Moth's ability to find local minima is discussed. It is concluded that agent-based optimisation algorithms like Moth open up for new uses of optimisation in the early design stage. With Moth the final outcome is less dependent on pre- and post-processing, and Moth allows user intervention during optimisation. Therefore, agent-based models for optimisation such as Moth can be a powerful…

  7. Modelling non-equilibrium thermodynamic systems from the speed-gradient principle.

    Science.gov (United States)

    Khantuleva, Tatiana A; Shalymov, Dmitry S

    2017-03-06

    The application of the speed-gradient (SG) principle to the non-equilibrium distribution systems far away from thermodynamic equilibrium is investigated. The options for applying the SG principle to describe the non-equilibrium transport processes in real-world environments are discussed. Investigation of a non-equilibrium system's evolution at different scale levels via the SG principle allows for a fresh look at the thermodynamics problems associated with the behaviour of the system entropy. Generalized dynamic equations for finite and infinite number of constraints are proposed. It is shown that the stationary solution to the equations, resulting from the SG principle, entirely coincides with the locally equilibrium distribution function obtained by Zubarev. A new approach to describe time evolution of systems far from equilibrium is proposed based on application of the SG principle at the intermediate scale level of the system's internal structure. The problem of the high-rate shear flow of viscous fluid near the rigid plane plate is discussed. It is shown that the SG principle allows closed mathematical models of non-equilibrium processes to be constructed.This article is part of the themed issue 'Horizons of cybernetical physics'. © 2017 The Author(s).
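    For reference, one common statement of the speed-gradient principle (associated with Fradkov and co-workers) is sketched below; the paper's specific constraints and entropy functionals are not reproduced here.

```latex
% One common statement of the speed-gradient principle (finite form): for a
% controlled system \dot{x} = f(x,u,t) and a goal functional Q(x,t) to be
% decreased along trajectories, compute the rate of change of Q and move the
% control against its gradient with respect to u:
\dot{Q}(x,u,t) = \frac{\partial Q}{\partial t}
  + \left[\nabla_x Q\right]^{\mathsf T} f(x,u,t), \qquad
u = -\Gamma\, \nabla_u \dot{Q}(x,u,t), \quad \Gamma = \Gamma^{\mathsf T} \succ 0 .
% The differential form replaces the last relation by \dot{u} = -\Gamma \nabla_u \dot{Q}.
```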

  8. Termination of Dynamic Contracts in an Equilibrium Labor Market Model

    OpenAIRE

    Wang, Cheng

    2005-01-01

    I construct an equilibrium model of the labor market where workers and firms enter into dynamic contracts that can potentially last forever, but are subject to optimal terminations. Upon a termination, the firm hires a new worker, and the worker who is terminated receives a termination compensation from the firm and is then free to go back to the labor market to seek new employment opportunities and enter into new dynamic contracts. The model permits only two types of equilibrium terminations ...

  9. General Equilibrium Models: Improving the Microeconomics Classroom

    Science.gov (United States)

    Nicholson, Walter; Westhoff, Frank

    2009-01-01

    General equilibrium models now play important roles in many fields of economics including tax policy, environmental regulation, international trade, and economic development. The intermediate microeconomics classroom has not kept pace with these trends, however. Microeconomics textbooks primarily focus on the insights that can be drawn from the…

  10. Comparing two non-equilibrium approaches to modelling of a free-burning arc

    International Nuclear Information System (INIS)

    Baeva, M; Uhrlandt, D; Benilov, M S; Cunha, M D

    2013-01-01

    Two models of high-pressure arc discharges are compared with each other and with experimental data for an atmospheric-pressure free-burning arc in argon for arc currents of 20–200 A. The models account for space-charge effects and thermal and ionization non-equilibrium in somewhat different ways. One model considers space-charge effects, thermal and ionization non-equilibrium in the near-cathode region and thermal non-equilibrium in the bulk plasma. The other model considers thermal and ionization non-equilibrium in the entire arc plasma and space-charge effects in the near-cathode region. Both models are capable of predicting the arc voltage in fair agreement with experimental data. Differences are observed in the arc attachment to the cathode, which do not strongly affect the near-cathode voltage drop and the total arc voltage for arc currents exceeding 75 A. For lower arc currents the difference is significant but the arc column structure is quite similar and the predicted bulk plasma characteristics are relatively close to each other. (paper)

  11. Equilibrium and transient conductivity for gadolinium-doped ceria under large perturbations: II. Modeling

    DEFF Research Database (Denmark)

    Zhu, Huayang; Ricote, Sandrine; Coors, W. Grover

    2014-01-01

    A model-based approach is used to interpret equilibrium and transient conductivity measurements for 10% gadolinium-doped ceria: Ce0.9Gd0.1O1.95 − δ (GDC10). The measurements were carried out by AC impedance spectroscopy on slender extruded GDC10 rods. Although equilibrium conductivity measurements provide sufficient information from which to derive material properties, it is found that uniquely establishing properties is difficult. Augmenting equilibrium measurements with conductivity relaxation significantly improves the evaluation of needed physical properties. This paper develops and applies the computational implementation of a Nernst–Planck–Poisson (NPP) model to represent and interpret conductivity-relaxation measurements. Defect surface chemistry is represented with both equilibrium and finite-rate kinetic models. The experiments and the models are capable of representing relaxations from strongly…
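    As background, a generic form of the Nernst–Planck–Poisson system used for this kind of relaxation modelling is sketched below; the paper's specific defect chemistry, surface kinetics and boundary conditions are not reproduced.

```latex
% Generic Nernst-Planck-Poisson system for mobile defects k with concentration
% c_k, diffusivity D_k and charge number z_k (electrostatic potential \Phi):
J_k = -D_k \left( \nabla c_k + \frac{z_k F}{R T}\, c_k \nabla \Phi \right), \qquad
\frac{\partial c_k}{\partial t} = -\nabla \cdot J_k , \qquad
\nabla \cdot \left( \varepsilon \nabla \Phi \right) = -F \sum_k z_k c_k .
```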

  12. Equilibrium Droplets on Deformable Substrates: Equilibrium Conditions.

    Science.gov (United States)

    Koursari, Nektaria; Ahmed, Gulraiz; Starov, Victor M

    2018-05-15

    Equilibrium conditions of droplets on deformable substrates are investigated, and it is proven using Jacobi's sufficient condition that the obtained solutions really provide equilibrium profiles of both the droplet and the deformed support. At the equilibrium, the excess free energy of the system should have a minimum value, which means that both necessary and sufficient conditions of the minimum should be fulfilled. Only in this case, the obtained profiles provide the minimum of the excess free energy. The necessary condition of the equilibrium means that the first variation of the excess free energy should vanish, and the second variation should be positive. Unfortunately, the mentioned two conditions are not the proof that the obtained profiles correspond to the minimum of the excess free energy and they could not be. It is necessary to check whether the sufficient condition of the equilibrium (Jacobi's condition) is satisfied. To the best of our knowledge Jacobi's condition has never been verified for any already published equilibrium profiles of both the droplet and the deformable substrate. A simple model of the equilibrium droplet on the deformable substrate is considered, and it is shown that the deduced profiles of the equilibrium droplet and deformable substrate satisfy the Jacobi's condition, that is, really provide the minimum to the excess free energy of the system. To simplify calculations, a simplified linear disjoining/conjoining pressure isotherm is adopted for the calculations. It is shown that both necessary and sufficient conditions for equilibrium are satisfied. For the first time, validity of the Jacobi's condition is verified. The latter proves that the developed model really provides (i) the minimum of the excess free energy of the system droplet/deformable substrate and (ii) equilibrium profiles of both the droplet and the deformable substrate.

  13. Fitting Equilibrium Search Models to Labour Market Data

    DEFF Research Database (Denmark)

    Bowlus, Audra J.; Kiefer, Nicholas M.; Neumann, George R.

    1996-01-01

    Specification and estimation of a Burdett-Mortensen type equilibrium search model is considered. The estimation is nonstandard. An estimation strategy asymptotically equivalent to maximum likelihood is proposed and applied. The results indicate that specifications with a small number of productivity types fit the data well compared to the homogeneous model.

  14. Phenomenological model for non-equilibrium deuteron emission in nucleon induced reactions

    International Nuclear Information System (INIS)

    Broeders, C.H.M.; Konobeyev, A.Yu.

    2005-01-01

    A new approach is proposed for the calculation of non-equilibrium deuteron energy distributions in nuclear reactions induced by nucleons of intermediate energies. It combines models of nucleon pick-up, coalescence and deuteron knock-out. Emission and absorption rates for excited particles are described by the pre-equilibrium hybrid model. The model of Sato, Iwamoto and Harada is used to describe the nucleon pick-up and the coalescence of nucleons from the exciton configurations starting from (2p, 1h). The model of deuteron knock-out is formulated taking into account the Pauli principle for the nucleon-deuteron interaction inside a nucleus. The contribution of the direct nucleon pick-up is described phenomenologically. The multiple pre-equilibrium emission of particles is taken into account. The calculated deuteron energy distributions are compared with experimental data from 12C to 209Bi. (orig.)

  15. Chemical equilibrium relations used in the fireball model of relativistic heavy ion reactions

    International Nuclear Information System (INIS)

    Gupta, S.D.

    1978-01-01

    The fireball model of relativistic heavy-ion collision uses chemical equilibrium relations to predict cross sections for particle and composite productions. These relations are examined in a canonical ensemble model where chemical equilibrium is not explicitly invoked

  16. Insights: Simple Models for Teaching Equilibrium and Le Chatelier's Principle.

    Science.gov (United States)

    Russell, Joan M.

    1988-01-01

    Presents three models that have been effective for teaching chemical equilibrium and Le Chatelier's principle: (1) the liquid transfer model, (2) the fish model, and (3) the teeter-totter model. Explains each model and its relation to Le Chatelier's principle. (MVL)

  17. Research on spot power market equilibrium model considering the electric power network characteristics

    International Nuclear Information System (INIS)

    Wang, Chengmin; Jiang, Chuanwen; Chen, Qiming

    2007-01-01

    According to economic theory, equilibrium is the optimal operating condition for a power market. A realistic spot power market cannot achieve the equilibrium condition due to network losses and congestion. The impact of network losses and congestion on the spot power market is analyzed in this paper in order to establish a new equilibrium model considering network losses and transmission constraints. The OPF problem formulated according to the new equilibrium model is solved by means of the equal price principle. A case study on the IEEE-30-bus system is provided in order to prove the effectiveness of the proposed approach. (author)

  18. Non-equilibrium mass transfer absorption model for the design of boron isotopes chemical exchange column

    International Nuclear Information System (INIS)

    Bai, Peng; Fan, Kaigong; Guo, Xianghai; Zhang, Haocui

    2016-01-01

    Highlights: • We propose a non-equilibrium mass transfer absorption model instead of a distillation equilibrium model to calculate boron isotope separation. • We apply the model to calculate the column height needed to meet prescribed separation requirements. - Abstract: To interpret the phenomenon of chemical exchange in boron isotope separation accurately, the process is described as an absorption–reaction–desorption hybrid process instead of a distillation equilibrium model. The non-equilibrium mass transfer absorption model is put forward and a mass transfer enhancement factor E is introduced; the packing height needed to meet the specified separation requirements is then found with MATLAB.

  19. Normal tissue dose-effect models in biological dose optimisation

    International Nuclear Information System (INIS)

    Alber, M.

    2008-01-01

    Sophisticated radiotherapy techniques like intensity modulated radiotherapy with photons and protons rely on numerical dose optimisation. The evaluation of normal tissue dose distributions that deviate significantly from the common clinical routine and also the mathematical expression of desirable properties of a dose distribution is difficult. In essence, a dose evaluation model for normal tissues has to express the tissue specific volume effect. A formalism of local dose effect measures is presented, which can be applied to serial and parallel responding tissues as well as target volumes and physical dose penalties. These models allow a transparent description of the volume effect and an efficient control over the optimum dose distribution. They can be linked to normal tissue complication probability models and the equivalent uniform dose concept. In clinical applications, they provide a means to standardize normal tissue doses in the face of inevitable anatomical differences between patients and a vastly increased freedom to shape the dose, without being overly limiting like sets of dose-volume constraints. (orig.)
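    The equivalent uniform dose concept mentioned above is commonly written in the generalised (gEUD) form sketched below; the exponent is the tissue-specific parameter that encodes the volume effect discussed in this record.

```latex
% Generalised equivalent uniform dose (gEUD), for fractional volumes v_i
% receiving dose d_i (\sum_i v_i = 1):
\mathrm{EUD} = \Big( \sum_i v_i \, d_i^{\,a} \Big)^{1/a} .
% The exponent a encodes the volume effect: a \gg 1 approaches the maximum dose
% (serially responding organs), a = 1 gives the mean dose (parallel organs).
```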

  20. A two-temperature chemical non-equilibrium modeling of DC arc plasma

    International Nuclear Information System (INIS)

    Qian Haiyang; Wu Bin

    2011-01-01

    For a better understanding of the non-equilibrium characteristics of DC arc plasma, a two-dimensional axisymmetric two-temperature chemical non-equilibrium (2T-NCE) model is applied to a direct-current arc argon plasma generator with a water-cooled constrictor at atmospheric pressure. The results show that the electron temperature and the heavy-particle temperature differ under different working parameters, indicating that the DC arc plasma has a strong non-equilibrium character, and the variation is evident. (authors)

  1. Distributed optimisation problem with communication delay and external disturbance

    Science.gov (United States)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for the multi-agent systems (MASs) with the simultaneous presence of external disturbance and the communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, the internal model term is constructed to compensate the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the distributed optimisation problem based on the MASs with the simultaneous presence of disturbance and communication delay. Moreover, in the proposed algorithm, each agent interacts with its neighbours through the connected topology and the delay occurs during the information exchange. By utilising Lyapunov-Krasovskii functional, the delay-dependent conditions are derived for both slowly and fast time-varying delay, respectively, to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.

  2. Optimisation models for decision support in the development of biomass-based industrial district-heating networks in Italy

    International Nuclear Information System (INIS)

    Chinese, Damiana; Meneghetti, Antonella

    2005-01-01

    A system optimisation approach is proposed to design biomass-based district-heating networks in the context of industrial districts, which are one of the main successful productive aspects of Italian industry. Two different perspectives are taken into account, that of utilities and that of policy makers, leading to two optimisation models to be further integrated. A mixed integer linear-programming model is developed for a utility company's profit maximisation, while a linear-programming model aims at minimising the balance of greenhouse-gas emissions related to the proposed energy system and the avoided emissions due to the substitution of current fossil-fuel boilers with district-heating connections. To systematically compare their results, a sensitivity analysis is performed with respect to network size in order to identify how the optimal system configuration, in terms of selected boilers to be connected to a multiple energy-source network, may vary in the two cases and to detect possible optimal sizes. Then a factorial analysis is adopted to rank desirable client types under the two perspectives and identify proper marketing strategies. The proposed optimisation approach was applied to the design of a new district-heating network in the chair-manufacturing district of North-Eastern Italy. (Author)
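    A minimal sketch of the utility-perspective model is given below: binary decisions on which candidate buildings to connect, with profit maximised subject to the plant's heat capacity. It uses the PuLP interface to the CBC solver; all data values and names are hypothetical, and the authors' actual mixed integer linear program is considerably richer (network layout, multiple energy sources, tariffs).

```python
import pulp

# Hypothetical data: candidate buildings with annual heat demand [MWh/yr],
# annualised connection cost [EUR/yr], heat sales revenue and fuel/O&M cost.
demand  = {"b1": 800, "b2": 1200, "b3": 500, "b4": 2000}
conn    = {"b1": 9000, "b2": 12000, "b3": 7000, "b4": 20000}
revenue = 55.0            # EUR per MWh sold as district heat
fuel    = 25.0            # biomass fuel + O&M cost per MWh produced
capacity = 3000           # MWh/yr the plant can deliver

m = pulp.LpProblem("dh_profit", pulp.LpMaximize)
x = {b: pulp.LpVariable(f"connect_{b}", cat="Binary") for b in demand}

# objective: margin on heat sold minus annualised connection costs
m += pulp.lpSum((revenue - fuel) * demand[b] * x[b] - conn[b] * x[b] for b in demand)
# constraint: connected demand must fit within plant capacity
m += pulp.lpSum(demand[b] * x[b] for b in demand) <= capacity

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({b: int(x[b].value()) for b in demand}, "profit:", pulp.value(m.objective))
```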

  3. Profile control studies for JET optimised shear regime

    Energy Technology Data Exchange (ETDEWEB)

    Litaudon, X.; Becoulet, A.; Eriksson, L.G.; Fuchs, V.; Huysmans, G.; How, J.; Moreau, D.; Rochard, F.; Tresset, G.; Zwingmann, W. [Association Euratom-CEA, CEA/Cadarache, Dept. de Recherches sur la Fusion Controlee, DRFC, 13 - Saint-Paul-lez-Durance (France); Bayetti, P.; Joffrin, E.; Maget, P.; Mayorat, M.L.; Mazon, D.; Sarazin, Y. [JET Abingdon, Oxfordshire (United Kingdom); Voitsekhovitch, I. [Universite de Provence, LPIIM, Aix-Marseille 1, 13 (France)

    2000-03-01

    This report summarises the profile control studies, i.e. preparation and analysis of JET Optimised Shear plasmas, carried out during the year 1999 within the framework of the Task-Agreement (RF/CEA/02) between JET and the Association Euratom-CEA/Cadarache. We report on our participation in the preparation of the JET Optimised Shear experiments together with their comprehensive analyses and the modelling. Emphasis is put on the various aspects of pressure profile control (core and edge pressure) together with detailed studies of current profile control by non-inductive means, in the prospects of achieving steady, high performance, Optimised Shear plasmas. (authors)

  4. Equilibrium configuration for a high current pumped divertor

    International Nuclear Information System (INIS)

    Lazzaro, E.; Keegan, B.

    1989-01-01

    A realistic design of a pumped divertor plasma configuration to be fitted to the JET vessel can be obtained as a compromise among various geometrical, physical and technical constraints. The possibility of reaching a satisfactory solution has been analysed for plasmas up to 6 MA. Optimisation of the plasma coupling to the RF antennae requires a largely asymmetric distribution of ampere turns in the PF coils and some mechanical flexibility. The calculations presented were carried out using the specially developed JET equilibrium and configuration analysis codes. (U.K.)

  5. Pre-equilibrium assumptions and statistical model parameters effects on reaction cross-section calculations

    International Nuclear Information System (INIS)

    Avrigeanu, M.; Avrigeanu, V.

    1992-02-01

    A systematic study on effects of statistical model parameters and semi-classical pre-equilibrium emission models has been carried out for the (n,p) reactions on the 56Fe and 60Co target nuclei. The results obtained by using various assumptions within a given pre-equilibrium emission model differ among themselves more than the results of different models used under similar conditions. The necessity of using realistic level density formulas is emphasized especially in connection with pre-equilibrium emission models (i.e. with the exciton state density expression), while basic support could be found only by replacing the Williams exciton state density formula with a realistic one. (author). 46 refs, 12 figs, 3 tabs

  6. Multi-objective evolutionary optimisation for product design and manufacturing

    CERN Document Server

    2011-01-01

    Presents state-of-the-art research in the area of multi-objective evolutionary optimisation for integrated product design and manufacturing Provides a comprehensive review of the literature Gives in-depth descriptions of recently developed innovative and novel methodologies, algorithms and systems in the area of modelling, simulation and optimisation

  7. Absence of local thermal equilibrium in two models of heat conduction

    OpenAIRE

    Dhar, Abhishek; Dhar, Deepak

    1998-01-01

    A crucial assumption in the conventional description of thermal conduction is the existence of local thermal equilibrium. We test this assumption in two simple models of heat conduction. Our first model is a linear chain of planar spins with nearest neighbour couplings, and the second model is that of a Lorentz gas. We look at the steady state of the system when the two ends are connected to heat baths at temperatures T1 and T2. If T1=T2, the system reaches thermal equilibrium. If T1 is not e...

  8. DACIA LOGAN LIVE AXLE OPTIMISATION USING COMPUTER GRAPHICS

    Directory of Open Access Journals (Sweden)

    KIRALY Andrei

    2017-05-01

    Full Text Available The paper presents some contributions to the design calculation and optimisation of a live axle used on the Dacia Logan, using computer graphics software to create the model and FEA to evaluate the effectiveness of the optimisation. Using this specialised software, a simulation is made and the results are compared to measurements on a real prototype.

  9. OPTIMISATION OF COMPRESSIVE STRENGTH OF PERIWINKLE ...

    African Journals Online (AJOL)

    In this paper, a regression model is developed to predict and optimise the compressive strength of periwinkle shell aggregate concrete using Scheffe's regression theory. The results obtained from the derived regression model agreed favourably with the experimental data. The model was tested for adequacy using a student ...
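    For context, Scheffe's canonical second-degree polynomial for a q-component mixture, the usual basis of such regression models, is sketched below; the degree actually used in the paper is not stated in this record.

```latex
% Scheffe's canonical second-degree polynomial for a q-component mixture
% (component proportions x_i sum to one):
\hat{y} = \sum_{i=1}^{q} \beta_i x_i \;+\; \sum_{i<j} \beta_{ij}\, x_i x_j ,
\qquad \sum_{i=1}^{q} x_i = 1 .
```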

  10. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    Science.gov (United States)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods assessed were the evolutionary algorithm: the genetic algorithm (GA), and the deterministic algorithm: the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine in comparison to the more computationally demanding GA routine to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the lowest specified GA algorithm. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest
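    The incremental (greedy) routine compared in this record can be sketched as below: stations are added one at a time, each time choosing the candidate whose addition gives the largest drop in posterior flux uncertainty under a linear-Gaussian Bayesian update. The sensitivity matrix, covariances and station count here are random stand-ins, not the paper's transport-model quantities.

```python
import numpy as np

def posterior_uncertainty(H, prior_cov, obs_err_var=1.0):
    """Total posterior flux uncertainty for observations with sensitivity rows H
    (standard Bayesian linear-Gaussian update)."""
    if H.shape[0] == 0:
        return np.sum(np.diag(prior_cov))
    R = obs_err_var * np.eye(H.shape[0])
    K = prior_cov @ H.T @ np.linalg.inv(H @ prior_cov @ H.T + R)
    post = (np.eye(prior_cov.shape[0]) - K @ H) @ prior_cov
    return np.sum(np.diag(post))

def incremental_design(H_candidates, prior_cov, n_stations):
    """Greedy (incremental) network design: at every step add the candidate
    station whose inclusion gives the lowest remaining posterior uncertainty."""
    chosen = []
    for _ in range(n_stations):
        remaining = [i for i in range(H_candidates.shape[0]) if i not in chosen]
        scores = [posterior_uncertainty(H_candidates[chosen + [i]], prior_cov)
                  for i in remaining]
        chosen.append(remaining[int(np.argmin(scores))])
    return chosen

# toy example: 12 candidate sites observing 6 flux regions (hypothetical numbers)
rng = np.random.default_rng(0)
H_cand = rng.random((12, 6))
print(incremental_design(H_cand, prior_cov=np.eye(6), n_stations=5))
```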

  11. Continuum model of non-equilibrium solvation and solvent effect on ultra-fast processes

    International Nuclear Information System (INIS)

    Li Xiangyuan; Fu Kexiang; Zhu Quan

    2006-01-01

    In the past 50 years, non-equilibrium solvation theory for ultra-fast processes such as electron transfer and light absorption/emission has attracted particular interest. A great deal of research effort was made in this area, and various models, which give reasonable qualitative descriptions of quantities such as the solvent reorganization energy in electron transfer and the spectral shift in solution, were developed within the framework of continuous medium theory. In a series of publications by the authors, we clarified that the expression for the non-equilibrium electrostatic free energy, which is central to non-equilibrium solvation and serves as the basis of various models, was incorrectly formulated. In this work, the authors argue that reversible charging work integration was inappropriately applied in the past to an irreversible path linking the equilibrium and the non-equilibrium states. Because the step from the equilibrium state to the non-equilibrium state is in fact thermodynamically irreversible, the conventional expression for non-equilibrium free energy that was deduced in different ways is unreasonable. Here the authors derive the non-equilibrium free energy in a quite different form according to the Jackson integral formula. Such a difference casts doubt on models including the famous Marcus two-sphere model for the solvent reorganization energy of electron transfer and the Lippert-Mataga equation for spectral shift. By introducing the concept of 'spring energy' arising from medium polarizations, the energy constitution of the non-equilibrium state is highlighted. For a solute-solvent system, the authors separate the total electrostatic energy into different components: the self-energies of solute charge and polarized charge, the interaction energy between them and the 'spring energy' of the solvent polarization. With detailed reasoning and derivation, our formula for non-equilibrium free energy can be reached in different ways. Based on the

  12. Mutual information-based LPI optimisation for radar network

    Science.gov (United States)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    Radar network can offer significant performance improvement for target detection and information extraction employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve LPI performance for radar network. Based on radar network system model, we first provide Schleher intercept factor for radar network as an optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented, where for a predefined MI threshold, Schleher intercept factor for radar network is minimised by optimising the transmission power allocation among radars in the network such that the enhanced LPI performance for radar network can be achieved. The genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Some simulations demonstrate that the proposed algorithm is valuable and effective to improve the LPI performance for radar network.
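    A much-simplified stand-in for the power-allocation step is sketched below: total transmit power (a rough proxy for the Schleher intercept factor) is minimised subject to a minimum total mutual information, using a gradient-based solver rather than the paper's GA-NP; the channel-gain model, thresholds and all numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-radar gains; MI modelled per radar as log2(1 + g_i * P_i).
g = np.array([0.8, 1.2, 0.5, 1.0])
mi_threshold = 6.0          # required total mutual information [bits]
p_max = 10.0                # per-radar power limit

def total_power(p):          # proxy for the intercept-factor objective
    return np.sum(p)

def mi(p):                   # total mutual information of the network
    return np.sum(np.log2(1.0 + g * p))

res = minimize(total_power, x0=np.full(4, 5.0), method="SLSQP",
               bounds=[(0.0, p_max)] * 4,
               constraints=[{"type": "ineq", "fun": lambda p: mi(p) - mi_threshold}])
print("power allocation:", res.x, "achieved MI:", mi(res.x))
```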

  13. Modelling Thomson scattering for systems with non-equilibrium electron distributions

    Directory of Open Access Journals (Sweden)

    Chapman D.A.

    2013-11-01

    Full Text Available We investigate the effect of non-equilibrium electron distributions in the analysis of Thomson scattering for a range of conditions of interest to inertial confinement fusion experiments. Firstly, a generalised one-component model based on quantum statistical theory is given in the random phase approximation (RPA. The Chihara expression for electron-ion plasmas is then adapted to include the new non-equilibrium electron physics. The theoretical scattering spectra for both diffuse and dense plasmas in which non-equilibrium electron distributions are expected to arise are considered. We find that such distributions strongly influence the spectra and are hence an important consideration for accurately determining the plasma conditions.

  14. Development of the hard and soft constraints based optimisation model for unit sizing of the hybrid renewable energy system designed for microgrid applications

    Science.gov (United States)

    Sundaramoorthy, Kumaravel

    2017-02-01

    Hybrid energy system (HES) based electricity generation has become an attractive solution for rural electrification nowadays. Economically feasible and technically reliable HESs are solidly based on an optimisation stage. This article discusses an optimal unit sizing model with the objective function of minimising the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis on the optimal HES are discussed elaborately in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewables (HOMER) for the three sites. The optimal HES is found to have a lower total net present rate and rate of energy compared with the existing method.

  15. Post-CHF heat transfer: a non-equilibrium, relaxation model

    International Nuclear Information System (INIS)

    Jones, O.C. Jr.; Zuber, N.

    1977-01-01

    Existing phenomenological models of heat transfer in the non-equilibrium, liquid-deficient, dispersed flow regime can sometimes predict the thermal behavior fairly well but are quite complex, requiring coupled simultaneous differential equations to describe the axial gradients of mass and energy along with those of droplet acceleration and size. In addition, empirical relations are required to express the droplet breakup and increased effective heat transfer due to holdup. This report describes the development of a different approach to the problem. It is shown that the non-equilibrium component of the total energy can be expressed as a first order, inhomogeneous relaxation equation in terms of one variable coefficient termed the Superheat Relaxation number. A demonstration is provided to show that this relaxation number can be correlated using local variables in such a manner to allow the single non-equilibrium equation to accurately calculate the effects of mass velocity and heat flux along with tube length, diameter, and critical quality for equilibrium qualities from 0.13 to over 3.0

  16. Multiobjective optimisation of bogie suspension to boost speed on curves

    Science.gov (United States)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and maximum admissible speed on different operational scenarios, multiobjective optimisation of bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on the track plane accelerations up to 1.5 m/s2. To attenuate the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. In the last step semi-active suspension is in focus. The input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and the respective effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of design parameters give the possibility to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  17. Knowledge Management through the Equilibrium Pattern Model for Learning

    Science.gov (United States)

    Sarirete, Akila; Noble, Elizabeth; Chikh, Azeddine

    Contemporary students are characterized by having very applied learning styles and methods of acquiring knowledge. This behavior is consistent with the constructivist models where students are co-partners in the learning process. In the present work the authors developed a new model of learning based on constructivist theory coupled with the cognitive development theory of Piaget. The model considers learning as progressing through several stages, and the move from one stage to another requires challenging the learner. Each time a new concept is introduced, it creates a disequilibrium that needs to be worked out to return to the equilibrium stage. This process of "disequilibrium/equilibrium" has been analyzed and validated using a course in computer networking as part of the Cisco Networking Academy Program at Effat College, a women's college in Saudi Arabia. The model provides a theoretical foundation for teaching, especially in a complex knowledge domain such as engineering, and can be used in a knowledge economy.

  18. Beam position optimisation for IMRT

    International Nuclear Information System (INIS)

    Holloway, L.; Hoban, P.

    2001-01-01

    Full text: The introduction of IMRT has not generally resulted in the use of optimised beam positions because, to find the global solution of the problem, a time-consuming stochastic optimisation method must be used. Although a deterministic method may not achieve the global minimum it should achieve a superior dose distribution compared to no optimisation. This study aimed to develop and test such a method. The beam optimisation method developed relies on an iterative process to achieve the desired number of beams from a large initial number of beams. The number of beams is reduced in a 'weeding-out' process based on the total fluence which each beam delivers. The process is gradual, with only three beams removed each time (following a small number of iterations), ensuring that the reduction in beams does not dramatically affect the fluence maps of those remaining. A comparison was made between the dose distributions achieved when the beam positions were optimised in this fashion and when the beam positions were evenly distributed. The method has been shown to work quite effectively and efficiently. The Figure shows a comparison in dose distribution with optimised and non-optimised beam positions for 5 beams. It can be clearly seen that there is an improvement in the dose distribution delivered to the tumour and a reduction in the dose to the critical structure with beam position optimisation. A method for beam position optimisation for use in IMRT optimisations has been developed. This method, although not necessarily achieving the global minimum in beam position, still achieves quite a dramatic improvement compared with no beam position optimisation, and does so very efficiently. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
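    The 'weeding-out' loop described above can be sketched as follows, with the per-beam fluence optimisation replaced by a nonnegative least-squares stand-in (a single weight per beam rather than a full fluence map); the beam and dose-point data are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def optimise_fluence(D, d_presc):
    """Stand-in for the fluence optimisation: D[:, j] is the dose per unit
    fluence of beam j, d_presc the prescribed dose at each dose point."""
    w, _ = nnls(D, d_presc)
    return w

def weed_out_beams(D, d_presc, n_final, drop_per_round=3):
    """Iteratively remove the beams delivering the least total fluence,
    re-optimising after every removal round, until n_final beams remain."""
    active = list(range(D.shape[1]))
    while len(active) > n_final:
        w = optimise_fluence(D[:, active], d_presc)
        order = np.argsort(w)                      # least-contributing first
        n_drop = min(drop_per_round, len(active) - n_final)
        dropped = set(int(k) for k in order[:n_drop])
        active = [b for k, b in enumerate(active) if k not in dropped]
    return active, optimise_fluence(D[:, active], d_presc)

# toy example: 30 candidate beam directions, 50 dose points (hypothetical numbers)
rng = np.random.default_rng(1)
D = rng.random((50, 30))
beams, weights = weed_out_beams(D, d_presc=np.ones(50), n_final=5)
print("selected beams:", beams)
```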

  19. Dynamic Processes of Conceptual Change: Analysis of Constructing Mental Models of Chemical Equilibrium.

    Science.gov (United States)

    Chiu, Mei-Hung; Chou, Chin-Cheng; Liu, Chia-Ju

    2002-01-01

    Investigates students' mental models of chemical equilibrium using dynamic science assessments. Reports that students at various levels have misconceptions about chemical equilibrium. Involves 10th grade students (n=30) in the study doing a series of hands-on chemical experiments. Focuses on the process of constructing mental models, dynamic…

  20. Development of a model for optimisation of a power plant mix by means of evolution strategy; Modellentwicklung zur Kraftwerksparkoptimierung mit Hilfe von Evolutionsstrategien

    Energy Technology Data Exchange (ETDEWEB)

    Roth, Hans

    2008-09-17

    Within the scope of this thesis a model based on evolution strategy is presented, which optimises the upgrade of an existing power plant mix. The optimisation problem is divided into two parts, covering the building of new power plants as well as their ideal usage within the existing power plant mix. The building of new power plants is optimised by means of mutations, while their ideal usage is specified by a heuristic classification according to the merit order of the power plant mix. By applying a residual yearly load curve, the consumer load can be modelled, incorporating the impact of fluctuating power generation and its probability of occurrence. Power plant failures and the duration of revisions are adequately considered by means of a power reduction factor. The optimisation furthermore accommodates a limiting threshold for yearly carbon dioxide emissions as well as premature decommissioning of power plants. (orig.)
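    A minimal sketch of the approach is given below, assuming a plain (mu+lambda) evolution strategy over capacity additions and a merit-order dispatch against a residual load duration curve; the technology data, the unserved-energy penalty and all other numbers are hypothetical and far simpler than the thesis model.

```python
import numpy as np

# Hypothetical technology data: (annualised capital cost EUR/MW/yr, variable cost EUR/MWh)
techs = {"baseload": (150e3, 25.0), "midmerit": (70e3, 55.0), "peaker": (40e3, 90.0)}
rng_load = np.random.default_rng(2)
residual_load = np.sort(np.maximum(rng_load.normal(40e3, 12e3, 8760), 0))[::-1]  # MW

def system_cost(cap):
    """Annual cost of a capacity-addition vector, dispatched in merit order
    (cheapest variable cost first) against the residual load duration curve."""
    cap = np.maximum(cap, 0.0)
    names = list(techs)
    order = np.argsort([techs[n][1] for n in names])
    cost = sum(techs[names[i]][0] * cap[i] for i in range(len(cap)))   # capital
    remaining = residual_load.copy()
    for i in order:
        gen = np.minimum(remaining, cap[i])
        cost += techs[names[i]][1] * gen.sum()                         # fuel/O&M
        remaining -= gen
    return cost + 3000.0 * remaining.sum()                             # unserved energy

def evolution_strategy(n_gen=200, mu=5, lam=20, sigma=2e3):
    """Plain (mu+lambda) evolution strategy: mutate capacity additions, keep the best."""
    rng = np.random.default_rng(3)
    pop = rng.uniform(0, 60e3, size=(mu, len(techs)))
    for _ in range(n_gen):
        children = pop[rng.integers(0, mu, lam)] + rng.normal(0, sigma, (lam, len(techs)))
        both = np.vstack([pop, children])
        pop = both[np.argsort([system_cost(c) for c in both])[:mu]]
    return pop[0]

print("optimised capacity additions [MW]:", np.round(evolution_strategy()))
```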

  1. Ants Colony Optimisation of a Measuring Path of Prismatic Parts on a CMM

    Directory of Open Access Journals (Sweden)

    Stojadinovic Slavenko M.

    2016-03-01

    Full Text Available This paper presents optimisation of a measuring probe path in inspecting prismatic parts on a CMM. The optimisation model is based on: (i) the mathematical model that establishes an initial collision-free path presented by a set of points, and (ii) the solution of the Travelling Salesman Problem (TSP) obtained with Ant Colony Optimisation (ACO). In order to solve the TSP, an ACO algorithm that aims to find the shortest path of ant colony movement (i.e. the optimised path) is applied. Then, the optimised path is compared with the measuring path obtained with on-line programming on the CMM ZEISS UMM500 and with the measuring path obtained in the CMM inspection module of Pro/ENGINEER® software. The results of comparing the optimised path with the other two generated paths show that the optimised path is at least 20% shorter than the path obtained by on-line programming on the CMM ZEISS UMM500, and at least 10% shorter than the path obtained by using the CMM module in Pro/ENGINEER®.
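    The TSP step can be illustrated with a plain Ant System sketch over a distance matrix of measuring points; the pheromone and visibility parameters below are generic defaults, not the values used in the paper, and the point coordinates are hypothetical.

```python
import numpy as np

def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, q=1.0):
    """Plain Ant System for the travelling salesman problem over a distance
    matrix `dist` (a stand-in for the collision-free measuring-point path)."""
    n = dist.shape[0]
    tau = np.ones((n, n))                          # pheromone
    eta = 1.0 / (dist + np.eye(n))                 # visibility (avoid /0 on diagonal)
    rng = np.random.default_rng(4)
    best_tour, best_len = None, np.inf

    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False                 # exclude already-visited points
                w = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(rng.choice(n, p=w / w.sum()))
            tours.append(tour)
        tau *= (1.0 - rho)                         # pheromone evaporation
        for tour in tours:
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):                     # pheromone deposit
                tau[tour[k], tour[(k + 1) % n]] += q / length
    return best_tour, best_len

# toy example: random measuring points in a plane (hypothetical coordinates)
pts = np.random.default_rng(5).random((12, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(aco_tsp(dist))
```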

  2. Quantum Cournot equilibrium for the Hotelling–Smithies model of product choice

    International Nuclear Information System (INIS)

    Rahaman, Ramij; Majumdar, Priyadarshi; Basu, B

    2012-01-01

    This paper demonstrates the quantization of a spatial Cournot duopoly model with product choice, a two stage game focusing on non-cooperation in locations and quantities. With quantization, the players can access a continuous set of strategies, using a continuous variable quantum mechanical approach. The presence of quantum entanglement in the initial state identifies a quantity equilibrium for each location pair choice with any transport cost. Also higher profit is obtained by the firms at Nash equilibrium. Adoption of quantum strategies rewards us by the existence of a larger quantum strategic space at equilibrium. (paper)

  3. Advanced optimisation - coal fired power plant operations

    Energy Technology Data Exchange (ETDEWEB)

    Turney, D.M.; Mayes, I. [E.ON UK, Nottingham (United Kingdom)

    2005-03-01

    The purpose of this unit optimization project is to develop an integrated approach to unit optimisation and an overall optimiser that is able to resolve any conflicts between the individual optimisers. The individual optimisers considered during this project are: an on-line thermal efficiency package, the GNOCIS boiler optimiser, the GNOCIS steam-side optimiser, ESP optimisation, and an intelligent sootblowing system. 6 refs., 7 figs., 3 tabs.

  4. A model for non-equilibrium, non-homogeneous two-phase critical flow

    International Nuclear Information System (INIS)

    Bassel, Wageeh Sidrak; Ting, Daniel Kao Sun

    1999-01-01

    Critical two-phase flow is a very important phenomenon in nuclear reactor technology for the analysis of loss-of-coolant accidents. Several recent papers, Lee and Shrock (1990), Dagan (1993) and Downar (1996), among others, treat the phenomenon using complex models which require heuristic parameters such as relaxation constants or interfacial transfer models. In this paper a mathematical model for one-dimensional non-equilibrium and non-homogeneous two-phase flow in a constant-area duct is developed. The model consists of three conservation equations: mass, momentum and energy. Two important variables are defined in the model: the equilibrium constant in the energy equation and the impulse function in the momentum equation. In the energy equation, the enthalpy of the liquid phase is determined by a linear interpolation function between the liquid phase enthalpy at the inlet condition and the saturated liquid enthalpy at local pressure. The interpolation coefficient is the equilibrium constant. The momentum equation is expressed in terms of the impulse function. It is considered that there is slip between the liquid and vapor phases, the liquid phase is in a metastable state and the vapor phase is in a saturated stable state. The model is not heuristic in nature and does not require complex interface transfer models. It is proved numerically that for the critical condition the partial derivative of the two-phase pressure drop with respect to the local pressure or to the phase velocity must be zero. This criterion is demonstrated by numerical examples. The experimental work of Fauske (1962) and Jeandey (1982) was analyzed, resulting in estimated numerical values for important parameters like the slip ratio, the equilibrium constant and the two-phase frictional drop. (author)

  5. Iterative optimisation of Monte Carlo detector models using measurements and simulations

    Energy Technology Data Exchange (ETDEWEB)

    Marzocchi, O., E-mail: olaf@marzocchi.net [European Patent Office, Rijswijk (Netherlands); Leone, D., E-mail: debora.leone@kit.edu [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology, Karlsruhe (Germany)

    2015-04-11

    This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering significantly lower user effort and therefore improved work efficiency compared with prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages. The four steps are: acquisition in the laboratory of measurement data to be used as a reference; modification of a previously available detector model; simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; and solution of the system of equations followed by an update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the “try and fail” approach typical of the prior techniques.
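
    A minimal sketch of steps three and four (obtaining the coefficients of a linear system from a tentative simulation and solving it to update the model) might look as follows. All numbers, parameter names and the choice of a least-squares solver are illustrative assumptions, not details taken from the paper.

    ```python
    import numpy as np

    # Hypothetical example: measured peak efficiencies at a few gamma energies
    # (the laboratory "reference"), and the values predicted by a tentative
    # Monte Carlo model of the detector.
    measured = np.array([0.082, 0.054, 0.031, 0.019])
    simulated = np.array([0.090, 0.060, 0.036, 0.023])

    # Sensitivity matrix A[i, j]: change of efficiency i per unit change of
    # adjustable parameter j (e.g. dead-layer thickness, crystal length),
    # estimated from a handful of perturbed MC runs.
    A = np.array([[-0.010, -0.004],
                  [-0.008, -0.003],
                  [-0.006, -0.002],
                  [-0.004, -0.001]])

    # Solve the (overdetermined) linear system A @ dx ≈ measured - simulated
    # in the least-squares sense and update the parameter vector.
    dx, *_ = np.linalg.lstsq(A, measured - simulated, rcond=None)
    params = np.array([0.70, 60.0])            # current model parameter values
    params_updated = params + dx
    print("parameter update:", dx, "->", params_updated)
    # The iterative part of the method: re-simulate with params_updated,
    # rebuild A and repeat until the residual is acceptably small.
    ```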

  6. Non-equilibrium Economics

    Directory of Open Access Journals (Sweden)

    Katalin Martinás

    2007-02-01

    Full Text Available A microeconomic, agent-based framework for dynamic economics is formulated in a materialist approach. An axiomatic foundation of a non-equilibrium microeconomics is outlined. Economic activity is modelled as transformation and transport of commodities (materials) owned by the agents. The rate of transformation (production intensity) and the rate of transport (trade) are defined by the agents. Economic decision rules are derived from the observed economic behaviour. The non-linear equations are solved numerically for a model economy. Numerical solutions for simple model economies suggest that some of the results of general equilibrium economics are consequences only of the equilibrium hypothesis. We show that perfect competition of selfish agents does not guarantee the stability of economic equilibrium, but cooperativity is needed, too.

  7. Optimisation: how to develop stake holder involvement

    International Nuclear Information System (INIS)

    Weiss, W.

    2003-01-01

    The Precautionary Principle is an internationally recognised approach for dealing with risk situations characterised by uncertainties and potentially irreversible damage. Since the late fifties, ICRP has adopted this prudent attitude because of the lack of scientific evidence concerning the existence of a threshold at low doses for stochastic effects. The 'linear, no-threshold' model and the 'optimisation of protection' principle have been developed as a pragmatic response for the management of the risk. The progress in epidemiology and radiobiology over the last decades has affirmed the initial assumption, and optimisation remains the appropriate response for the application of the precautionary principle in the context of radiological protection. The basic objective of optimisation is, for any source within the system of radiological protection, to maintain the level of exposure as low as reasonably achievable, taking into account social and economic factors. Methods, tools and procedures have been developed over the last two decades to put the optimisation principle into practice, with a central role given to cost-benefit analysis as a means to determine the optimised level of protection. However, with the advancement in the implementation of the principle, more emphasis was progressively given to good practice, as well as to the importance of controlling individual levels of exposure through the optimisation process. In the context of the revision of its present recommendations, the Commission is reinforcing the emphasis on protection of the individual with the adoption of an equity-based system that recognizes individual rights and a basic level of health protection. Another advancement is the role now recognised for 'stakeholder involvement' in the optimisation process as a means to improve the quality of the decision-aiding process for identifying and selecting protection actions accepted by all those involved. The paper

  8. Three-dimensional modelling and numerical optimisation of the W7-X ICRH antenna

    Energy Technology Data Exchange (ETDEWEB)

    Louche, F., E-mail: fabrice.louche@rma.ac.be [Laboratoire de physique des plasmas de l’ERM, Laboratorium voor plasmafysica van de KMS (LPP-ERM/KMS), Ecole Royale Militaire, Koninklijke Militaire School, Brussels (Belgium); Křivská, A.; Messiaen, A.; Ongena, J. [Laboratoire de physique des plasmas de l’ERM, Laboratorium voor plasmafysica van de KMS (LPP-ERM/KMS), Ecole Royale Militaire, Koninklijke Militaire School, Brussels (Belgium); Borsuk, V. [Institute of Energy and Climate Research – Plasma Physics, Forschungszentrum Juelich (Germany); Durodié, F.; Schweer, B. [Laboratoire de physique des plasmas de l’ERM, Laboratorium voor plasmafysica van de KMS (LPP-ERM/KMS), Ecole Royale Militaire, Koninklijke Militaire School, Brussels (Belgium)

    2015-10-15

    Highlights: • A simplified version of the ICRF antenna for the stellarator W7-X has been modelled with the 3D electromagnetic software Microwave Studio. This antenna can be tuned between 25 and 38 MHz with the help of adjustable capacitors. • In previous modelling work the front of the antenna was modelled with the help of 3D codes, while the capacitors were modelled as lumped elements with a given DC capacitance. As this approach does not take into account the effect of the internal inductance, an MWS model of these capacitors has been developed. • The initial geometry does not permit operation at 38 MHz. By modifying some geometrical parameters of the front face, it was possible to increase the frequency band of the antenna, and to increase the maximum coupled power (by up to 25%) while accounting for the technical constraints on the capacitors. • The W7-X ICRH antenna must be operated at 25 and 38 MHz, and for various toroidal phasings of the strap RF currents. For the considered duty cycle it is shown that, thanks to a special procedure based on minimisation techniques, it is possible to define a satisfactory optimum geometry in agreement with the specifications of the capacitors. • The various steps of the optimisation are validated with TOPICA simulations. For a given density profile the RF power coupling expectancy can be precisely computed. - Abstract: Ion Cyclotron Resonance Heating (ICRH) is a promising heating and wall conditioning method considered for the W7-X stellarator and a dedicated ICRH antenna has been designed. This antenna must perform several tasks in a long-term physics programme: fast particle generation, heating at high densities, current drive and ICRH physics studies. Various minority heating scenarios are considered and two frequency bands will be used. In the present work a design for the low-frequency range (25–38 MHz) only is developed. The antenna is made of 2 straps with tap feeds and tuning capacitors with DC capacitance in

  9. Optimising a Model of Minimum Stock Level Control and a Model of Standing Order Cycle in Selected Foundry Plant

    Directory of Open Access Journals (Sweden)

    Szymszal J.

    2013-09-01

    Full Text Available It has been found that the area where one can look for significant reserves in procurement logistics is the rational management of the stock of raw materials. Currently, the main purpose of projects which increase the efficiency of inventory management is to rationalise all the activities in this area, taking into account and at the same time minimising the total inventory costs. The paper presents a method for optimising the inventory level of raw materials under foundry plant conditions using two different control models. The first model is based on the estimate of an optimal level of the minimum emergency stock of raw materials, giving information about the need for an order to be placed immediately and about the optimal size of consignments ordered after the minimum emergency level has been reached. The second model is based on the estimate of a maximum inventory level of raw materials and an optimal order cycle. Optimisation of the presented models has been based on the prior selection and use of rational methods for forecasting the time series of deliveries of a chosen auxiliary material (ceramic filters) to a casting plant, including forecasting the mean size of the delivered batch of products and its standard deviation.
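
    The two control models can be illustrated with the textbook reorder-point and periodic-review formulas they build on. The figures below are placeholders, not data from the foundry study, and the normal-demand service-level assumption is ours.

    ```python
    import math

    # Illustrative figures for a single auxiliary material (e.g. ceramic
    # filters); all numbers are placeholders, not values from the paper.
    mean_daily_demand = 120.0      # units/day (forecast mean)
    sd_daily_demand = 25.0         # units/day (forecast standard deviation)
    lead_time_days = 5.0
    review_period_days = 14.0
    z = 1.65                       # ~95 % service level

    # Model 1: minimum ("emergency") stock level that triggers an immediate order.
    safety_stock = z * sd_daily_demand * math.sqrt(lead_time_days)
    reorder_point = mean_daily_demand * lead_time_days + safety_stock

    # Model 2: maximum stock level for a standing order cycle (periodic review).
    cover = review_period_days + lead_time_days
    max_stock = mean_daily_demand * cover + z * sd_daily_demand * math.sqrt(cover)

    print(f"reorder point : {reorder_point:.0f} units")
    print(f"maximum stock : {max_stock:.0f} units")
    ```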

  10. Gaussian random bridges and a geometric model for information equilibrium

    Science.gov (United States)

    Mengütürk, Levent Ali

    2018-03-01

    The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T , 0) -bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.

  11. Combining simulation and multi-objective optimisation for equipment quantity optimisation in container terminals

    OpenAIRE

    Lin, Zhougeng

    2013-01-01

    This thesis proposes a combination framework to integrate simulation and multi-objective optimisation (MOO) for container terminal equipment optimisation. It addresses how the strengths of simulation and multi-objective optimisation can be integrated to find high quality solutions for multiple objectives with low computational cost. Three structures for the combination framework are proposed respectively: pre-MOO structure, integrated MOO structure and post-MOO structure. The applications of ...

  12. Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte; Olsen, Flemmming Ove

    1996-01-01

    The problem in optimising the laser cutting process is outlined. Basic optimisation criteria and principles for adapting an optimisation method, the simplex method, are presented. The results of implementing a response function in the optimisation are discussed with respect to the quality as well...

  13. PHYSICAL-MATHEMATICAL SCIENCE MECHANICS SIMULATION CHALLENGES IN OPTIMISING THEORETICAL METAL CUTTING TASKS

    Directory of Open Access Journals (Sweden)

    Rasul V. Guseynov

    2017-01-01

    Full Text Available Abstract. Objectives In the article, problems in the optimisation of machining operations, which should provide end-unit production of the required quality at a minimum processing cost, are addressed. Methods Increasing the effectiveness of experimental research was achieved through the use of mathematical methods for planning experiments for optimising metal cutting tasks. The minimal processing cost model, in which the objective function is polynomial, is adopted as a criterion for the selection of optimal parameters. Results Polynomial models of the influence of the angles φ, α, γ on the torque applied when cutting threads in various steels are constructed. Optimum values of the geometrical tool parameters were obtained using the criterion of minimum cutting forces during processing. The high stability of tools with optimal geometric parameters is demonstrated. It is shown that the use of experimental planning methods allows the optimisation of cutting parameters. In optimising solutions to metal cutting problems, it is found to be expedient to use multifactor experimental planning methods and to select the cutting force as the optimisation parameter when determining tool geometry. Conclusion The joint use of geometric programming and experiment planning methods in order to optimise the cutting parameters significantly increases the efficiency of technological metal processing approaches.
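
    As a hedged illustration of the general approach (a polynomial response-surface model of torque versus the tool angles φ, α, γ, minimised over the feasible angle ranges), the sketch below fits a quadratic model to synthetic data and searches it on a grid. The data, angle ranges and quadratic form are invented for illustration and are not the paper's experimental results.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "experiment": torque M as a function of (phi, alpha, gamma).
    X = rng.uniform([40, 4, -5], [80, 12, 15], size=(27, 3))   # design points

    def true_torque(p):
        phi, al, ga = p[:, 0], p[:, 1], p[:, 2]
        return 2.0 + 0.02*(phi - 60)**2 + 0.15*(al - 8)**2 + 0.05*(ga - 5)**2

    M = true_torque(X) + rng.normal(0.0, 0.1, len(X))          # "measured" torque

    # Fit a full quadratic polynomial model by least squares.
    def features(p):
        phi, al, ga = p[:, 0], p[:, 1], p[:, 2]
        return np.column_stack([np.ones(len(p)), phi, al, ga,
                                phi*al, phi*ga, al*ga,
                                phi**2, al**2, ga**2])

    coef, *_ = np.linalg.lstsq(features(X), M, rcond=None)

    # Grid search for the angle combination minimising the predicted torque.
    grid = np.array(np.meshgrid(np.linspace(40, 80, 41),
                                np.linspace(4, 12, 33),
                                np.linspace(-5, 15, 41))).reshape(3, -1).T
    pred = features(grid) @ coef
    best = grid[np.argmin(pred)]
    print("predicted optimum (phi, alpha, gamma):", np.round(best, 1))
    ```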

  14. Dividend taxation in an infinite-horizon general equilibrium model

    OpenAIRE

    Pham, Ngoc-Sang

    2017-01-01

    We consider an infinite-horizon general equilibrium model with heterogeneous agents and financial market imperfections. We investigate the role of dividend taxation on economic growth and asset price. The optimal dividend taxation is also studied.

  15. Equilibrium and nonequilibrium attractors for a discrete, selection-migration model

    Science.gov (United States)

    James F. Selgrade; James H. Roberds

    2003-01-01

    This study presents a discrete-time model for the effects of selection and immigration on the demographic and genetic compositions of a population. Under biologically reasonable conditions, it is shown that the model always has an equilibrium. Although equilibria for similar models without migration must have real eigenvalues, for this selection-migration model we...

  16. Vapor-liquid equilibrium thermodynamics of N2 + CH4 - Model and Titan applications

    Science.gov (United States)

    Thompson, W. R.; Zollweg, John A.; Gabis, David H.

    1992-01-01

    A thermodynamic model is presented for vapor-liquid equilibrium in the N2 + CH4 system, which is implicated in calculations of the Titan tropospheric clouds' vapor-liquid equilibrium thermodynamics. This model imposes constraints on the consistency of experimental equilibrium data, and embodies temperature effects by encompassing enthalpy data; it readily calculates the saturation criteria, condensate composition, and latent heat for a given pressure-temperature profile of the Titan atmosphere. The N2 content of condensate is about half of that computed from Raoult's law, and about 30 percent greater than that computed from Henry's law.

  17. Discussions on the non-equilibrium effects in the quantitative phase field model of binary alloys

    International Nuclear Information System (INIS)

    Zhi-Jun, Wang; Jin-Cheng, Wang; Gen-Cang, Yang

    2010-01-01

    All quantitative phase field models try to get rid of the artificial factors of solutal drag, interface diffusion and interface stretch in the diffuse interface. These artificial non-equilibrium effects, due to the introduction of a diffuse interface, are analysed based on the thermodynamic status across the diffuse interface in the quantitative phase field model of binary alloys. Results indicate that the non-equilibrium effects are related to the negative driving force in the local region on the solid side of the diffuse interface. The negative driving force results from the fact that the phase field model is derived from an equilibrium condition but is used to simulate a non-equilibrium solidification process. The interface thickness dependence of the non-equilibrium effects and its restriction on large-scale simulation are also discussed. (cross-disciplinary physics and related areas of science and technology)

  18. Computing diffusivities from particle models out of equilibrium

    Science.gov (United States)

    Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia

    2018-04-01

    A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and have Gaussian fluctuations but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow the comparison with analytic solutions.
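
    For the simplest of the three test cases, independent random walkers, the diffusivity can be checked directly from the mean-squared displacement. The sketch below is only this elementary consistency check, not the fluctuation-based estimator developed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Independent random walkers on a 1-D lattice: each walker hops left or right
    # with probability 1/2 per step, so the macroscopic diffusivity is D = 1/2
    # in lattice units.
    n_walkers, n_steps = 10_000, 400
    steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
    positions = steps.cumsum(axis=1)

    msd = (positions ** 2).mean(axis=0)            # <x^2>(t)
    t = np.arange(1, n_steps + 1)
    D_est = np.polyfit(t, msd, 1)[0] / 2.0         # <x^2> = 2 D t in one dimension

    print(f"estimated D = {D_est:.3f}  (exact value 0.5)")
    ```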

  19. Once more on the equilibrium-point hypothesis (lambda model) for motor control.

    Science.gov (United States)

    Feldman, A G

    1986-03-01

    The equilibrium control hypothesis (lambda model) is considered with special reference to the following concepts: (a) the length-force invariant characteristic (IC) of the muscle together with central and reflex systems subserving its activity; (b) the tonic stretch reflex threshold (lambda) as an independent measure of central commands descending to alpha and gamma motoneurons; (c) the equilibrium point, defined in terms of lambda, IC and static load characteristics, which is associated with the notion that posture and movement are controlled by a single mechanism; and (d) the muscle activation area (a reformulation of the "size principle")--the area of kinematic and command variables in which a rank-ordered recruitment of motor units takes place. The model is used for the interpretation of various motor phenomena, particularly electromyographic patterns. The stretch reflex in the lambda model has no mechanism to follow-up a certain muscle length prescribed by central commands. Rather, its task is to bring the system to an equilibrium, load-dependent position. Another currently popular version defines the equilibrium point concept in terms of alpha motoneuron activity alone (the alpha model). Although the model imitates (as does the lambda model) spring-like properties of motor performance, it nevertheless is inconsistent with a substantial data base on intact motor control. An analysis of alpha models, including their treatment of motor performance in deafferented animals, reveals that they suffer from grave shortcomings. It is concluded that parameterization of the stretch reflex is a basis for intact motor control. Muscle deafferentation impairs this graceful mechanism though it does not remove the possibility of movement.

  20. Estimating Dynamic Equilibrium Models using Macro and Financial Data

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel

    We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... of the estimators and estimate the model using 20 years of U.S. macro and financial data....

  1. Is neoclassical microeconomics formally valid? An approach based on an analogy with equilibrium thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Sousa, Tania; Domingos, Tiago [Environment and Energy Section, DEM, Instituto Superior Tecnico, Avenida Rovisco Pais, 1, 1049-001 Lisboa (Portugal)

    2006-06-10

    The relation between Thermodynamics and Economics is a paramount issue in Ecological Economics. Two different levels can be distinguished when discussing it: formal and substantive. At the formal level, a mathematical framework is used to describe both thermodynamic and economic systems. At the substantive level, thermodynamic laws are applied to economic processes. In Ecological Economics, there is a widespread claim that neoclassical economics has the same mathematical formulation as classical mechanics and is therefore fundamentally flawed because: (1) utility does not obey a conservation law as energy does; (2) an equilibrium theory cannot be used to study irreversible processes. Here, we show that neoclassical economics is based on a wrong formulation of classical mechanics, being in fact formally analogous to equilibrium thermodynamics. The similarity between both formalisms, namely that they are both cases of constrained optimisation, is easily perceived when thermodynamics is looked upon using the Tisza-Callen axiomatisation. In this paper, we take the formal analogy between equilibrium thermodynamics and economic systems far enough to answer the formal criticisms, proving that the formalism of neoclassical economics has irreversibility embedded in it. However, the formal similarity between equilibrium thermodynamics and neoclassical microeconomics does not mean that economic models are in accordance with mass, energy and entropy balance equations. In fact, neoclassical theory suffers from flaws in the substantive integration with thermodynamic laws as has already been fully demonstrated by valuable work done by ecological economists in this field. (author)

  2. Model-Free Trajectory Optimisation for Unmanned Aircraft Serving as Data Ferries for Widespread Sensors

    Directory of Open Access Journals (Sweden)

    Ben Pearre

    2012-10-01

    Full Text Available Given multiple widespread stationary data sources such as ground-based sensors, an unmanned aircraft can fly over the sensors and gather the data via a wireless link. Performance criteria for such a network may incorporate costs such as trajectory length for the aircraft or the energy required by the sensors for radio transmission. Planning is hampered by the complex vehicle and communication dynamics and by uncertainty in the locations of sensors, so we develop a technique based on model-free learning. We present a stochastic optimisation method that allows the data-ferrying aircraft to optimise data collection trajectories through an unknown environment in situ, obviating the need for system identification. We compare two trajectory representations, one that learns near-optimal trajectories at low data requirements but that fails at high requirements, and one that gives up some performance in exchange for a data collection guarantee. With either encoding the ferry is able to learn significantly improved trajectories compared with alternative heuristics. To demonstrate the versatility of the model-free learning approach, we also learn a policy to minimise the radio transmission energy required by the sensor nodes, allowing prolonged network lifetime.

  3. Model-based online optimisation. Pt. 1: active learning; Modellbasierte Online-Optimierung moderner Verbrennungsmotoren. T. 1: Aktives Lernen

    Energy Technology Data Exchange (ETDEWEB)

    Poland, J.; Knoedler, K.; Zell, A. [Tuebingen Univ. (Germany). Lehrstuhl fuer Rechnerarchitektur; Fleischhauer, T.; Mitterer, A.; Ullmann, S. [BMW Group (Germany)

    2003-05-01

    This two-part article presents the model-based optimisation algorithm 'mbminimize'. It was developed in a cooperative project of the University of Tuebingen and the BMW Group for the purpose of optimising internal combustion engines online on the engine test bed. The first part concentrates on the basic algorithmic design, as well as on modelling, experimental design and active learning. The second part will discuss strategies for dealing with limits such as knocking. (orig.)

  4. The restricted stochastic user equilibrium with threshold model: Large-scale application and parameter testing

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Nielsen, Otto Anker; Watling, David P.

    2017-01-01

    Equilibrium model (DUE), by combining the strengths of the Boundedly Rational User Equilibrium model and the Restricted Stochastic User Equilibrium model (RSUE). Thereby, the RSUET model reaches an equilibrated solution in which the flow is distributed according to Random Utility Theory among a consistently...... model improves the behavioural realism, especially for high congestion cases. Also, fast and well-behaved convergence to equilibrated solutions among non-universal choice sets is observed across different congestion levels, choice model scale parameters, and algorithm step sizes. Clearly, the results...... highlight that the RSUET outperforms the MNP SUE in terms of convergence, calculation time and behavioural realism. The choice set composition is validated by using 16,618 observed route choices collected by GPS devices in the same network and observing their reproduction within the equilibrated choice sets...

  5. Examples of equilibrium and non-equilibrium behavior in evolutionary systems

    Science.gov (United States)

    Soulier, Arne

    With this thesis, we want to shed some light into the darkness of our understanding of simply defined statistical mechanics systems and the surprisingly complex dynamical behavior they exhibit. We will do so by presenting in turn one equilibrium and then one non-equilibrium system with evolutionary dynamics. In part 1, we will present the seceder model, a newly developed system that cannot equilibrate. We will then study several properties of the system and obtain an idea of the richness of the dynamics of the seceder model, which is particularly impressive given the minimal amount of modeling necessary in its setup. In part 2, we will present extensions to the directed polymer in random media problem on a hypercube and its connection to the Eigen model of evolution. Our main interest will be the influence of time-dependent and time-independent changes in the fitness landscape viewed by an evolving population. This part contains the equilibrium dynamics. The stochastic models and the topic of evolution and non-equilibrium in general will allow us to point out similarities to the various lines of thought in game theory.

  6. Modeling equilibrium adsorption of organic micropollutants onto activated carbon

    KAUST Repository

    De Ridder, David J.; Villacorte, Loreen O.; Verliefde, Arne R. D.; Verberk, Jasper Q J C; Heijman, Bas G J; Amy, Gary L.; Van Dijk, Johannis C.

    2010-01-01

    to these properties occur in parallel, and their respective dominance depends on the solute properties as well as carbon characteristics. In this paper, a model based on multivariate linear regression is described that was developed to predict equilibrium carbon

  7. Modelling, simulation, and optimisation of a downflow entrained flow reactor for black liquor gasification

    Energy Technology Data Exchange (ETDEWEB)

    Marklund, Magnus [ETC Energitekniskt Centrum, Piteaa (Sweden)

    2003-12-01

    Black liquor, a by-product of the chemical pulping process, is an important liquid fuel in the pulp and paper industry. A potential technology for improving the recovery cycle of energy and chemicals contained in the liquid fuel is pressurised gasification of black liquor (PBLG). However, uncertainties about the reliability and robustness of the technology are preventing a large-scale market introduction. One important step towards greater trust in the process reliability is the development of simulation tools that can provide a better understanding of the process and improve performance through optimisation. In the beginning of 2001 a project was initiated in order to develop a simulation tool for an entrained-flow gasifier in PBLG based on CFD (Computational Fluid Dynamics). The aim has been to provide an advanced tool for a better understanding of process performance, to help with troubleshooting in the development plant, and for use in optimisation of a full-scale commercial gasifier. Furthermore, the project will also provide quantitative information on burner functionality through advanced laser-optical measurements by use of a Phase Doppler Anemometer (PDA). To this point in the current project, three different concept models have been developed. The work has been compiled in a thesis, 'Modelling and Simulation of Pressurised Black Liquor Gasification at High Temperature', presented at Luleå Univ. of Technology in Oct 2003. The construction of an atmospheric burner test rig has also been initiated. The main objective of the rig will be to quantify the atomisation performance of suitable burner nozzles for a PBLG gasifier, which can be used as input for the CFD model. The main conclusions from the modelling work done so far can be condensed into the following points: From the first modelling results it was concluded that a wide spray pattern is preferable with respect to the demand for long residence times for black liquor droplets and a low amount

  8. Adaptive behaviour and multiple equilibrium states in a predator-prey model.

    Science.gov (United States)

    Pimenov, Alexander; Kelly, Thomas C; Korobeinikov, Andrei; O'Callaghan, Michael J A; Rachinskii, Dmitrii

    2015-05-01

    There is evidence that multiple stable equilibrium states are possible in real-life ecological systems. Phenomenological mathematical models which exhibit such properties can be constructed rather straightforwardly. For instance, for a predator-prey system this result can be achieved through the use of a non-monotonic functional response for the predator. However, while the formal formulation of such a model is not a problem, the biological justification for such functional responses and models is usually inconclusive. In this note, we explore a conjecture that a multitude of equilibrium states can be caused by an adaptation of animal behaviour to changes of environmental conditions. In order to verify this hypothesis, we consider a simple predator-prey model, which is a straightforward extension of the classic Lotka-Volterra predator-prey model. In this model, we make the intuitively transparent assumption that the prey can change its mode of behaviour in response to the pressure of predation, choosing either "safe" or "risky" (or "business as usual") behaviour. In order to avoid a situation where one of the modes gives an absolute advantage, we introduce the concept of the "cost of a policy" into the model. A simple conceptual two-dimensional predator-prey model, which is minimal with respect to this property and does not rely on odd functional responses, higher dimensionality or behaviour change for the predator, exhibits two stable co-existing equilibrium states with basins of attraction separated by the separatrix of a saddle point. Copyright © 2015 Elsevier Inc. All rights reserved.
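
    The switching mechanism can be illustrated with a toy Lotka-Volterra-type system in which the prey reduces its exposure to predation when predator density is high, at a cost to its growth rate. The equations, the smooth switching function and all parameter values below are illustrative assumptions and are not the model analysed by the authors.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative sketch only: the prey moves towards a "safe" mode (reduced
    # predation exposure, reduced growth) as predator density y rises past a
    # threshold; "risky" behaviour is the plain Lotka-Volterra limit.
    r, a, b, d = 1.0, 1.0, 0.5, 0.6      # growth, predation, conversion, mortality
    cost, y_switch = 0.4, 0.8            # cost of safe behaviour, switching level

    def rhs(t, z):
        x, y = z                                               # prey, predator
        safe = 1.0 / (1.0 + np.exp(-10.0 * (y - y_switch)))    # 0 = risky, 1 = safe
        growth = r * (1.0 - cost * safe) * x
        predation = a * (1.0 - 0.7 * safe) * x * y
        return [growth - predation, b * predation - d * y]

    sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.5], max_step=0.1)
    print("state at t = 200:", np.round(sol.y[:, -1], 3))
    ```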

  9. Credit price optimisation within retail banking

    African Journals Online (AJOL)

    2014-02-14

    Feb 14, 2014 ... cost based pricing, where the price of a product or service is based on the .... function obtained from fitting a logistic regression model .... Note that the proposed optimisation approach below will allow us to also incorporate.

  10. Adiabatic equilibrium models for direct containment heating

    International Nuclear Information System (INIS)

    Pilch, M.; Allen, M.D.

    1991-01-01

    Probabilistic risk assessment (PRA) studies are being extended to include a wider spectrum of reactor plants than was considered in NUREG-1150. There is a need for simple direct containment heating (DCH) models that can be used for screening studies aimed at identifying potentially significant contributors to overall risk in individual nuclear power plants. This paper presents two adiabatic equilibrium models suitable for the task. The first, a single-cell model, places a true upper bound on DCH loads. This upper bound, however, often far exceeds reasonable expectations of containment loads based on CONTAIN calculations and experiment observations. In this paper, a two cell model is developed that captures the major mitigating feature of containment compartmentalization, thus providing more reasonable estimates of the containment load

  11. Optimised operation of an off-grid hybrid wind-diesel-battery system using genetic algorithm

    International Nuclear Information System (INIS)

    Gan, Leong Kit; Shek, Jonathan K.H.; Mueller, Markus A.

    2016-01-01

    Highlights: • Diesel generator’s operation is optimised in a hybrid wind-diesel-battery system. • Optimisation is performed using wind speed and load demand forecasts. • The objective is to maximise wind energy utilisation with limited battery storage. • A physical modelling approach (Simscape) is used to verify the mathematical model. • Sensitivity analyses are performed with synthesised wind and load forecast errors. - Abstract: In an off-grid hybrid wind-diesel-battery system, the diesel generator is often not utilised efficiently, thereby compromising its lifetime. In particular, the general rule of thumb of running the diesel generator at more than 40% of its rated capacity is often unmet. This is due to the variation in power demand and wind speed, the difference of which must be supplied by the diesel generator. In addition, the frequent start-stop of the diesel generator leads to additional mechanical wear and fuel wastage. This research paper proposes a novel control algorithm which optimises the operation of a diesel generator using a genetic algorithm. With a given day-ahead forecast of the local renewable energy resource and load demand, it is possible to optimise the operation of the diesel generator, subject to other pre-defined constraints. Thus, the utilisation of the renewable energy sources to supply electricity can be maximised. Usually, optimisation studies of a hybrid system are conducted through simple analytical modelling, coupled with a selected optimisation algorithm to seek the optimised solution. The obtained solution is not verified using a more realistic system model, for instance a physical modelling approach. This often raises the question of whether such an optimised operation is applicable in reality. In order to take a step further, model-based design using Simulink is employed in this research to perform a comparison through a physical modelling approach. The Simulink model has the capability to incorporate the electrical
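
    A minimal sketch of the idea (a binary genetic algorithm choosing the hourly on/off state of the diesel generator for a forecast day, penalising unserved load, low-load running and frequent start-stops) is given below. The system sizes, cost weights and GA settings are placeholder assumptions, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # One illustrative day, hourly resolution; all numbers are placeholders.
    hours = 24
    load = 30 + 10 * np.sin(np.linspace(0, 2 * np.pi, hours))       # kW
    wind = np.clip(rng.normal(20, 10, hours), 0, None)              # kW forecast
    residual = load - wind                                           # kW to cover
    gen_rated, batt_kwh, batt_power = 40.0, 60.0, 20.0

    def fitness(schedule):
        """Penalised cost of a binary on/off diesel schedule (lower is better)."""
        soc, cost = 0.5 * batt_kwh, 0.0
        for h in range(hours):
            gen = gen_rated if schedule[h] else 0.0
            net = gen - residual[h]                  # + surplus, - deficit (1 h)
            if net >= 0:                             # charge the battery
                soc = min(soc + min(net, batt_power), batt_kwh)
            else:                                    # discharge to cover deficit
                discharge = min(-net, batt_power, soc)
                soc -= discharge
                cost += 100.0 * (-net - discharge)   # unserved load, heavy penalty
            cost += 0.05 * gen                       # crude fuel proxy
            if schedule[h] and residual[h] < 0.4 * gen_rated:
                cost += 1.0                          # running below 40 % of rating
        starts = np.sum(np.diff(schedule.astype(int)) == 1)
        return cost + 2.0 * starts                   # discourage start-stops

    pop = rng.integers(0, 2, size=(40, hours)).astype(bool)
    for _generation in range(200):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[:20]]       # truncation selection
        children = parents.copy()
        cut = rng.integers(1, hours, size=20)
        for k in range(20):                          # one-point crossover + mutation
            mate = parents[rng.integers(20)]
            children[k, cut[k]:] = mate[cut[k]:]
            children[k] ^= rng.random(hours) < 0.05
        pop = np.vstack([parents, children])

    best = pop[np.argmin([fitness(ind) for ind in pop])]
    print("best on/off schedule:", best.astype(int))
    ```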

  12. Homogeneous non-equilibrium two-phase critical flow model

    International Nuclear Information System (INIS)

    Schroeder, J.J.; Vuxuan, N.

    1987-01-01

    An important aspect of nuclear and chemical reactor safety is the ability to predict the maximum or critical mass flow rate from a break or leak in a pipe system. At the beginning of such a blowdown, if the stagnation condition of the fluid is subcooled or slightly saturated, thermodynamic non-equilibrium exists downstream, e.g. the fluid becomes superheated to a degree determined by the liquid pressure. A simplified non-equilibrium model, explained in this report, is valid for rapidly decreasing pressure along the flow path. It presumes that the fluid has to be superheated by an amount governed by physical principles before it starts to flash into steam. The flow is assumed to be homogeneous, i.e. the steam and liquid velocities are equal. An adiabatic flow calculation mode (Fanno lines) is employed to evaluate the critical flow rate for long pipes. The model is found to describe critical flow tests satisfactorily. Good agreement is obtained with the large-scale Marviken tests as well as with small-scale experiments. (orig.)

  13. Pre-equilibrium nuclear reactions: An introduction to classical and quantum-mechanical models

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1999-01-01

    In studies of light-ion induced nuclear reactions one distinguishes three different mechanisms: direct, compound and pre-equilibrium nuclear reactions. These reaction processes can be subdivided according to time scales or, equivalently, the number of intranuclear collisions taking place before emission. Furthermore, each mechanism preferentially excites certain parts of the nuclear level spectrum and is characterized by different types of angular distributions. This presentation includes a description of the classical exciton model and other semi-classical models, with some selected results, as well as of quantum-mechanical models. A survey of classical versus quantum-mechanical pre-equilibrium reaction theory is presented, including practical applications

  14. A phase-field model for non-equilibrium solidification of intermetallics

    International Nuclear Information System (INIS)

    Assadi, H.

    2007-01-01

    Intermetallics may exhibit unique solidification behaviour, including slow growth kinetics, anomalous partitioning and formation of unusual growth morphologies, because of departure from local equilibrium. A phase-field model is developed and used to illustrate these non-equilibrium effects in solidification of a prototype B2 intermetallic phase. The model takes sublattice compositions as primary field variables, from which chemical long-range order is derived. The diffusive reactions between the two sublattices, and those between each sublattice and the liquid phase, are taken as 'internal' kinetic processes, which take place within control volumes of the system. The model can thus capture solute and disorder trapping effects, which are consistent, over a wide range of the solid/liquid interface thickness, with the predictions of the sharp-interface theory of solute and disorder trapping. The present model can also take account of solid-state ordering and thus illustrate the effects of chemical ordering on microstructure formation and crystal growth kinetics

  15. An optimised portfolio management model, incorporating best practices

    OpenAIRE

    2015-01-01

    M.Ing. (Engineering Management) Driving sustainability, optimising return on investments and cultivating a competitive market advantage are imperative for organisational success and growth. In order to achieve the business objectives and value proposition, effective management strategies must be efficiently implemented, monitored and controlled. Failure to do so ultimately results in financial loss due to increased capital and operational expenditure, schedule slippages, substandard deliv...

  16. Modified Ammonia Removal Model Based on Equilibrium and Mass Transfer Principles

    International Nuclear Information System (INIS)

    Shanableh, A.; Imteaz, M.

    2010-01-01

    Yoon et al. [1] presented an approximate mathematical model to describe ammonia removal from an experimental batch reactor system with gaseous headspace. The development of the model was initially based on assuming instantaneous equilibrium between ammonia in the aqueous and gas phases. In the model, a 'saturation factor, β' was defined as a constant and used to check whether the equilibrium assumption was appropriate. The authors used the trends established by the estimated β values to conclude that the equilibrium assumption was not valid. The authors presented valuable experimental results obtained using a carefully designed system, and the model used to analyze the results accounted for the following effects: speciation of ammonia between NH3 and NH4+ as a function of pH; temperature dependence of the reaction constants; and air flow rate. In this article, an alternative model based on the exact solution of the governing mass-balance differential equations was developed and used to describe ammonia removal without relying on the use of the saturation factor. The modified model was also extended to mathematically describe the pH dependence of the ammonia removal rate, in addition to accounting for the speciation of ammonia, the temperature dependence of the reaction constants, and the air flow rate. The modified model was used to extend the analysis of the original experimental data presented by Yoon et al. [1], and the results matched the theory in an excellent manner.
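
    The spirit of such a mass-balance model can be sketched as a first-order stripping equation acting only on the un-ionised NH3 fraction, with that fraction set by the NH4+/NH3 equilibrium at the operating pH. The rate constant and the pKa expression below are illustrative assumptions, not the calibrated model of either paper.

    ```python
    # Minimal sketch: only the un-ionised NH3 fraction is strippable, and that
    # fraction follows the NH4+/NH3 acid-base equilibrium at the given pH.
    def pKa(T_celsius):
        # Common literature form for the ammonium pKa (≈ 9.25 at 25 °C).
        return 0.09018 + 2729.92 / (273.15 + T_celsius)

    def simulate(C0, pH, T, k_strip, hours, dt=0.01):
        f_NH3 = 1.0 / (1.0 + 10.0 ** (pKa(T) - pH))    # un-ionised fraction
        t, C = 0.0, C0
        while t < hours:
            C -= k_strip * f_NH3 * C * dt              # dC/dt = -k * f_NH3 * C
            t += dt
        return C

    for pH in (8.0, 9.0, 10.0, 11.0):
        C_end = simulate(C0=100.0, pH=pH, T=25.0, k_strip=0.8, hours=6.0)
        print(f"pH {pH:4.1f}: {C_end:6.1f} mg/L remaining after 6 h")
    ```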

  17. An Equilibrium Model of User Generated Content

    OpenAIRE

    Dae-Yong Ahn; Jason A. Duan; Carl F. Mela

    2011-01-01

    This paper considers the joint creation and consumption of content on user generated content platforms (e.g., reviews or articles, chat, videos, etc.). On these platforms, users' utilities depend upon the participation of others; hence, users' expectations regarding the participation of others on the site becomes germane to their own involvement levels. Yet these beliefs are often assumed to be fixed. Accordingly, we develop a dynamic rational expectations equilibrium model of joint consumpti...

  18. Partition Function and Configurational Entropy in Non-Equilibrium States: A New Theoretical Model

    Directory of Open Access Journals (Sweden)

    Akira Takada

    2018-03-01

    Full Text Available A new model of non-equilibrium thermodynamic states has been investigated on the basis of the fact that all thermodynamic variables can be derived from partition functions. We have thus attempted to define partition functions for non-equilibrium conditions by introducing the concept of pseudo-temperature distributions. These pseudo-temperatures are configurational in origin and distinct from kinetic (phonon) temperatures because they refer to the particular fragments of the system with specific energies. This definition allows thermodynamic states to be described either for equilibrium or non-equilibrium conditions. In addition, a new formulation of an extended canonical partition function, internal energy and entropy are derived from this new temperature definition. With this new model, computational experiments are performed on simple non-interacting systems to investigate cooling and two distinct relaxational effects in terms of the time profiles of the partition function, internal energy and configurational entropy.

  19. Mathematical models and equilibrium in irreversible microeconomics

    Directory of Open Access Journals (Sweden)

    Anatoly M. Tsirlin

    2010-07-01

    Full Text Available A set of equilibrium states in a system consisting of economic agents, economic reservoirs, and firms is considered. Methods of irreversible microeconomics are used. We show that direct sale/purchase leads to an equilibrium state which depends upon the coefficients of supply/demand functions. To reach the unique equilibrium state it is necessary to add either monetary exchange or an intermediate firm.

  20. NON-EQUILIBRIUM IONIZATION MODELING OF THE CURRENT SHEET IN A SIMULATED SOLAR ERUPTION

    International Nuclear Information System (INIS)

    Shen Chengcai; Reeves, Katharine K.; Raymond, John C.; Murphy, Nicholas A.; Ko, Yuan-Kuen; Lin Jun; Mikić, Zoran; Linker, Jon A.

    2013-01-01

    The current sheet that extends from the top of flare loops and connects to an associated flux rope is a common structure in models of coronal mass ejections (CMEs). To understand the observational properties of CME current sheets, we generated predictions from a flare/CME model to be compared with observations. We use a simulation of a large-scale CME current sheet previously reported by Reeves et al. This simulation includes ohmic and coronal heating, thermal conduction, and radiative cooling in the energy equation. Using the results of this simulation, we perform time-dependent ionization calculations of the flow in a CME current sheet and construct two-dimensional spatial distributions of ionic charge states for multiple chemical elements. We use the filter responses from the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory and the predicted intensities of emission lines to compute the count rates for each of the AIA bands. The results show differences in the emission line intensities between equilibrium and non-equilibrium ionization. The current sheet plasma is underionized at low heights and overionized at large heights. At low heights in the current sheet, the intensities of the AIA 94 Å and 131 Å channels are lower for non-equilibrium ionization than for equilibrium ionization. At large heights, these intensities are higher for non-equilibrium ionization than for equilibrium ionization inside the current sheet. The assumption of ionization equilibrium would lead to a significant underestimate of the temperature low in the current sheet and overestimate at larger heights. We also calculate the intensities of ultraviolet lines and predict emission features to be compared with events from the Ultraviolet Coronagraph Spectrometer on the Solar and Heliospheric Observatory, including a low-intensity region around the current sheet corresponding to this model

  1. Validation of vibration-dissociation coupling models in hypersonic non-equilibrium separated flows

    Science.gov (United States)

    Shoev, G.; Oblapenko, G.; Kunova, O.; Mekhonoshina, M.; Kustova, E.

    2018-03-01

    The validation of recently developed models of vibration-dissociation coupling is discussed in application to numerical solutions of the Navier-Stokes equations in a two-temperature approximation for a binary N2/N flow. Vibrational-translational relaxation rates are computed using the Landau-Teller formula generalized for strongly non-equilibrium flows obtained in the framework of the Chapman-Enskog method. Dissociation rates are calculated using the modified Treanor-Marrone model taking into account the dependence of the model parameter on the vibrational state. The solutions are compared to those obtained using traditional Landau-Teller and Treanor-Marrone models, and it is shown that for high-enthalpy flows, the traditional and recently developed models can give significantly different results. The computed heat flux and pressure on the surface of a double cone are in a good agreement with experimental data available in the literature on low-enthalpy flow with strong thermal non-equilibrium. The computed heat flux on a double wedge qualitatively agrees with available data for high-enthalpy non-equilibrium flows. Different contributions to the heat flux calculated using rigorous kinetic theory methods are evaluated. Quantitative discrepancy of numerical and experimental data is discussed.
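
    The baseline relaxation model referred to above, the Landau-Teller equation, can be integrated in a few lines for a constant-temperature bath. The characteristic vibrational temperature, the fixed relaxation time and the harmonic-oscillator closure are assumptions made here for illustration; the paper's generalised rates are not reproduced.

    ```python
    import numpy as np

    # Classical Landau-Teller relaxation towards the equilibrium vibrational energy:
    #     de_v/dt = (e_v^eq(T) - e_v) / tau
    theta_v = 3393.0          # K, characteristic vibrational temperature of N2 (assumed)
    k_B = 1.380649e-23        # J/K

    def e_eq(T):
        """Mean vibrational energy per molecule of a harmonic oscillator."""
        return k_B * theta_v / (np.exp(theta_v / T) - 1.0)

    T_bath = 8000.0           # K, translational temperature behind a shock (assumed)
    tau = 2.0e-6              # s, assumed constant relaxation time
    e_v = e_eq(300.0)         # start from room-temperature vibrational energy

    dt, t_end = 1.0e-8, 1.0e-5
    for _ in range(int(t_end / dt)):
        e_v += dt * (e_eq(T_bath) - e_v) / tau

    print(f"e_v/e_eq(T_bath) after {t_end*1e6:.0f} us: {e_v / e_eq(T_bath):.3f}")
    ```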

  2. Optimisation of Marine Boilers using Model-based Multivariable Control

    DEFF Research Database (Denmark)

    Solberg, Brian

    Traditionally, marine boilers have been controlled using classical single loop controllers. To optimise marine boiler performance, reduce new installation time and minimise the physical dimensions of these large steel constructions, a more comprehensive and coherent control strategy is needed....... This research deals with the application of advanced control to a specific class of marine boilers combining well-known design methods for multivariable systems. This thesis presents contributions for modelling and control of the one-pass smoke tube marine boilers as well as for hybrid systems control. Much...... of the focus has been directed towards water level control which is complicated by the nature of the disturbances acting on the system as well as by low frequency sensor noise. This focus was motivated by an estimated large potential to minimise the boiler geometry by reducing water level fluctuations...

  3. Radiative-convective equilibrium model intercomparison project

    Science.gov (United States)

    Wing, Allison A.; Reed, Kevin A.; Satoh, Masaki; Stevens, Bjorn; Bony, Sandrine; Ohno, Tomoki

    2018-03-01

    RCEMIP, an intercomparison of multiple types of models configured in radiative-convective equilibrium (RCE), is proposed. RCE is an idealization of the climate system in which there is a balance between radiative cooling of the atmosphere and heating by convection. The scientific objectives of RCEMIP are three-fold. First, clouds and climate sensitivity will be investigated in the RCE setting. This includes determining how cloud fraction changes with warming and the role of self-aggregation of convection in climate sensitivity. Second, RCEMIP will quantify the dependence of the degree of convective aggregation and tropical circulation regimes on temperature. Finally, by providing a common baseline, RCEMIP will allow the robustness of the RCE state across the spectrum of models to be assessed, which is essential for interpreting the results found regarding clouds, climate sensitivity, and aggregation, and more generally, determining which features of tropical climate a RCE framework is useful for. A novel aspect and major advantage of RCEMIP is the accessibility of the RCE framework to a variety of models, including cloud-resolving models, general circulation models, global cloud-resolving models, single-column models, and large-eddy simulation models.

  4. Equilibrium statistical mechanics of lattice models

    CERN Document Server

    Lavis, David A

    2015-01-01

    Most interesting and difficult problems in equilibrium statistical mechanics concern models which exhibit phase transitions. For graduate students and more experienced researchers this book provides an invaluable reference source of approximate and exact solutions for a comprehensive range of such models. Part I contains background material on classical thermodynamics and statistical mechanics, together with a classification and survey of lattice models. The geometry of phase transitions is described and scaling theory is used to introduce critical exponents and scaling laws. An introduction is given to finite-size scaling, conformal invariance and Schramm—Loewner evolution. Part II contains accounts of classical mean-field methods. The parallels between Landau expansions and catastrophe theory are discussed and Ginzburg—Landau theory is introduced. The extension of mean-field theory to higher-orders is explored using the Kikuchi—Hijmans—De Boer hierarchy of approximations. In Part III the use of alge...

  5. A Bayesian Approach for Sensor Optimisation in Impact Identification

    Directory of Open Access Journals (Sweden)

    Vincenzo Mallardo

    2016-11-01

    Full Text Available This paper presents a Bayesian approach for optimizing the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination aimed at locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence.

  6. Optimised Renormalisation Group Flows

    CERN Document Server

    Litim, Daniel F

    2001-01-01

    Exact renormalisation group (ERG) flows interpolate between a microscopic or classical theory and the corresponding macroscopic or quantum effective theory. For most problems of physical interest, the efficiency of the ERG is constrained due to unavoidable approximations. Approximate solutions of ERG flows depend spuriously on the regularisation scheme which is determined by a regulator function. This is similar to the spurious dependence on the ultraviolet regularisation known from perturbative QCD. Providing a good control over approximated ERG flows is at the root for reliable physical predictions. We explain why the convergence of approximate solutions towards the physical theory is optimised by appropriate choices of the regulator. We study specific optimised regulators for bosonic and fermionic fields and compare the optimised ERG flows with generic ones. This is done up to second order in the derivative expansion at both vanishing and non-vanishing temperature. An optimised flow for a ``proper-time ren...

  7. Particle Swarm Optimisation with Spatial Particle Extension

    DEFF Research Database (Denmark)

    Krink, Thiemo; Vesterstrøm, Jakob Svaneborg; Riget, Jacques

    2002-01-01

    In this paper, we introduce spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation. The standard PSO and the new model (SEPSO) are compared w.r.t. performance on well-studied benchmark problems. We show that the SEPSO indeed managed...
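
    A rough sketch of the idea (standard PSO plus a crude "spatial extension" in which particles that collide have their velocities reversed so they bounce apart instead of clustering) is shown below on the Rastrigin benchmark. The bounce rule and all parameter values are assumptions for illustration and do not reproduce the SEPSO variants studied in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def rastrigin(x):                       # classic multimodal benchmark
        return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

    dim, n, iters = 10, 30, 500
    w, c1, c2, radius = 0.72, 1.49, 1.49, 0.5    # standard weights + particle radius

    x = rng.uniform(-5.12, 5.12, (n, dim))
    v = rng.uniform(-1, 1, (n, dim))
    pbest, pbest_val = x.copy(), rastrigin(x)
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

        # Crude "spatial extension": if two particles come within 2*radius of
        # each other, reverse their velocities so they bounce apart.
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        collided = np.any((d < 2 * radius) & (d > 0), axis=1)
        v[collided] *= -1.0

        x = x + v
        val = rastrigin(x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print(f"best value found: {pbest_val.min():.4f}")
    ```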

  8. Natural Erosion of Sandstone as Shape Optimisation.

    Science.gov (United States)

    Ostanin, Igor; Safonov, Alexander; Oseledets, Ivan

    2017-12-11

    Natural arches, pillars and other exotic sandstone formations have always been attracting attention for their unusual shapes and amazing mechanical balance that leave a strong impression of intelligent design rather than the result of a stochastic process. It has been recently demonstrated that these shapes could have been the result of the negative feedback between stress and erosion that originates in fundamental laws of friction between the rock's constituent particles. Here we present a deeper analysis of this idea and bridge it with the approaches utilized in shape and topology optimisation. It appears that the processes of natural erosion, driven by stochastic surface forces and Mohr-Coulomb law of dry friction, can be viewed within the framework of local optimisation for minimum elastic strain energy. Our hypothesis is confirmed by numerical simulations of the erosion using the topological-shape optimisation model. Our work contributes to a better understanding of stochastic erosion and feasible landscape formations that could be found on Earth and beyond.

  9. Models of direct reactions and quantum pre-equilibrium for nucleon scattering on spherical nuclei

    International Nuclear Information System (INIS)

    Dupuis, M.

    2006-01-01

    When a nucleon collides with a target nucleus, several reactions may occur: elastic and inelastic scatterings, charge exchange... In order to describe these reactions, different models are involved: the direct reaction, pre-equilibrium and compound nucleus models. Our goal is to study, within a quantum framework and without any adjustable parameter, the direct and pre-equilibrium reactions for nucleon scattering off doubly closed-shell nuclei. We first consider direct reactions: we study nucleon scattering with the Melbourne G-matrix, which represents the interaction between the projectile and one target nucleon, and with random phase approximation (RPA) wave functions which describe all target states. This is a fully microscopic approach since no adjustable parameters are involved. A second part is dedicated to the study of nucleon inelastic scattering with large energy transfer, which necessarily involves the pre-equilibrium mechanism. Several models have been developed in the past to deal with pre-equilibrium. They start from the Born expansion of the transition amplitude associated with the inelastic process and they use several approximations which have not yet been tested. We have compared second-order cross sections calculated with and without these approximations. Our results allow us to criticize some of these approximations and give several directions to improve the quantum pre-equilibrium models. (author)

  10. Computer Based Optimisation Rutines

    DEFF Research Database (Denmark)

    Dragsted, Birgitte; Olsen, Flemming Ove

    1996-01-01

    In this paper, the need for optimisation methods for the laser cutting process is identified in three different situations. The demands placed on the optimisation methods in these situations are presented, and one method is suggested for each situation. The adaptation and implementation of the methods...

  11. Optimal Optimisation in Chemometrics

    NARCIS (Netherlands)

    Hageman, J.A.

    2004-01-01

    The use of global optimisation methods is not straightforward, especially for the more difficult optimisation problems. Solutions have to be found for items such as the evaluation function, representation, step function and meta-parameters, before any useful results can be obtained. This thesis aims

  12. Optimisation of compressive strength of periwinkle shell aggregate

    African Journals Online (AJOL)

    user

    2017-01-01

    In this paper, a regression model is developed to predict and optimise the compressive strength of periwinkle shell aggregate concrete using Scheffe's regression theory. The results obtained from the derived regression model agreed favourably with the experimental data. The model was tested for ...

  13. Optimising resource management in neurorehabilitation.

    Science.gov (United States)

    Wood, Richard M; Griffiths, Jeff D; Williams, Janet E; Brouwers, Jakko

    2014-01-01

    To date, little research has been published regarding the effective and efficient management of resources (beds and staff) in neurorehabilitation, despite being an expensive service in limited supply. To demonstrate how mathematical modelling can be used to optimise service delivery, by way of a case study at a major 21 bed neurorehabilitation unit in the UK. An automated computer program for assigning weekly treatment sessions is developed. Queue modelling is used to construct a mathematical model of the hospital in terms of referral submissions to a waiting list, admission and treatment, and ultimately discharge. This is used to analyse the impact of hypothetical strategic decisions on a variety of performance measures and costs. The project culminates in a hybridised model of these two approaches, since a relationship is found between the number of therapy hours received each week (scheduling output) and length of stay (queuing model input). The introduction of the treatment scheduling program has substantially improved timetable quality (meaning a better and fairer service to patients) and has reduced employee time expended in its creation by approximately six hours each week (freeing up time for clinical work). The queuing model has been used to assess the effect of potential strategies, such as increasing the number of beds or employing more therapists. The use of mathematical modelling has not only optimised resources in the short term, but has allowed the optimality of longer term strategic decisions to be assessed.
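
    The queuing side of such a study can be illustrated with a textbook M/M/c model of a ward: referrals arrive at rate lambda, each of c beds turns over at rate mu, and the Erlang-C formula gives the probability that a referral has to wait for a bed. This is a generic sketch, not the authors' hospital model, and the arrival rate, mean stay and bed count below are invented for illustration.

        import math

        def erlang_c(arrival_rate, service_rate, servers):
            """Probability that an arriving customer has to wait (M/M/c queue)."""
            a = arrival_rate / service_rate          # offered load in Erlangs
            rho = a / servers                        # utilisation, must be < 1
            summ = sum(a**k / math.factorial(k) for k in range(servers))
            top = a**servers / (math.factorial(servers) * (1 - rho))
            return top / (summ + top)

        # Illustrative figures only: 0.5 referrals/week, 12-week mean stay, 21 beds
        lam, mu, c = 0.5, 1 / 12.0, 21
        p_wait = erlang_c(lam, mu, c)
        mean_wait = p_wait / (c * mu - lam)          # expected queueing time (weeks)
        print(f"P(wait) = {p_wait:.3f}, mean wait = {mean_wait:.2f} weeks")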

  14. Real-time Modelling, Diagnostics and Optimised MPPT for Residential PV Systems

    DEFF Research Database (Denmark)

    Sera, Dezso

    The work documented in the thesis has been focused on two main sections. The first part is centred around Maximum Power Point Tracking (MPPT) techniques for photovoltaic arrays, optimised for fast-changing environmental conditions, and is described in Chapter 2. The second part is dedicated ... to diagnostic functions as an additional tool to maximise the energy yield of photovoltaic arrays (Chapter 4). Furthermore, mathematical models of PV panels and arrays have been developed and built (detailed in Chapter 3) for testing MPPT algorithms, and for diagnostic purposes. ... responsible for yield-reduction of residential photovoltaic systems. Combining the model calculations with measurements, a method to detect changes in the panels’ series resistance based on the slope of the I − V curve in the vicinity of open-circuit conditions and scaled to Standard Test Conditions (STC) ... In Chapter 2 an overview...
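
    As background to the MPPT part, a minimal perturb-and-observe tracker is sketched below. This is the textbook baseline algorithm rather than the improved methods developed in the thesis, and the stand-in P-V curve used to exercise it is a made-up illustrative function.

        def perturb_and_observe(measure_pv, v_start=30.0, dv=0.5, steps=50):
            """Textbook P&O MPPT: step the operating voltage towards higher power."""
            v, p_prev = v_start, measure_pv(v_start)
            direction = +1
            for _ in range(steps):
                v += direction * dv
                p = measure_pv(v)
                if p < p_prev:               # power dropped: reverse the perturbation
                    direction = -direction
                p_prev = p
            return v

        # Illustrative stand-in for a measured P-V curve (maximum near 35 V)
        pv_curve = lambda v: max(0.0, v * (8.0 - 0.1 * (v - 35.0) ** 2) / 8.0)
        print(perturb_and_observe(pv_curve))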

  15. Cost modelling in maintenance strategy optimisation for infrastructure assets with limited data

    International Nuclear Information System (INIS)

    Zhang, Wenjuan; Wang, Wenbin

    2014-01-01

    Our paper reports on the use of cost modelling in maintenance strategy optimisation for infrastructure assets. We present an original approach: the possibility of modelling even when the data and information usually required are not sufficient in quantity and quality. Our method makes use of subjective expert knowledge, and requires information gathered for only a small sample of assets to start with. Bayes linear methods are adopted to combine the subjective expert knowledge with the sample data to estimate the unknown model parameters of the cost model. When new information becomes available, Bayes linear methods also prove useful in updating these estimates. We use a case study from the rail industry to demonstrate our methods. The optimal maintenance strategy is obtained via simulation based on the estimated model parameters and the strategy with the least unit time cost is identified. When the optimal strategy is not followed due to insufficient funding, the future costs of recovering the degraded asset condition are estimated
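
    The Bayes linear update referred to here has a standard closed form. Given prior expectations and (co)variances for the unknown model parameters B and the observable data D, the adjusted expectation and variance are (generic textbook formulae, not reproduced from the paper):

    \[
    \mathrm{E}_D(B) = \mathrm{E}(B) + \mathrm{Cov}(B, D)\,\mathrm{Var}(D)^{-1}\bigl(D - \mathrm{E}(D)\bigr),
    \]
    \[
    \mathrm{Var}_D(B) = \mathrm{Var}(B) - \mathrm{Cov}(B, D)\,\mathrm{Var}(D)^{-1}\,\mathrm{Cov}(D, B).
    \]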

  16. Modeling Inflation Using a Non-Equilibrium Equation of Exchange

    Science.gov (United States)

    Chamberlain, Robert G.

    2013-01-01

    Inflation is a change in the prices of goods that takes place without changes in the actual values of those goods. The Equation of Exchange, formulated clearly in a seminal paper by Irving Fisher in 1911, establishes an equilibrium relationship between the price index P (also known as "inflation"), the economy's aggregate output Q (also known as "the real gross domestic product"), the amount of money available for spending M (also known as "the money supply"), and the rate at which money is reused V (also known as "the velocity of circulation of money"). This paper first offers a qualitative discussion of what can cause these factors to change and how those causes might be controlled, then develops a quantitative model of inflation based on a non-equilibrium version of the Equation of Exchange. Causal relationships are different from equations in that the effects of changes in the causal variables take time to play out, often over significant amounts of time. In the model described here, wages track prices, but only after a distributed lag. Prices change whenever the money supply, aggregate output, or the velocity of circulation of money change, but only after a distributed lag. Similarly, the money supply depends on the supplies of domestic and foreign money, which depend on the monetary base and a variety of foreign transactions, respectively. The spreading of delays mitigates the shocks of sudden changes to important inputs, but the most important aspect of this model is that delays, which often have dramatic consequences in dynamic systems, are explicitly incorporated. Keywords: macroeconomics, inflation, equation of exchange, non-equilibrium, Athena Project
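
    For reference, Fisher's equilibrium Equation of Exchange is MV = PQ. A non-equilibrium, distributed-lag version of the kind described here can be written schematically as below, where the weights w_k form the lag distribution; the specific lag form is an illustrative assumption, not the paper's exact formulation.

    \[
    M V = P Q \quad \text{(equilibrium form)},
    \qquad
    P(t) = \sum_{k \ge 0} w_k \, \frac{M(t-k)\, V(t-k)}{Q(t-k)}, \quad \sum_{k \ge 0} w_k = 1 \quad \text{(distributed-lag form)}.
    \]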

  17. Solid-Liquid equilibrium of n-alkanes using the Chain Delta Lattice Parameter model

    DEFF Research Database (Denmark)

    Coutinho, João A.P.; Andersen, Simon Ivar; Stenby, Erling Halfdan

    1996-01-01

    The formation of a solid phase in liquid mixtures with large paraffinic molecules is a phenomenon of interest in the petroleum, pharmaceutical, and biotechnological industries, among others. Efforts to model the solid-liquid equilibrium in these systems have been mainly empirical and with different... degrees of success. An attempt to describe the equilibrium between the high-temperature form of a paraffinic solid solution, commonly known as the rotator phase, and the liquid phase is performed. The Chain Delta Lattice Parameter (CDLP) model is developed, allowing a successful description of the solid-liquid... equilibrium of n-alkanes ranging from n-C_20 to n-C_40. The model is further modified to achieve a more correct temperature dependence, because it severely underestimates the excess enthalpy. It is shown that the ratio of excess enthalpy and entropy for n-alkane solid solutions, as happens for other solid...

  18. A development of multi-Species mass transport model considering thermodynamic phase equilibrium

    DEFF Research Database (Denmark)

    Hosokawa, Yoshifumi; Yamada, Kazuo; Johannesson, Björn

    2008-01-01

    In this paper, a multi-species mass transport model, which can predict the time-dependent variation of pore solution and solid-phase composition due to mass transport into the hardened cement paste, has been developed. Since most of the multi-species models established previously, based... on the Poisson-Nernst-Planck theory, did not involve the modelling of chemical processes, the transport model has been coupled to a thermodynamic equilibrium model in this study. By this coupling, the multi-species model can simulate many different behaviours in hardened cement paste, such as: (i) variation in solid-phase composition when using different types of cement, (ii) physicochemical evaluation of steel corrosion initiation behaviour by calculating the molar ratio of chloride ion to hydroxide ion [Cl]/[OH] in pore solution, and (iii) complicated changes of solid-phase composition caused...

  19. HVAC system optimisation-in-building section

    Energy Technology Data Exchange (ETDEWEB)

    Lu, L.; Cai, W.; Xie, L.; Li, S.; Soh, Y.C. [School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore (Singapore)

    2004-07-01

    This paper presents a practical method to optimise the in-building section of centralised Heating, Ventilation and Air-Conditioning (HVAC) systems, which consists of indoor air loops and chilled water loops. First, through component characteristic analysis, mathematical models associated with cooling loads and energy consumption for heat exchangers and energy-consuming devices are established. By considering the variation of the cooling load of each end user, an adaptive neuro-fuzzy inference system (ANFIS) is employed to model duct and pipe networks and obtain optimal differential pressure (DP) set points based on limited sensor information. A mixed-integer nonlinear constrained optimisation of system energy is formulated and solved by a modified genetic algorithm. The main feature of our paper is a systematic approach to optimising the overall system energy consumption rather than that of individual components. A simulation study for a typical centralised HVAC system is provided to compare the proposed optimisation method with traditional ones. The results show that the proposed method indeed improves the system performance significantly. (author)

  20. Foundations and models of pre-equilibrium decay

    International Nuclear Information System (INIS)

    Bunakov, V.E.

    1980-01-01

    A review is given of the presently existing microscopic, semi-phenomenological and phenomenological models used for the description of nuclear reactions. Their advantages and drawbacks are analyzed. Special attention is given to the analysis of phenomenological models of pre-equilibrium decay based on the use of master equations (time-dependent versions of exciton models, intranuclear cascade, etc.). A version of the unified theory of nuclear reactions is discussed which makes use of quantum master equations for finite open systems. The conditions are formulated for the derivation of these equations from the time-dependent Schroedinger equation for the many-body problem. The various models of nuclear reactions used in practice are shown to be approximate solutions of master equations for finite open systems. From this point of view, an analysis is carried out of these models' reliability in describing experimental data. Possible modifications are considered which provide for better agreement between the different models and for a more exact description of experimental data. (author)

  1. Time varying acceleration coefficients particle swarm optimisation (TVACPSO): A new optimisation algorithm for estimating parameters of PV cells and modules

    International Nuclear Information System (INIS)

    Jordehi, Ahmad Rezaee

    2016-01-01

    Highlights: • A modified PSO has been proposed for parameter estimation of PV cells and modules. • In the proposed modified PSO, acceleration coefficients are changed during the run. • The proposed modified PSO mitigates the premature convergence problem. • The parameter estimation problem has been solved for both PV cells and PV modules. • The results show that the proposed PSO outperforms other state-of-the-art algorithms. - Abstract: Estimating circuit model parameters of PV cells/modules represents a challenging problem. The PV cell/module parameter estimation problem is typically translated into an optimisation problem and solved by metaheuristic optimisation algorithms. Particle swarm optimisation (PSO) is considered a popular and well-established optimisation algorithm. Despite all its advantages, PSO suffers from a premature convergence problem, meaning that it may get trapped in local optima. Personal and social acceleration coefficients are two control parameters that, due to their effect on explorative and exploitative capabilities, play important roles in the computational behavior of PSO. In this paper, in an attempt toward premature convergence mitigation in PSO, its personal acceleration coefficient is decreased during the course of the run, while its social acceleration coefficient is increased. In this way, an appropriate tradeoff between the explorative and exploitative capabilities of PSO is established during the course of the run and the premature convergence problem is significantly mitigated. The results vividly show that in parameter estimation of PV cells and modules, the proposed time-varying acceleration coefficients PSO (TVACPSO) offers more accurate parameters than conventional PSO, the teaching-learning-based optimisation (TLBO) algorithm, the imperialistic competitive algorithm (ICA), grey wolf optimisation (GWO), the water cycle algorithm (WCA), pattern search (PS) and the Newton algorithm. For validation of the proposed methodology, parameter estimation has been done both for
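
    The core modification can be sketched in a few lines: the personal (cognitive) coefficient c1 is ramped down linearly over the run while the social coefficient c2 is ramped up, before the usual PSO velocity update is applied. The start and end values below are illustrative assumptions, not the tuned values of the paper.

        def tvac_coefficients(it, max_it, c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5):
            """Linearly vary PSO acceleration coefficients during the run."""
            frac = it / max_it
            c1 = c1_start + (c1_end - c1_start) * frac   # cognitive term shrinks
            c2 = c2_start + (c2_end - c2_start) * frac   # social term grows
            return c1, c2

        # e.g. halfway through a run: balanced exploration/exploitation
        print(tvac_coefficients(50, 100))   # -> (1.5, 1.5)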

  2. Quantity Constrained General Equilibrium

    NARCIS (Netherlands)

    Babenko, R.; Talman, A.J.J.

    2006-01-01

    In a standard general equilibrium model it is assumed that there are no price restrictions and that prices adjust infinitely fast to their equilibrium values. In case of price restrictions a general equilibrium may not exist and rationing on net demands or supplies is needed to clear the markets. In

  3. Equilibrium and off-equilibrium trap-size scaling in one-dimensional ultracold bosonic gases

    International Nuclear Information System (INIS)

    Campostrini, Massimo; Vicari, Ettore

    2010-01-01

    We study some aspects of equilibrium and off-equilibrium quantum dynamics of dilute bosonic gases in the presence of a trapping potential. We consider systems with a fixed number of particles and study their scaling behavior as the trap size increases. We focus on one-dimensional bosonic systems, such as gases described by the Lieb-Liniger model and its Tonks-Girardeau limit of impenetrable bosons, and gases constrained in optical lattices as described by the Bose-Hubbard model. We study their quantum (zero-temperature) behavior at equilibrium and off equilibrium during the unitary time evolution arising from changes of the trapping potential, which may be instantaneous or described by a power-law time dependence, starting from the equilibrium ground state for an initial trap size. Renormalization-group scaling arguments and analytical and numerical calculations show that the trap-size dependence of the equilibrium and off-equilibrium dynamics can be cast in the form of a trap-size scaling in the low-density regime, characterized by universal power laws of the trap size, in dilute gases with repulsive contact interactions and lattice systems described by the Bose-Hubbard model. The scaling functions corresponding to several physically interesting observables are computed. Our results are of experimental relevance for systems of cold atomic gases trapped by tunable confining potentials.

  4. Modelling, analysis and optimisation of energy systems on offshore platforms

    DEFF Research Database (Denmark)

    Nguyen, Tuong-Van

    Nowadays, the offshore production of oil and gas requires on-site processing, which includes operations such as separation, compression and purification. The offshore system undergoes variations of the petroleum production rates over the field life – it is therefore operated far from its nominal... This work builds upon a combination of modelling tools, performance evaluation methods and multi-objective optimisation routines to reproduce the behaviour of five offshore... of oil and gas facilities, (ii) the means to reduce their performance losses, and (iii) the systematic design of future plants. ... with the combustion, pressure-change and cooling operations, but these processes are ranked differently depending on the plant layout and on the field production stage. The most promising improvements consist of introducing a multi-level production manifold, avoiding anti-surge gas recirculation, installing a waste...

  5. Prediction of the working parameters of a wood waste gasifier through an equilibrium model

    Energy Technology Data Exchange (ETDEWEB)

    Altafini, Carlos R.; Baretto, Ronaldo M. [Caxias do Sul Univ., Dept. of Mechanical Engineering, Caxias do Sul, RS (Brazil); Wander, Paulo R. [Caxias do Sul Univ., Dept. of Mechanical Engineering, Caxias do Sul, RS (Brazil); Federal Univ. of Rio Grande do Sul State (UFRGS), Mechanical Engineering Postgraduation Program (PROMEC), RS (Brazil)

    2003-10-01

    This paper deals with the computational simulation of a wood waste (sawdust) gasifier using an equilibrium model based on minimization of the Gibbs free energy. The gasifier has been tested with Pinus Elliotis sawdust, an exotic species largely cultivated in the south of Brazil. The biomass used in the tests had a moisture content of nearly 10% (wt% on wet basis), and the average composition of the gas produced (without tar) is compared with the equilibrium models used. Sensitivity studies were made to verify the influence of the sawdust moisture content on the fuel gas composition and on its heating value. More complex models were elaborated to reproduce the studied gasifier with better accuracy. Although the equilibrium models do not represent the reactions that occur at relatively high temperatures (approximately 800 deg C) very well, these models can be useful to show tendencies in the variation of a gasifier's working parameters. (Author)
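
    The equilibrium approach mentioned here amounts to minimising the total Gibbs free energy of a candidate product-gas mixture subject to elemental balances. The sketch below does this with scipy for an idealised C-H-O system; the species set, the dimensionless standard Gibbs energies g_i/RT and the feed elemental amounts are placeholder values chosen only to make the example run, not data from the paper.

        import numpy as np
        from scipy.optimize import minimize

        species = ["CO", "CO2", "H2", "CH4", "H2O"]
        g_rt = np.array([-22.0, -47.0, -2.0, -5.0, -28.0])   # placeholder g_i/RT values
        # Element balance matrix (rows: C, H, O) for the species above
        A = np.array([[1, 1, 0, 1, 0],      # carbon
                      [0, 0, 2, 4, 2],      # hydrogen
                      [1, 2, 0, 0, 1]])     # oxygen
        b = np.array([1.0, 2.4, 1.2])       # placeholder elemental moles in the feed

        def gibbs(n):
            n = np.maximum(n, 1e-12)
            return np.sum(n * (g_rt + np.log(n / n.sum())))   # ideal-gas mixture Gibbs energy

        cons = {"type": "eq", "fun": lambda n: A @ n - b}
        n0 = np.full(len(species), 0.5)
        res = minimize(gibbs, n0, bounds=[(1e-10, None)] * len(species),
                       constraints=cons, method="SLSQP")
        print(dict(zip(species, np.round(res.x, 4))))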

  6. Revisiting EOR Projects in Indonesia through Integrated Study: EOR Screening, Predictive Model, and Optimisation

    KAUST Repository

    Hartono, A. D.; Hakiki, Farizal; Syihab, Z.; Ambia, F.; Yasutra, A.; Sutopo, S.; Efendi, M.; Sitompul, V.; Primasari, I.; Apriandi, R.

    2017-01-01

    EOR preliminary analysis is pivotal to perform at an early stage of assessment in order to elucidate EOR feasibility. This study proposes an in-depth analysis toolkit for EOR preliminary evaluation. The toolkit incorporates EOR screening, predictive, economic, risk analysis and optimisation modules. The screening module introduces algorithms which assimilate statistical and engineering notions into consideration. The United States Department of Energy (U.S. DOE) predictive models were implemented in the predictive module. The economic module is available to assess project attractiveness, while Monte Carlo Simulation is applied to quantify risk and uncertainty of the evaluated project. Optimisation scenarios of EOR practice can be evaluated using the optimisation module, in which the stochastic methods of Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and Evolutionary Strategy (ES) were applied in the algorithms. The modules were combined into an integrated package of EOR preliminary assessment. Finally, we utilised the toolkit to evaluate several Indonesian oil fields for EOR evaluation (past projects) and feasibility (future projects). The attempt was able to update the previous consideration regarding EOR attractiveness and open new opportunities for EOR implementation in Indonesia.

  8. A numerical model for simulating electroosmotic micro- and nanochannel flows under non-Boltzmann equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyoungjin; Kwak, Ho Sang [School of Mechanical Engineering, Kumoh National Institute of Technology, 1 Yangho, Gumi, Gyeongbuk 730-701 (Korea, Republic of); Song, Tae-Ho, E-mail: kimkj@kumoh.ac.kr, E-mail: hskwak@kumoh.ac.kr, E-mail: thsong@kaist.ac.kr [Department of Mechanical, Aerospace and Systems Engineering, Korea Advanced Institute of Science and Technology, 373-1 Guseong, Yuseong, Daejeon 305-701 (Korea, Republic of)

    2011-08-15

    This paper describes a numerical model for simulating electroosmotic flows (EOFs) under non-Boltzmann equilibrium in a micro- and nanochannel. The transport of ionic species is represented by employing the Nernst-Planck equation. Modeling issues related to numerical difficulties are discussed, which include the handling of boundary conditions based on surface charge density, the associated treatment of electric potential and the evasion of nonlinearity due to the electric body force. The EOF in the entrance region of a straight channel is examined. The numerical results show that the present model is useful for the prediction of the EOFs requiring a fine resolution of the electric double layer under either the Boltzmann equilibrium or non-equilibrium. Based on the numerical results, the correlation between the surface charge density and the zeta potential is investigated.
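
    For orientation, the Nernst-Planck flux of an ionic species i and the Poisson equation for the electric potential, which together replace the Boltzmann-equilibrium assumption, take the standard form (generic equations, not the paper's discretised formulation):

    \[
    \mathbf{J}_i = -D_i \nabla c_i - \frac{z_i F}{R T} D_i c_i \nabla \phi + c_i \mathbf{u},
    \qquad
    \frac{\partial c_i}{\partial t} + \nabla \cdot \mathbf{J}_i = 0,
    \qquad
    \nabla^2 \phi = -\frac{F}{\varepsilon} \sum_i z_i c_i .
    \]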

  9. Process modelling on a canonical basis

    Energy Technology Data Exchange (ETDEWEB)

    Siepmann, Volker

    2006-12-20

    Based on an equation-oriented solving strategy, this thesis investigates a new approach to process modelling. Homogeneous thermodynamic state functions represent consistent mathematical models of thermodynamic properties. Such state functions of solely extensive canonical state variables are the basis of this work, as they are natural objective functions in optimisation nodes to calculate thermodynamic equilibrium regarding phase interaction and chemical reactions. Analytical state function derivatives are utilised within the solution process as well as interpreted as physical properties. By this approach, only a limited range of imaginable process constraints is considered, namely linear balance equations of state variables. A second-order update of source contributions to these balance equations is obtained by an additional constitutive equation system. These equations are generally dependent on state variables and first-order sensitivities, and therefore cover practically all potential process constraints. Symbolic computation technology efficiently provides sparsity and derivative information of active equations to avoid performance problems regarding robustness and computational effort. A benefit of detaching the constitutive equation system is that the structure of the main equation system remains unaffected by these constraints, and a priori information allows the implementation of an efficient solving strategy and a concise error diagnosis. A tailor-made linear algebra library handles the sparse recursive block structures efficiently. The optimisation principle for single modules of thermodynamic equilibrium is extended to host entire process models. State variables of different modules interact through balance equations, representing material flows from one module to the other. To account for reusability and encapsulation of process module details, modular process modelling is supported by a recursive module structure. The second-order solving algorithm makes it

  10. African wildlife and people : finding solutions where equilibrium models fail

    NARCIS (Netherlands)

    Poshiwa, X.

    2013-01-01

    Grazing systems, covering about half of the terrestrial surface, tend to be either equilibrial or non-equilibrial in nature, largely depending on the environmental stochasticity. The equilibrium model perspective stresses the importance of biotic feedbacks between herbivores and their resource,

  11. Optimisation of searches for Supersymmetry with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Zvolsky, Milan

    2012-01-15

    The ATLAS experiment is one of the four large experiments at the Large Hadron Collider which is specifically designed to search for the Higgs boson and physics beyond the Standard Model. The aim of this thesis is the optimisation of searches for Supersymmetry in decays with two leptons and missing transverse energy in the final state. Two different optimisation studies have been performed for two important analysis aspects: The final signal region selection and the choice of the trigger selection. In the first part of the analysis, a cut-based optimisation of signal regions is performed, maximising the signal for a minimal background contamination. By this, the signal yield can in parts be more than doubled. The second approach is to introduce di-lepton triggers which allow to lower the lepton transverse momentum threshold, thus enhancing the number of selected signal events significantly. The signal region optimisation was considered for the choice of the final event selection in the ATLAS di-lepton analyses. The trigger study contributed to the incorporation of di-lepton triggers to the ATLAS trigger menu. (orig.)

  12. Ignition conditions relaxation for central hot-spot ignition with an ion-electron non-equilibrium model

    Science.gov (United States)

    Fan, Zhengfeng; Liu, Jie

    2016-10-01

    We present an ion-electron non-equilibrium model, in which the hot-spot ion temperature is higher than its electron temperature so that the hot-spot nuclear reactions are enhanced while energy leaks are considerably reduced. Theoretical analysis shows that the ignition region would be significantly enlarged in the hot-spot ρR-T space as compared with the commonly used equilibrium model. Simulations show that shocks could be utilized to create and maintain non-equilibrium conditions within the hot spot, and the hot-spot ρR requirement is remarkably reduced for achieving self-heating. In NIF high-foot implosions, it is observed that the x-ray enhancement factors are less than unity, which is not self-consistent and is caused by assuming Te = Ti. From this inconsistency, we can infer that ion-electron non-equilibrium exists in the high-foot implosions and that the ion temperature could be 9% larger than the equilibrium temperature.

  13. Multi-Optimisation Consensus Clustering

    Science.gov (United States)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
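
    The consensus idea underlying CC-style methods can be illustrated by building a co-association matrix from repeated base clusterings and then clustering that matrix. The sketch below uses k-means from scikit-learn as the base learner on synthetic data; it is a generic illustration only, not the MOCC algorithm or its Agreement Separation criterion.

        import numpy as np
        from sklearn.cluster import KMeans
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])

        # Co-association matrix: fraction of runs in which two points share a cluster
        n, runs = len(X), 20
        coassoc = np.zeros((n, n))
        for r in range(runs):
            labels = KMeans(n_clusters=2, n_init=5, random_state=r).fit_predict(X)
            coassoc += (labels[:, None] == labels[None, :])
        coassoc /= runs

        # Consensus partition: hierarchical clustering on 1 - co-association distances
        Z = linkage(1.0 - coassoc[np.triu_indices(n, k=1)], method="average")
        consensus_labels = fcluster(Z, t=2, criterion="maxclust")
        print(consensus_labels)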

  14. Numerical solution of dynamic equilibrium models under Poisson uncertainty

    DEFF Research Database (Denmark)

    Posch, Olaf; Trimborn, Timo

    2013-01-01

    We propose a simple and powerful numerical algorithm to compute the transition process in continuous-time dynamic equilibrium models with rare events. In this paper we transform the dynamic system of stochastic differential equations into a system of functional differential equations of the retar...... solution to Lucas' endogenous growth model under Poisson uncertainty are used to compute the exact numerical error. We show how (potential) catastrophic events such as rare natural disasters substantially affect the economic decisions of households....

  15. Evolutionary programming for neutron instrument optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Bentley, Phillip M. [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany)]. E-mail: phillip.bentley@hmi.de; Pappas, Catherine [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Habicht, Klaus [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Lelievre-Berna, Eddy [Institut Laue-Langevin, 6 rue Jules Horowitz, BP 156, 38042 Grenoble Cedex 9 (France)

    2006-11-15

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN, at the HMI. We discuss the potential of the GA which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations.

  17. Optimisation in X-ray and Molecular Imaging 2015

    International Nuclear Information System (INIS)

    Baath, Magnus; Hoeschen, Christoph; Mattsson, Soeren; Mansson, Lars Gunnar

    2016-01-01

    This issue of Radiation Protection Dosimetry is based on contributions to Optimisation in X-ray and Molecular Imaging 2015 - the 4th Malmoe Conference on Medical Imaging (OXMI 2015). The conference was jointly organised by members of former and current research projects supported by the European Commission EURATOM Radiation Protection Research Programme, in cooperation with the Swedish Society for Radiation Physics. The conference brought together over 150 researchers and other professionals from hospitals, universities and industries with interests in different aspects of the optimisation of medical imaging. More than 100 presentations were given at this international gathering of medical physicists, radiologists, engineers, technicians, nurses and educational researchers. Additionally, invited talks were offered by world-renowned experts on radiation protection, spectral imaging and medical image perception, thus covering several important aspects of the generation and interpretation of medical images. The conference consisted of 13 oral sessions and a poster session which, as reflected by the conference title, were connected by their focus on the optimisation of the use of ionising radiation in medical imaging. The conference included technology-specific topics such as computed tomography and tomosynthesis, but also generic issues of interest for the optimisation of all medical imaging, such as image perception and quality assurance. Radiation protection was covered by e.g. sessions on patient dose benchmarking and occupational exposure. Technically advanced topics such as modelling, Monte Carlo simulation, reconstruction, classification, and segmentation were seen to take advantage of recent developments in hardware and software, showing that the optimisation community is at the forefront of technology and adapts well to new requirements. These peer-reviewed proceedings, representing a continuation of a series of selected reports from meetings in the field of medical imaging

  18. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of

  19. Topology optimised wavelength dependent splitters

    DEFF Research Database (Denmark)

    Hede, K. K.; Burgos Leon, J.; Frandsen, Lars Hagedorn

    A photonic crystal wavelength-dependent splitter has been constructed by utilising topology optimisation [1]. The splitter has been fabricated in a silicon-on-insulator material (Fig. 1). The topology-optimised wavelength-dependent splitter demonstrates promising 3D FDTD simulation results... This complex photonic crystal structure is very sensitive to small fabrication variations from the expected topology-optimised design. A wavelength-dependent splitter is an important basic building block for high-performance nanophotonic circuits. [1] J. S. Jensen and O. Sigmund, Appl. Phys. Lett. 84, 2022...

  20. Immunity by equilibrium.

    Science.gov (United States)

    Eberl, Gérard

    2016-08-01

    The classical model of immunity posits that the immune system reacts to pathogens and injury and restores homeostasis. Indeed, a century of research has uncovered the means and mechanisms by which the immune system recognizes danger and regulates its own activity. However, this classical model does not fully explain complex phenomena, such as tolerance, allergy, the increased prevalence of inflammatory pathologies in industrialized nations and immunity to multiple infections. In this Essay, I propose a model of immunity that is based on equilibrium, in which the healthy immune system is always active and in a state of dynamic equilibrium between antagonistic types of response. This equilibrium is regulated both by the internal milieu and by the microbial environment. As a result, alteration of the internal milieu or microbial environment leads to immune disequilibrium, which determines tolerance, protective immunity and inflammatory pathology.

  1. Phase equilibrium modeling of gas hydrate systems for CO2 capture

    DEFF Research Database (Denmark)

    Herslund, Peter Jørgensen; Thomsen, Kaj; Abildskov, Jens

    2012-01-01

    to form from vapor phases with initial mole fractions of CO2 at or above 0.15. The two models are validated against mixed hydrate equilibrium data found in the literature. Both dissociation pressures and hydrate compositions are considered in the validation process. With the fitted parameters, Model I predicts...

  2. Integration of environmental aspects in modelling and optimisation of water supply chains.

    Science.gov (United States)

    Koleva, Mariya N; Calderón, Andrés J; Zhang, Di; Styan, Craig A; Papageorgiou, Lazaros G

    2018-04-26

    Climate change becomes increasingly more relevant in the context of water systems planning. Tools are necessary to provide the most economic investment option considering the reliability of the infrastructure from technical and environmental perspectives. Accordingly, in this work, an optimisation approach, formulated as a spatially-explicit multi-period Mixed Integer Linear Programming (MILP) model, is proposed for the design of water supply chains at regional and national scales. The optimisation framework encompasses decisions such as installation of new purification plants, capacity expansion, and raw water trading schemes. The objective is to minimise the total cost incurring from capital and operating expenditures. Assessment of available resources for withdrawal is performed based on hydrological balances, governmental rules and sustainable limits. In the light of the increasing importance of reliability of water supply, a second objective, seeking to maximise the reliability of the supply chains, is introduced. The epsilon-constraint method is used as a solution procedure for the multi-objective formulation. Nash bargaining approach is applied to investigate the fair trade-offs between the two objectives and find the Pareto optimality. The models' capability is addressed through a case study based on Australia. The impact of variability in key input parameters is tackled through the implementation of a rigorous global sensitivity analysis (GSA). The findings suggest that variations in water demand can be more disruptive for the water supply chain than scenarios in which rainfalls are reduced. The frameworks can facilitate governmental multi-aspect decision making processes for the adequate and strategic investments of regional water supply infrastructure. Copyright © 2018. Published by Elsevier B.V.
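
    The epsilon-constraint mechanics can be shown on a toy bi-objective linear programme: one objective (cost) is minimised while the other (a reliability score) is turned into a constraint whose bound epsilon is swept to trace the Pareto front. The two-variable model below is invented purely for illustration and is unrelated to the paper's MILP.

        import pulp

        pareto = []
        for eps in [0.2, 0.4, 0.6, 0.8]:              # swept reliability requirement
            prob = pulp.LpProblem("eps_constraint_demo", pulp.LpMinimize)
            x = pulp.LpVariable("new_plant_capacity", lowBound=0)
            y = pulp.LpVariable("trading_volume", lowBound=0)
            cost = 3 * x + 1 * y                      # objective 1: total cost
            reliability = 0.8 * x + 0.3 * y           # objective 2, handled as a constraint
            prob += cost
            prob += reliability >= eps                # epsilon-constraint on objective 2
            prob += x + y >= 0.5                      # toy demand-coverage constraint
            prob.solve(pulp.PULP_CBC_CMD(msg=False))
            pareto.append((eps, pulp.value(cost)))
        print(pareto)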

  3. An experiment on radioactive equilibrium and its modelling using the ‘radioactive dice’ approach

    Science.gov (United States)

    Santostasi, Davide; Malgieri, Massimiliano; Montagna, Paolo; Vitulo, Paolo

    2017-07-01

    In this article we describe an educational activity on radioactive equilibrium we performed with secondary school students (17-18 years old) in the context of a vocational guidance stage for talented students at the Department of Physics of the University of Pavia. Radioactive equilibrium is investigated experimentally by having students measure the activity of 214Bi from two different samples, obtained using different preparation procedures from an uraniferous rock. Students are guided in understanding the mathematical structure of radioactive equilibrium through a modelling activity in two parts. Before the lab measurements, a dice game, which extends the traditional ‘radioactive dice’ activity to the case of a chain of two decaying nuclides, is performed by students divided into small groups. At the end of the laboratory work, students design and run a simple spreadsheet simulation modelling the same basic radioactive chain with user defined decay constants. By setting the constants to realistic values corresponding to nuclides of the uranium decay chain, students can deepen their understanding of the meaning of the experimental data, and also explore the difference between cases of non-equilibrium, transient and secular equilibrium.
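
    The two-nuclide chain behind the dice game also has a simple closed-form (Bateman) solution, which the snippet below evaluates to show the daughter-to-parent activity ratio approaching 1, i.e. secular equilibrium, when the parent half-life is much longer than the daughter's. The half-lives used are arbitrary illustrative numbers, not those of the uranium chain.

        import numpy as np

        def daughter_activity(t, lam1, lam2, N1_0=1.0):
            """Bateman solution: activity of the daughter in a parent -> daughter chain."""
            N2 = N1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
            return lam2 * N2

        lam1 = np.log(2) / 1000.0     # long-lived parent (half-life 1000 time units)
        lam2 = np.log(2) / 5.0        # short-lived daughter (half-life 5 time units)
        t = np.linspace(0, 60, 7)
        A1 = lam1 * np.exp(-lam1 * t)                 # parent activity (N1_0 = 1)
        A2 = daughter_activity(t, lam1, lam2)
        print(np.round(A2 / A1, 3))   # ratio approaches ~1: secular equilibrium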

  4. Measuring productivity differences in equilibrium search models

    DEFF Research Database (Denmark)

    Lanot, Gauthier; Neumann, George R.

    1996-01-01

    Equilibrium search models require unobserved heterogeneity in productivity to fit observed wage distribution data, but provide no guidance about the location parameter of the heterogeneity. In this paper we show that the location of the productivity heterogeneity implies a mode in a kernel density...... estimate of the wage distribution. The number of such modes and their location are identified using bump hunting techniques due to Silverman (1981). These techniques are applied to Danish panel data on workers and firms. These estimates are used to assess the importance of employer wage policy....
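
    Silverman-style bump hunting can be illustrated directly: estimate a kernel density of (simulated) log wages for a given bandwidth factor and count its local maxima, repeating over a range of bandwidths to see at which smoothing level modes appear or disappear. The two-component sample below is synthetic and merely stands in for the Danish panel data.

        import numpy as np
        from scipy.stats import gaussian_kde

        def count_modes(data, bw, grid_size=512):
            kde = gaussian_kde(data, bw_method=bw)    # bw acts as a bandwidth scaling factor
            grid = np.linspace(data.min(), data.max(), grid_size)
            dens = kde(grid)
            # interior grid points higher than both neighbours are local maxima
            return int(np.sum((dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])))

        rng = np.random.default_rng(1)
        log_wages = np.concatenate([rng.normal(4.8, 0.15, 800), rng.normal(5.4, 0.10, 200)])
        for bw in [0.05, 0.1, 0.2, 0.4]:
            print(f"bandwidth factor {bw}: {count_modes(log_wages, bw)} mode(s)")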

  5. Assessment of thermodynamic models for the design, analysis and optimisation of gas liquefaction systems

    International Nuclear Information System (INIS)

    Nguyen, Tuong-Van; Elmegaard, Brian

    2016-01-01

    Highlights: • Six thermodynamic models used for evaluating gas liquefaction systems are compared. • Three gas liquefaction systems are modelled, assessed and optimised for each equation of state. • The predictions of thermophysical properties and energy flows are significantly different. • The GERG-2008 model is the only consistent one, while cubic, virial and statistical equations are unsatisfying. - Abstract: Natural gas liquefaction systems are based on refrigeration cycles – they consist of the same operations such as heat exchange, compression and expansion, but they have different layouts, components and working fluids. The design of these systems requires a preliminary simulation and evaluation of their performance. However, the thermodynamic models used for this purpose are characterised by different mathematical formulations, ranges of application and levels of accuracy. This may lead to inconsistent results when estimating hydrocarbon properties and assessing the efficiency of a given process. This paper presents a thorough comparison of six equations of state widely used in the academia and industry, including the GERG-2008 model, which has recently been adopted as an ISO standard for natural gases. These models are used to (i) estimate the thermophysical properties of a Danish natural gas, (ii) simulate, and (iii) optimise liquefaction systems. Three case studies are considered: a cascade layout with three pure refrigerants, a single mixed-refrigerant unit, and an expander-based configuration. Significant deviations are found between all property models, and in all case studies. The main discrepancies are related to the prediction of the energy flows (up to 7%) and to the heat exchanger conductances (up to 11%), and they are not systematic errors. The results illustrate the superiority of using the GERG-2008 model for designing gas processes in real applications, with the aim of reducing their energy use. They demonstrate as well that

  6. Overshoot in biological systems modelled by Markov chains: a non-equilibrium dynamic phenomenon.

    Science.gov (United States)

    Jia, Chen; Qian, Minping; Jiang, Daquan

    2014-08-01

    A number of biological systems can be modelled by Markov chains. Recently, there has been increasing concern about when biological systems modelled by Markov chains will exhibit a dynamic phenomenon called overshoot. In this study, the authors found that the steady-state behaviour of the system has a great effect on the occurrence of overshoot. They showed that overshoot in general cannot occur in systems that will finally approach an equilibrium steady state. They further classified overshoot into two types, named simple overshoot and oscillating overshoot. They showed that, except for extreme cases, oscillating overshoot will occur if the system is far from equilibrium. All these results clearly show that overshoot is a non-equilibrium dynamic phenomenon with energy consumption. In addition, the main result in this study is validated with real experimental data.

  7. Isospin equilibrium and non-equilibrium in heavy-ion collisions at intermediate energies

    International Nuclear Information System (INIS)

    Chen Liewen; Ge Lingxiao; Zhang Xiaodong; Zhang Fengshou

    1997-01-01

    The equilibrium and non-equilibrium of the isospin degree of freedom are studied in terms of an isospin-dependent QMD model, which includes isospin-dependent symmetry energy, Coulomb energy, N-N cross sections and Pauli blocking. It is shown that there exists a transition from isospin equilibrium to non-equilibrium as the incident energy increases from below to above a threshold energy in central, asymmetric heavy-ion collisions. Meanwhile, it is found that the phenomenon results from the co-existence and competition of different reaction mechanisms, namely, the isospin degree of freedom reaches equilibrium if the incomplete fusion (ICF) component is dominant and does not reach equilibrium if the fragmentation component is dominant. Moreover, it is also found that the isospin-dependent N-N cross sections and symmetry energy are crucial for the equilibrium of the isospin degree of freedom in heavy-ion collisions around the Fermi energy. (author)

  8. A Tightly Coupled Non-Equilibrium Magneto-Hydrodynamic Model for Inductively Coupled RF Plasmas

    Science.gov (United States)

    2016-02-29

    ... development of a tightly coupled magneto-hydrodynamic model for Inductively Coupled Radio-Frequency (RF) Plasmas. Non-Local Thermodynamic Equilibrium (NLTE) effects are described based on a hybrid State-to-State... Inductively Coupled Plasma (ICP) torches have a wide range of possible applications, which include deposition of metal coatings and synthesis of ultra-fine powders

  9. Phase equilibrium engineering

    CERN Document Server

    Brignole, Esteban Alberto

    2013-01-01

    Traditionally, the teaching of phase equilibria emphasizes the relationships between the thermodynamic variables of each phase in equilibrium rather than its engineering applications. This book changes the focus from the use of thermodynamics relationships to compute phase equilibria to the design and control of the phase conditions that a process needs. Phase Equilibrium Engineering presents a systematic study and application of phase equilibrium tools to the development of chemical processes. The thermodynamic modeling of mixtures for process development, synthesis, simulation, design and

  10. Biomass supply chain optimisation for Organosolv-based biorefineries.

    Science.gov (United States)

    Giarola, Sara; Patel, Mayank; Shah, Nilay

    2014-05-01

    This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Optimising Magnetostatic Assemblies

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Smith, Anders

    theorem. This theorem formulates an energy equivalence principle with several implications concerning the optimisation of objective functionals that are linear with respect to the magnetic field. Linear functionals represent different optimisation goals, e.g. maximising a certain component of the field... approached employing a heuristic algorithm, which led to new design concepts. Some of the procedures developed for linear objective functionals have been extended to non-linear objectives by employing iterative techniques. Even though most of the optimality results discussed in this work have been derived...

  12. Optimised performance of industrial high resolution computerised tomography

    International Nuclear Information System (INIS)

    Maangaard, M.

    2000-01-01

    The purpose of non-destructive evaluation (NDE) is to acquire knowledge of the investigated sample. Digital x-ray imaging techniques such as radiography or computerised tomography (CT) produce images of the interior of a sample. The obtained image quality determines the possibility of detecting sample related features, e.g. details and flaws. This thesis presents a method of optimising the performance of industrial X-ray equipment for the imaging task at issue in order to obtain images with high quality. CT produces maps of the X-ray linear attenuation of the sample's interior. CT can produce two dimensional cross-section images or three-dimensional images with volumetric information on the investigated sample. The image contrast and noise depend on both the investigated sample and the equipment and settings used (X-ray tube potential, X-ray filtration, exposure time, etc.). Hence, it is vital to find the optimal equipment settings in order to obtain images of high quality. To be able to mathematically optimise the image quality, it is necessary to have a model of the X-ray imaging system together with an appropriate measure of image quality. The optimisation is performed with a developed model for an X-ray image-intensifier-based radiography system. The model predicts the mean value and variance of the measured signal level in the collected radiographic images. The traditionally used measure of physical image quality is the signal-to-noise ratio (SNR). To calculate the signal-to-noise ratio, a well-defined detail (flaw) is required. It was found that maximising the SNR leads to ambiguities, the optimised settings found by maximising the SNR were dependent on the material in the detail. When CT is performed on irregular shaped samples containing density and compositional variations, it is difficult to define which SNR to use for optimisation. This difficulty is solved by the measures of physical image quality proposed here, the ratios geometry

  13. Estuarine Facies Model Revisited: Conceptual Model of Estuarine Sediment Dynamics During Non-Equilibrium Conditions

    Science.gov (United States)

    Elliott, E. A.; Rodriguez, A. B.; McKee, B. A.

    2017-12-01

    Traditional models of estuarine systems show deposition occurs primarily within the central basin. There, accommodation space is high within the deep central valley, which is below regional wave base and where current energy is presumed to reach a relative minimum, promoting direct deposition of cohesive sediment and minimizing erosion. However, these models often reflect long-term (decadal-millennial) timescales, where accumulation rates are in relative equilibrium with the rate of relative sea-level rise, and lack the resolution to capture shorter term changes in sediment deposition and erosion within the central estuary. This work presents a conceptual model for estuarine sedimentation during non-equilibrium conditions, where high-energy inputs to the system reach a relative maximum in the central basin, resulting in temporary deposition and/or remobilization over sub-annual to annual timescales. As an example, we present a case study of Core Sound, NC, a lagoonal estuarine system where the regional base-level has been reached, and sediment deposition, resuspension and bypassing is largely a result of non-equilibrium, high-energy events. Utilizing a 465 cm-long sediment core from a mini-basin located between Core Sound and the continental shelf, a 40-year sub-annual chronology was developed for the system, with sediment accumulation rates (SAR) interpolated to a monthly basis over the 40-year record. This study links erosional processes in the estuary directly with sediment flux to the continental shelf, taking advantage of the highly efficient sediment trapping capability of the mini-basin. The SAR record indicates high variation in the estuarine sediment supply, with peaks in the SAR record at a recurrence interval of 1 year (+/- 0.25). This record has been compared to historical storm influence for the area. Through this multi-decadal record, sediment flushing events occur at a much more frequent interval than previously thought (i.e. annual rather than

  14. Optimised dipper fine tunes shovel performance

    Energy Technology Data Exchange (ETDEWEB)

    Fiscor, S.

    2005-06-01

    Joint efforts between mine operators, OEMs, and researchers yield unexpected benefits: dippers for shovels in coal, oil, or hardrock mining can now be tailored to meet site-specific conditions. The article outlines a process being developed by CRCMining and P & H Mining Equipment to optimise the dipper, involving rapid prototyping and scale modelling of the dipper and the mine conditions. Scale models have been successfully field tested. 2 photos.

  15. A model on CME/Flare initiation: Loss of Equilibrium caused by mass loss of quiescent prominences

    Science.gov (United States)

    Miley, George; Chon Nam, Sok; Kim, Mun Song; Kim, Jik Su

    2015-08-01

    A Coronal Mass Ejection (CME) model should explain how enough energy is stored for the giant bulk of plasma to escape into interplanetary space against the Sun's gravitation, and how its explosive eruption is triggered. Advocates of the ‘Mass Loading’ model (e.g. Low, B. 1996, SP, 167, 217) suggested a simple mechanism of CME initiation, the loss of mass from a prominence anchoring a magnetic flux rope, but they did not associate the mass loss with a loss of equilibrium. The catastrophic loss-of-equilibrium model is considered a prospective CME/Flare model to explain the sudden eruption of magnetic flux systems. Isenberg, P. A., et al (1993, ApJ, 417, 368) developed an ideal magnetohydrodynamic theory of the magnetic flux rope to show the occurrence of catastrophic loss of equilibrium as the magnetic flux transported into the corona increases. We begin by extending their study to include gravity acting on the prominence material, obtaining equilibrium curves for given mass parameters, which measure the strength of the gravitational force compared with the characteristic magnetic force. Furthermore, we study the quasi-static evolution of the system, including a massive prominence flux rope and the current sheet below it, to obtain equilibrium curves of the prominence height for decreasing mass parameter in a properly fixed magnetic environment. The curves show loss-of-equilibrium behaviour, implying that mass loss results in a loss of equilibrium. The released fractions of magnetic energy are greater than in the corresponding zero-mass case. This eruption mechanism is expected to apply to the eruptions of quiescent prominences, which are located in relatively weak magnetic environments with a scale length of 10^5 km and a photospheric magnetic field of 10 G.

  16. MANAGEMENT OPTIMISATION OF MASS CUSTOMISATION MANUFACTURING USING COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    Louwrens Butler

    2018-05-01

    Full Text Available Computational intelligence paradigms can be used for advanced manufacturing system optimisation. A static simulation model of an advanced manufacturing system was developed in order to simulate a manufacturing system. The purpose of this advanced manufacturing system was to mass-produce a customisable product range at a competitive cost. The aim of this study was to determine whether this new algorithm could produce a better performance than traditional optimisation methods. The algorithm produced a lower cost plan than that for a simulated annealing algorithm, and had a lower impact on the workforce.

  17. Investigating the Trade-Off Between Power Generation and Environmental Impact of Tidal-Turbine Arrays Using Array Layout Optimisation and Habitat Sustainability Modelling.

    Science.gov (United States)

    du Feu, R. J.; Funke, S. W.; Kramer, S. C.; Hill, J.; Piggott, M. D.

    2016-12-01

    The installation of tidal turbines into the ocean will inevitably affect the environment around them. However, due to the relative infancy of this sector the extent and severity of such effects is unknown. The layout of an array of turbines is an important factor in determining not only the array's final yield but also how it will influence regional hydrodynamics. This in turn could affect, for example, sediment transportation or habitat suitability. The two potentially competing objectives of extracting energy from the tidal current, and of limiting any environmental impact consequent to influencing that current, are investigated here. This relationship is posed as a multi-objective optimisation problem. OpenTidalFarm, an array layout optimisation tool, and MaxEnt, habitat sustainability modelling software, are used to evaluate scenarios off the coast of the UK. MaxEnt is used to estimate the likelihood of finding a species in a given location based upon environmental input data and presence data of the species. Environmental features which are known to impact habitat, specifically those affected by the presence of an array, such as bed shear stress, are chosen as inputs. MaxEnt then uses a maximum-entropy modelling approach to estimate population distribution across the modelled area. OpenTidalFarm is used to maximise the power generated by an array, or multiple arrays, through adjusting the position and number of turbines within them. It uses a 2D shallow water model with turbine arrays represented as adjustable friction fields. It has the capability to also optimise for user created functionals that can be expressed mathematically. This work uses two functionals; power extracted by the array, and the suitability of habitat as predicted by MaxEnt. A gradient-based local optimisation is used to adjust the array layout at each iteration. This work presents arrays that are optimised for both yield and the viability of habitat for chosen species. In each scenario
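
    As a rough sketch of how the two objectives can be traded off (not OpenTidalFarm or MaxEnt themselves), the snippet below combines a toy array-power model and a toy habitat-suitability score into a weighted-sum objective and improves a small layout by finite-difference gradient ascent. All functions, scales and weights are hypothetical; sweeping the weight would trace an approximate Pareto front.

        import numpy as np

        def array_power(x):
            """Toy power model: decays when turbines crowd together (hypothetical)."""
            xy = x.reshape(-1, 2)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            wake_penalty = np.exp(-d / 50.0).sum()
            return len(xy) * 1.0 - 0.5 * wake_penalty      # "MW", illustrative only

        def habitat_suitability(x):
            """Toy habitat score: penalises turbines near a sensitive area at the origin."""
            xy = x.reshape(-1, 2)
            r = np.linalg.norm(xy, axis=1)
            return np.mean(1.0 - np.exp(-r / 200.0))

        def objective(x, w=0.7):
            # Weighted sum of the two objectives; sweeping w traces a Pareto front.
            return w * array_power(x) + (1.0 - w) * 10.0 * habitat_suitability(x)

        def ascend(x, steps=200, lr=5.0, eps=1e-3):
            # Forward-difference gradients stand in for the adjoint gradients a
            # layout-optimisation tool would provide.
            for _ in range(steps):
                g = np.zeros_like(x)
                f0 = objective(x)
                for i in range(x.size):
                    xp = x.copy()
                    xp[i] += eps
                    g[i] = (objective(xp) - f0) / eps
                x = x + lr * g
            return x

        layout0 = np.random.default_rng(1).uniform(50, 500, size=8)   # 4 turbines, (x, y) in metres
        layout = ascend(layout0)
        print("power:", array_power(layout), "habitat:", habitat_suitability(layout))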

  18. A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems

    Directory of Open Access Journals (Sweden)

    Sitek Pawel

    2014-08-01

    Full Text Available The paper presents a concept and the outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments, mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming), were integrated. The strengths of integer programming and constraint logic programming, in which constraints are treated in different ways and different methods are implemented, were combined to exploit the advantages of both. The hybrid method is not worse than either of its components used independently. The proposed approach is particularly important for decision models with an objective function and many discrete decision variables that appear in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors’ original model of cost optimisation in the supply chain with multimodal transportation. The second one is the two-echelon variant of the well-known capacitated vehicle routing problem.
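
    As a rough illustration of the hybrid idea (a sketch, not the authors' implementation), the snippet below first tightens integer variable bounds by simple constraint propagation, in the spirit of the declarative/CLP component, and then passes the reduced model to an integer-programming solver via scipy.optimize.milp (available in recent SciPy releases). The objective, constraint matrix and demand value are hypothetical.

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        # Hypothetical model: minimise c.x subject to A x <= b, sum(x) >= 5, x integer, lb <= x <= ub.
        c = np.array([4.0, 3.0, 6.0])
        A = np.array([[2.0, 1.0, 3.0],
                      [1.0, 2.0, 1.0]])
        b = np.array([10.0, 8.0])
        lb = np.zeros(3)
        ub = np.array([10.0, 10.0, 10.0])

        # CP-style bound propagation: for a_ij > 0, x_j <= (b_i - sum_{k != j} a_ik * lb_k) / a_ij.
        for i in range(A.shape[0]):
            for j in range(A.shape[1]):
                if A[i, j] > 0:
                    slack = b[i] - (A[i] @ lb - A[i, j] * lb[j])
                    ub[j] = min(ub[j], np.floor(slack / A[i, j]))
        print("tightened upper bounds:", ub)   # tightened to [5, 4, 3] for this data

        cons = [LinearConstraint(A, -np.inf, b),                 # resource constraints
                LinearConstraint(np.ones((1, 3)), 5, np.inf)]    # demand: at least 5 units in total
        res = milp(c=c, constraints=cons,
                   integrality=np.ones_like(c), bounds=Bounds(lb, ub))
        print("optimal x:", res.x, "cost:", res.fun)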

  19. Non-Equilibrium Modeling of Inductively Coupled RF Plasmas

    Science.gov (United States)

    2015-01-01

    wall can be approximated with the expression for an infinite solenoid, B(r = R) = µ0 N Ic, where quantities N and Ic are the number of turns per unit... Modeling of non-equilibrium plasmas in an inductively coupled plasma facility. AIAA Paper 2014–2235, 2014. 45th AIAA Plasmadynamics and Lasers... 1993. 24th Plasmadynamics and Laser Conference, Orlando, FL. [22] M. Capitelli, I. Armenise, D. Bruno, M. Cacciatore, R. Celiberto, G. Colonna, O

  20. Thickness Optimisation of Textiles Subjected to Heat and Mass Transport during Ironing

    Directory of Open Access Journals (Sweden)

    Korycki Ryszard

    2016-09-01

    Full Text Available Let us next analyse the coupled problem during ironing of textiles, that is, heat is transported with mass whereas mass transport with heat is negligible. It is necessary to define both the physical and the mathematical model. Introducing a two-phase system of mass sorption by fibres, the transport equations are formulated and accompanied by a set of boundary and initial conditions. Optimisation of material thickness during ironing is gradient-oriented. The first-order sensitivity of an arbitrary objective functional is analysed and included in the optimisation procedure. The numerical example is the thickness optimisation of different textile materials in an ironing device.

  1. Assessing and optimizing the economic and environmental impacts of cogeneration/district energy systems using an energy equilibrium model

    International Nuclear Information System (INIS)

    Wu, Y.J.; Rosen, M.A.

    1999-01-01

    Energy equilibrium models can be valuable aids in energy planning and decision-making. In such models, supply is represented by a cost-minimizing linear submodel and demand by a smooth vector-valued function of prices. In this paper, we use the energy equilibrium model to study conventional systems and cogeneration-based district energy (DE) systems for providing heating, cooling and electrical services, not only to assess the potential economic and environmental benefits of cogeneration-based DE systems, but also to develop optimal configurations while accounting for such factors as economics and environmental impact. The energy equilibrium model is formulated and solved with software called WATEMS, which uses sequential non-linear programming to calculate the intertemporal equilibrium of energy supplies and demands. The methods of analysis and evaluation for the economic and environmental impacts are carefully explored. An illustrative energy equilibrium model of conventional and cogeneration-based DE systems is developed within WATEMS to compare quantitatively the economic and environmental impacts of those systems for various scenarios. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
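
    A minimal sketch of the equilibrium computation (not WATEMS itself): supply is a cost-minimising dispatch of two hypothetical technologies, demand is a smooth function of price, and a damped fixed-point iteration adjusts the price until the two sides balance.

        import numpy as np

        # Hypothetical supply side: two technologies with marginal costs and capacities.
        mc  = np.array([20.0, 45.0])   # $/MWh
        cap = np.array([60.0, 80.0])   # MW

        def supply_price(q):
            """Cost-minimising dispatch of q MW; returns the marginal cost of the last unit used."""
            order = np.argsort(mc)
            remaining = q
            for i in order:
                if remaining <= cap[i]:
                    return mc[i]
                remaining -= cap[i]
            return mc[order[-1]]       # demand above total capacity: price at the most expensive unit

        def demand(p, d0=120.0, p0=40.0, elasticity=-0.3):
            """Smooth constant-elasticity demand function of price (illustrative)."""
            return d0 * (p / p0) ** elasticity

        # Fixed-point iteration: price -> demand -> marginal supply cost -> new price.
        p = 40.0
        for _ in range(100):
            q = demand(p)
            p = 0.5 * p + 0.5 * supply_price(q)   # damping for convergence
        print(f"equilibrium price ~ {p:.1f} $/MWh, quantity ~ {demand(p):.1f} MW")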

  2. A new equilibrium trading model with asymmetric information

    Directory of Open Access Journals (Sweden)

    Lianzhang Bao

    2018-03-01

    Full Text Available Taking arbitrage opportunities into consideration in an incomplete market, dealers will price bonds based on asymmetric information. The dealer with the best offering price wins the bid. The risk premium in a dealer's offering price is primarily determined by the dealer's add-on rate of change to the term structure. To optimize the trading strategy, a new equilibrium trading model is introduced. An optimal sequential estimation scheme for detecting the risk premium due to private information is proposed based on historical prices, and the best bond pricing formula is given together with the corresponding optimal trading strategy. Numerical examples are provided to illustrate the economic insights under certain stochastic term structure interest rate models.

  3. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
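
    A rough sketch of the idea under one illustrative assumption: if the fault-detection time cdf is the gamma-shaped F(t) = 1 - (1 + bt)e^(-bt) (as in the delayed S-shaped SRM), its equilibrium distribution works out to Fe(t) = 1 - (1 + bt/2)e^(-bt), and the mean value function of the resulting NHPP-based SRM is m(t) = omega*Fe(t). The fit below uses nonlinear least squares on hypothetical fault-count data rather than the estimation procedure of the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def mvf_equilibrium(t, omega, b):
            # m(t) = omega * Fe(t), with Fe the equilibrium distribution of F(t) = 1 - (1 + b t) exp(-b t)
            return omega * (1.0 - (1.0 + 0.5 * b * t) * np.exp(-b * t))

        # Hypothetical test data: week index and cumulative number of detected faults.
        weeks  = np.arange(1, 13, dtype=float)
        faults = np.array([4, 9, 15, 22, 27, 31, 35, 37, 39, 40, 41, 41], dtype=float)

        (omega, b), _ = curve_fit(mvf_equilibrium, weeks, faults, p0=[50.0, 0.3])
        print(f"estimated total faults omega = {omega:.1f}, rate b = {b:.3f}")
        print("predicted residual faults:", omega - mvf_equilibrium(weeks[-1], omega, b))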

  4. Review and analysis of biomass gasification models

    DEFF Research Database (Denmark)

    Puig Arnavat, Maria; Bruno, Joan Carles; Coronas, Alberto

    2010-01-01

    , and the design, simulation, optimisation and process analysis of gasifiers have been carried out. This paper presents and analyses several gasification models based on thermodynamic equilibrium, kinetics and artificial neural networks. The thermodynamic models are found to be a useful tool for preliminary...... comparison and for process studies on the influence of the most important fuel and process parameters. They have the advantage of being independent of gasifier design, but they cannot give highly accurate results for all cases. The kinetic-based models are computationally more intensive but give accurate...

  5. Structural-electrical coupling optimisation for radiating and scattering performances of active phased array antenna

    Science.gov (United States)

    Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng

    2018-04-01

    It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) is both difficult and complicated, and balancing radiating and scattering performance while reducing the RCS remains an open problem. Therefore, this paper develops a structure and scattering array factor coupling model of the APAA based on the phase errors of the radiating elements generated by structural distortion and installation error of the array. To obtain optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiating elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates an important application value in the engineering design and structural evaluation of APAAs.
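
    The snippet below is a generic particle swarm optimisation over element installation heights with a toy fitness that mixes a phase-error-driven gain-loss term and a scattering proxy; the fitness model, weights, wavelength and array size are hypothetical stand-ins for the structural-electrical coupling model of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N_ELEM, N_PART, ITERS = 16, 30, 200
        H_MAX = 2e-3   # allowed height adjustment range in metres (hypothetical)

        def fitness(h):
            """Toy objective: phase-error-driven gain loss plus a scattering proxy (both minimised)."""
            lam = 0.03                               # 10 GHz wavelength, illustrative
            phase_err = 4 * np.pi * h / lam          # two-way path-length error
            gain_loss = 1.0 - abs(np.mean(np.exp(1j * phase_err)))
            scattering = abs(np.sum(np.exp(2j * phase_err))) / len(h)
            return 0.5 * gain_loss + 0.5 * scattering

        x = rng.uniform(0, H_MAX, (N_PART, N_ELEM))      # particle positions (element heights)
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()

        for _ in range(ITERS):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, 0, H_MAX)
            f = np.array([fitness(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()

        print("best fitness:", pbest_f.min())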

  6. Recent tests of the equilibrium-point hypothesis (lambda model).

    Science.gov (United States)

    Feldman, A G; Ostry, D J; Levin, M F; Gribble, P L; Mitnitski, A B

    1998-07-01

    The lambda model of the equilibrium-point hypothesis (Feldman & Levin, 1995) is an approach to motor control which, like physics, is based on a logical system coordinating empirical data. The model has gone through an interesting period. On one hand, several nontrivial predictions of the model have been successfully verified in recent studies. In addition, the explanatory and predictive capacity of the model has been enhanced by its extension to multimuscle and multijoint systems. On the other hand, claims have recently appeared suggesting that the model should be abandoned. The present paper focuses on these claims and concludes that they are unfounded. Much of the experimental data that have been used to reject the model are actually consistent with it.

  7. A general equilibrium model of ecosystem services in a river basin

    Science.gov (United States)

    Travis Warziniack

    2014-01-01

    This study builds a general equilibrium model of ecosystem services, with sectors of the economy competing for use of the environment. The model recognizes that production processes in the real world require a combination of natural and human inputs, and understanding the value of these inputs and their competing uses is necessary when considering policies of resource...

  8. Risk-informed optimisation of railway tracks inspection and maintenance procedures

    International Nuclear Information System (INIS)

    Podofillini, Luca; Zio, Enrico; Vatn, Jorn

    2006-01-01

    Nowadays, efforts are being made by the railway industry to apply reliability-based and risk-informed approaches to the maintenance optimisation of railway infrastructures, with the aim of reducing operation and maintenance expenditures while still assuring high safety standards. In particular, in this paper, we address the use of ultrasonic inspection cars and develop a methodology for determining an optimal strategy for their use. A model is developed to calculate the risks and costs associated with an inspection strategy, accounting for the realistic features of the rail failure process and including the actual inspection and maintenance procedures followed by the railway company. A multi-objective optimisation viewpoint is adopted in an effort to optimise inspection and maintenance procedures with respect to both economic and safety-related aspects. More precisely, the objective functions considered here are such as to drive the search towards solutions characterized by low expenditures and low derailment probability. The optimisation is performed by means of a genetic algorithm. The work has been carried out within a study for the Norwegian National Rail Administration (Jernbaneverket)

  9. Robustness analysis of bogie suspension components Pareto optimised values

    Science.gov (United States)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, a robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
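
    A minimal Monte Carlo sketch of the sampling step only (not the maximum-entropy or dimensional-reduction machinery of the study): each design parameter is drawn from a lognormal distribution whose mean equals its nominal Pareto-optimised value with a chosen COV, and a toy response function stands in for the vehicle dynamics model. All numbers are hypothetical.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical nominal (Pareto-optimised) suspension parameters.
        nominal = {"kx_prim": 3.0e7, "ky_prim": 6.0e6, "kx_sec": 2.0e5,
                   "kz_sec": 4.0e5, "c_yaw": 1.0e5}
        cov = 0.1   # coefficient of variation of the parameter uncertainty

        def sample(nominal_value, cov, n):
            # Lognormal with mean equal to the nominal value and the requested COV.
            sigma = np.sqrt(np.log(1.0 + cov ** 2))
            mu = np.log(nominal_value) - 0.5 * sigma ** 2
            return rng.lognormal(mu, sigma, n)

        def wear_index(params):
            """Toy stand-in for the vehicle-dynamics response (wear number)."""
            return 1e-7 * params["kx_prim"] + 5e-7 * params["ky_prim"] + 2e-6 * params["kx_sec"]

        n = 10_000
        draws = {k: sample(v, cov, n) for k, v in nominal.items()}
        response = wear_index(draws)
        print(f"mean wear index {response.mean():.2f}, "
              f"95th percentile {np.percentile(response, 95):.2f}, "
              f"P(response > threshold) {np.mean(response > 10.0):.4f}")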

  10. Production optimisation in the petrochemical industry by hierarchical multivariate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Magnus; Furusjoe, Erik; Jansson, Aasa

    2004-06-01

    This project demonstrates the advantages of applying hierarchical multivariate modelling in the petrochemical industry in order to increase knowledge of the total process. The models indicate possible ways to optimise the process regarding the use of energy and raw material, which is directly linked to the environmental impact of the process. The refinery of Nynaes Refining AB (Goeteborg, Sweden) has acted as a demonstration site in this project. The models developed for the demonstration site resulted in: Detection of an unknown process disturbance and suggestions of possible causes; Indications on how to increase the yield in combination with energy savings; The possibility to predict product quality from on-line process measurements, making the results available at a higher frequency than customary laboratory analysis; Quantification of the gradually lowered efficiency of heat transfer in the furnace and increased fuel consumption as an effect of soot build-up on the furnace coils; Increased knowledge of the relation between production rate and the efficiency of the heat exchangers. This report is one of two reports from the project. It contains a technical discussion of the result with some degree of detail. A shorter and more easily accessible report is also available, see IVL report B1586-A.

  11. On a unified presentation of the non-equilibrium two-phase flow models

    International Nuclear Information System (INIS)

    Boure, J.A.

    1975-01-01

    If the various existing one-dimensional two-phase flow models are consistent, they must appear as particular cases of more general models. It is shown that such is the case if, and only if, the mathematical form of the laws of the transfers between the phases is sufficiently general. These transfer laws control the non-equilibrium phenomena. A convenient general model is a particular form of the two-fluid model. This particular form involves three equations and three dependent variables characterizing the mixture, and three equations and three dependent variables characterizing the differences between the phases (slip, thermal non-equilibriums). The mathematical expressions of the transfer terms present in the above equations involve first-order partial derivatives of the dependent variables. The other existing models may be deduced from the general model by making assumptions on the fluid evolution. Several examples are given. The resulting unified presentation of the existing models enables a comparison of the implicit assumptions made in these models on the transfer laws. It is, therefore, a useful tool for the appraisal of the existing models and for the development of new models [fr

  12. Fluctuation-dissipation relation and stationary distribution of an exactly solvable many-particle model for active biomatter far from equilibrium.

    Science.gov (United States)

    Netz, Roland R

    2018-05-14

    An exactly solvable, Hamiltonian-based model of many massive particles that are coupled by harmonic potentials and driven by stochastic non-equilibrium forces is introduced. The stationary distribution and the fluctuation-dissipation relation are derived in closed form for the general non-equilibrium case. Deviations from equilibrium are on one hand characterized by the difference of the obtained stationary distribution from the Boltzmann distribution; this is possible because the model derives from a particle Hamiltonian. On the other hand, the difference between the obtained non-equilibrium fluctuation-dissipation relation and the standard equilibrium fluctuation-dissipation theorem allows us to quantify non-equilibrium in an alternative fashion. Both indicators of non-equilibrium behavior, i.e., deviations from the Boltzmann distribution and deviations from the equilibrium fluctuation-dissipation theorem, can be expressed in terms of a single non-equilibrium parameter α that involves the ratio of friction coefficients and random force strengths. The concept of a non-equilibrium effective temperature, which can be defined by the relation between fluctuations and the dissipation, is by comparison with the exactly derived stationary distribution shown not to hold, even if the effective temperature is made frequency dependent. The analysis is not confined to close-to-equilibrium situations but rather is exact and thus holds for arbitrarily large deviations from equilibrium. Also, the suggested harmonic model can be obtained from non-linear mechanical network systems by an expansion in terms of suitably chosen deviatory coordinates; the obtained results should thus be quite general. This is demonstrated by comparison of the derived non-equilibrium fluctuation dissipation relation with experimental data on actin networks that are driven out of equilibrium by energy-consuming protein motors. The comparison is excellent and allows us to extract the non-equilibrium

  14. OPTIMISATION OF A DRIVE SYSTEM AND ITS EPICYCLIC GEAR SET

    OpenAIRE

    Bellegarde, Nicolas; Dessante, Philippe; Vidal, Pierre; Vannier, Jean-Claude

    2007-01-01

    International audience; This paper describes the design of a drive consisting of a DC motor, a speed reducer, a lead screw transformation system, a power converter and its associated DC source. The objective is to reduce the mass of the system. Indeed, the volume and weight optimisation of an electrical drive is an important issue for embedded applications. Here, we present an analytical model of the system in a specific application and afterwards an optimisation of the motor and speed reduce...

  15. Layout Optimisation of Wave Energy Converter Arrays

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé; Nava, Vincenzo; Topper, Mathew B. R.

    2017-01-01

    This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation......, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm...

  16. Optimisation of the formulation of a bubble bath by a chemometric approach market segmentation and optimisation.

    Science.gov (United States)

    Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella

    2003-03-01

    The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of Panel Test results. A first Panel Test was performed to choose the best essence among four proposed to the consumers; the best essence chosen was used in the revised commercial bubble bath. Afterwards, the effect of changing the amounts of four components of the bubble bath (the primary surfactant, the essence, the hydratant and the colouring agent) was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second Panel Test, in which the consumers were requested to evaluate the samples coming from the experimental design. The results were then treated by Principal Component Analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a poorer formulation. The final target, i.e. the optimisation of the formulation for each segment, was obtained by the calculation of regression models relating the subjective evaluations given by the Panel and the compositions of the samples. The regression models allowed the identification of the best formulations for the two segments of the market.

  17. Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices

    Science.gov (United States)

    Garcia Bertrand, Raquel

    In this dissertation we propose an equilibrium procedure that coordinates the point of view of every market agent resulting in an equilibrium that simultaneously maximizes the independent objective of every market agent and satisfies network constraints. Therefore, the activities of the generating companies, consumers and an independent system operator are modeled: (1) The generating companies seek to maximize profits by specifying hourly step functions of productions and minimum selling prices, and bounds on productions. (2) The goals of the consumers are to maximize their economic utilities by specifying hourly step functions of demands and maximum buying prices, and bounds on demands. (3) The independent system operator then clears the market taking into account consistency conditions as well as capacity and line losses so as to achieve maximum social welfare. Then, we approach this equilibrium problem using complementarity theory in order to have the capability of imposing constraints on dual variables, i.e., on prices, such as minimum profit conditions for the generating units or maximum cost conditions for the consumers. In this way, given the form of the individual optimization problems, the Karush-Kuhn-Tucker conditions for the generating companies, the consumers and the independent system operator are both necessary and sufficient. The simultaneous solution to all these conditions constitutes a mixed linear complementarity problem. We include minimum profit constraints imposed by the units in the market equilibrium model. These constraints are added as additional constraints to the equivalent quadratic programming problem of the mixed linear complementarity problem previously described. For the sake of clarity, the proposed equilibrium or near-equilibrium is first developed for the particular case considering only one time period. Afterwards, we consider an equilibrium or near-equilibrium applied to a multi-period framework. This model embodies binary
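
    A single-period, single-node sketch of the clearing step (not the multi-period complementarity model of the dissertation): the ISO maximises social welfare over stepwise generation and demand bids, and the dual of the power-balance constraint plays the role of the clearing price. Bid data are hypothetical, and reading the dual relies on the HiGHS backend of recent SciPy.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical stepwise bids: generator offers (price $/MWh, size MW)
        gen_price = np.array([18.0, 25.0, 40.0]);  gen_cap = np.array([50.0, 40.0, 30.0])
        # Consumer bids (maximum buying price $/MWh, size MW)
        dem_price = np.array([60.0, 35.0, 22.0]);  dem_cap = np.array([45.0, 35.0, 25.0])

        # Variables x = [g1..g3, d1..d3]; maximise sum(u*d) - sum(c*g), i.e. minimise the negative.
        c = np.concatenate([gen_price, -dem_price])
        A_eq = np.array([[1, 1, 1, -1, -1, -1]], dtype=float)   # power balance: sum g = sum d
        b_eq = np.array([0.0])
        bounds = [(0, cap) for cap in np.concatenate([gen_cap, dem_cap])]

        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        g, d = res.x[:3], res.x[3:]
        price = res.eqlin.marginals[0]   # dual of the balance constraint; magnitude is the clearing price
        print("dispatch g =", g, "served demand d =", d)
        print(f"clearing price ~ {abs(price):.1f} $/MWh, welfare = {-res.fun:.1f} $")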

  18. An Equilibrium Chance-Constrained Multiobjective Programming Model with Birandom Parameters and Its Application to Inventory Problem

    Directory of Open Access Journals (Sweden)

    Zhimiao Tao

    2013-01-01

    Full Text Available An equilibrium chance-constrained multiobjective programming model with birandom parameters is proposed. A type of linear model is converted into its crisp equivalent model. Then a birandom simulation technique is developed to tackle the general birandom objective functions and birandom constraints. By embedding the birandom simulation technique, a modified genetic algorithm is designed to solve the equilibrium chance-constrained multiobjective programming model. We apply the proposed model and algorithm to a real-world inventory problem and show the effectiveness of the model and the solution method.

  19. Modeling and Control of an Ornithopter for Non-Equilibrium Maneuvers

    OpenAIRE

    Rose, Cameron Jarrel

    2015-01-01

    Flapping-winged flight is very complex, and it is difficult to efficiently model the unsteady airflow and nonlinear dynamics for online control. While steady state flight is well understood, transitions between flight regimes are not readily modeled or controlled. Maneuverability in non-equilibrium flight, which birds and insects readily exhibit in nature, is necessary to operate in the types of cluttered environments that small-scale flapping-winged robots are best suited for. The advantages...

  20. Incorporation of the equilibrium temperature approach in a Soil and Water Assessment Tool hydroclimatological stream temperature model

    Science.gov (United States)

    Du, Xinzhong; Shrestha, Narayan Kumar; Ficklin, Darren L.; Wang, Junye

    2018-04-01

    Stream temperature is an important indicator for biodiversity and sustainability in aquatic ecosystems. The stream temperature model currently in the Soil and Water Assessment Tool (SWAT) only considers the impact of air temperature on stream temperature, while the hydroclimatological stream temperature model developed within the SWAT model considers hydrology and the impact of air temperature in simulating the water-air heat transfer process. In this study, we modified the hydroclimatological model by including the equilibrium temperature approach to model heat transfer processes at the water-air interface, which reflects the influences of air temperature, solar radiation, wind speed and streamflow conditions on the heat transfer process. The thermal capacity of the streamflow is modeled by the variation of the stream water depth. An advantage of this equilibrium temperature model is the simple parameterization, with only two parameters added to model the heat transfer processes. The equilibrium temperature model proposed in this study is applied and tested in the Athabasca River basin (ARB) in Alberta, Canada. The model is calibrated and validated at five stations throughout different parts of the ARB, where close to monthly samplings of stream temperatures are available. The results indicate that the equilibrium temperature model proposed in this study provided better and more consistent performances for the different regions of the ARB with the values of the Nash-Sutcliffe Efficiency coefficient (NSE) greater than those of the original SWAT model and the hydroclimatological model. To test the model performance for different hydrological and environmental conditions, the equilibrium temperature model was also applied to the North Fork Tolt River Watershed in Washington, United States. The results indicate a reasonable simulation of stream temperature using the model proposed in this study, with minimum relative error values compared to the other two models
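
    A toy sketch of an equilibrium-temperature update (not the SWAT implementation): water temperature relaxes toward an equilibrium temperature driven by air temperature and solar radiation, at a rate scaled by flow depth. The bulk exchange coefficient and the crude equilibrium-temperature approximation used here are illustrative assumptions.

        import numpy as np

        RHO, CP = 1000.0, 4182.0          # water density (kg/m3) and heat capacity (J/kg/K)

        def step_stream_temperature(tw, ta, rsw, depth, k_bulk=30.0, dt=86400.0):
            """
            One daily update of stream temperature (deg C).
            tw: current water temperature; ta: air temperature; rsw: shortwave radiation (W/m2);
            depth: mean flow depth (m); k_bulk: bulk surface exchange coefficient (W/m2/K, assumed).
            """
            te = ta + rsw / k_bulk                       # crude equilibrium temperature (assumption)
            return tw + dt * k_bulk * (te - tw) / (RHO * CP * depth)

        # Hypothetical one-week forcing
        air_t = np.array([4.0, 6.0, 9.0, 12.0, 10.0, 7.0, 5.0])
        solar = np.array([80., 120., 200., 260., 180., 100., 60.])
        depth = np.array([1.2, 1.1, 1.0, 0.9, 1.0, 1.3, 1.5])   # deeper water responds more slowly

        tw = 5.0
        for ta, rsw, d in zip(air_t, solar, depth):
            tw = step_stream_temperature(tw, ta, rsw, d)
            print(f"{tw:.2f}", end=" ")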

  1. A framework for modelling gene regulation which accommodates non-equilibrium mechanisms.

    Science.gov (United States)

    Ahsendorf, Tobias; Wong, Felix; Eils, Roland; Gunawardena, Jeremy

    2014-12-05

    Gene regulation has, for the most part, been quantitatively analysed by assuming that regulatory mechanisms operate at thermodynamic equilibrium. This formalism was originally developed to analyse the binding and unbinding of transcription factors from naked DNA in eubacteria. Although widely used, it has made it difficult to understand the role of energy-dissipating, epigenetic mechanisms, such as DNA methylation, nucleosome remodelling and post-translational modification of histones and co-regulators, which act together with transcription factors to regulate gene expression in eukaryotes. Here, we introduce a graph-based framework that can accommodate non-equilibrium mechanisms. A gene-regulatory system is described as a graph, which specifies the DNA microstates (vertices), the transitions between microstates (edges) and the transition rates (edge labels). The graph yields a stochastic master equation for how microstate probabilities change over time. We show that this framework has broad scope by providing new insights into three very different ad hoc models, of steroid-hormone responsive genes, of inherently bounded chromatin domains and of the yeast PHO5 gene. We find, moreover, surprising complexity in the regulation of PHO5, which has not yet been experimentally explored, and we show that this complexity is an inherent feature of being away from equilibrium. At equilibrium, microstate probabilities do not depend on how a microstate is reached but, away from equilibrium, each path to a microstate can contribute to its steady-state probability. Systems that are far from equilibrium thereby become dependent on history and the resulting complexity is a fundamental challenge. To begin addressing this, we introduce a graph-based concept of independence, which can be applied to sub-systems that are far from equilibrium, and prove that history-dependent complexity can be circumvented when sub-systems operate independently. As epigenomic data become increasingly
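
    A small sketch of the graph formalism: microstates are vertices, labelled edges are transition rates, the master equation dp/dt = L p uses a Laplacian-style rate matrix L built from the edge labels, and the steady state is the normalised null vector of L. The three-microstate system and its rates below are hypothetical; the cycle with no reverse edge is what breaks detailed balance.

        import numpy as np
        from scipy.linalg import null_space

        # Hypothetical gene-regulation graph: 3 DNA microstates, directed edges with rates (1/s).
        states = ["empty", "TF_bound", "TF+remodeller"]
        edges = {(0, 1): 2.0, (1, 0): 0.5,          # TF binding / unbinding
                 (1, 2): 1.0, (2, 1): 0.2,          # remodeller recruitment / loss
                 (2, 0): 0.3}                       # energy-dissipating reset (no reverse edge)

        n = len(states)
        L = np.zeros((n, n))
        for (i, j), rate in edges.items():
            L[j, i] += rate            # column i loses probability to row j
            L[i, i] -= rate            # conservation: columns of L sum to zero

        # Steady state: L p = 0, normalised to sum to one.
        p = null_space(L)[:, 0]
        p = p / p.sum()
        for s, prob in zip(states, p):
            print(f"{s:15s} {prob:.3f}")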

  2. Shape optimisation and performance analysis of flapping wings

    KAUST Repository

    Ghommem, Mehdi

    2012-09-04

    In this paper, shape optimisation of flapping wings in forward flight is considered. This analysis is performed by combining a local gradient-based optimizer with the unsteady vortex lattice method (UVLM). Although the UVLM applies only to incompressible, inviscid flows where the separation lines are known a priori, Persson et al. [1] showed through a detailed comparison between UVLM and higher-fidelity computational fluid dynamics methods for flapping flight that the UVLM schemes produce accurate results for attached flow cases and even remain trend-relevant in the presence of flow separation. As such, they recommended the use of an aerodynamic model based on UVLM to perform preliminary design studies of flapping wing vehicles Unlike standard computational fluid dynamics schemes, this method requires meshing of the wing surface only and not of the whole flow domain [2]. From the design or optimisation perspective taken in our work, it is fairly common (and sometimes entirely necessary, as a result of the excessive computational cost of the highest fidelity tools such as Navier-Stokes solvers) to rely upon such a moderate level of modelling fidelity to traverse the design space in an economical manner. The objective of the work, described in this paper, is to identify a set of optimised shapes that maximise the propulsive efficiency, defined as the ratio of the propulsive power over the aerodynamic power, under lift, thrust, and area constraints. The shape of the wings is modelled using B-splines, a technology used in the computer-aided design (CAD) field for decades. This basis can be used to smoothly discretize wing shapes with few degrees of freedom, referred to as control points. The locations of the control points constitute the design variables. The results suggest that changing the shape yields significant improvement in the performance of the flapping wings. The optimisation pushes the design to "bird-like" shapes with substantial increase in the time
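
    A small sketch of the B-spline parameterisation step only (no UVLM aerodynamics): control-point ordinates are the design variables, SciPy's BSpline evaluates the chord distribution along the span, and a toy surrogate objective stands in for the lift/area-constrained propulsive-efficiency functional.

        import numpy as np
        from scipy.interpolate import BSpline
        from scipy.optimize import minimize

        DEGREE, N_CTRL = 3, 6
        # Clamped knot vector on [0, 1] (span coordinate), standard for CAD-style curves.
        knots = np.concatenate([np.zeros(DEGREE),
                                np.linspace(0, 1, N_CTRL - DEGREE + 1),
                                np.ones(DEGREE)])
        span = np.linspace(0.0, 1.0, 50)

        def chord(ctrl):
            return BSpline(knots, ctrl, DEGREE)(span)

        def objective(ctrl):
            """Toy surrogate: favour a smooth, tapered chord while holding a target wing area."""
            c = chord(ctrl)
            area = np.mean(c) * (span[-1] - span[0])     # simple quadrature of the chord distribution
            smoothness = np.sum(np.diff(c, 2) ** 2)
            taper_bonus = c[0] - c[-1]
            return 100.0 * (area - 0.12) ** 2 + 10.0 * smoothness - taper_bonus

        ctrl0 = np.full(N_CTRL, 0.12)                    # initial rectangular planform (m)
        res = minimize(objective, ctrl0, method="L-BFGS-B",
                       bounds=[(0.02, 0.25)] * N_CTRL)
        print("optimised control points:", np.round(res.x, 3))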

  3. A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis

    OpenAIRE

    Masataka, SUZUKI; Yoshihiko, YAMAZAKI; Yumiko, TANIGUCHI; Department of Psychology, Kinjo Gakuin University; Department of Health and Physical Education, Nagoya Institute of Technology; College of Human Life and Environment, Kinjo Gakuin University

    2003-01-01

    SUZUKI,M., YAMAZAKI,Y. and TANIGUCHI,Y., A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis. Adv. Exerc. Sports Physiol., Vol.9, No.1 pp.7-25, 2003. According to the equilibrium point hypothesis of motor control, control action of muscles is not explicitly computed, but rather arises as a consequence of interaction among moving equilibrium point, reflex feedback and muscle mechanical properties. This approach is attractive as it obviates the n...

  4. The Matrix model, a driven state variables approach to non-equilibrium thermodynamics

    NARCIS (Netherlands)

    Jongschaap, R.J.J.

    2001-01-01

    One of the new approaches in non-equilibrium thermodynamics is the so-called matrix model of Jongschaap. In this paper some features of this model are discussed. We indicate the differences with the more common approach based upon internal variables and the more sophisticated Hamiltonian and GENERIC

  5. Optimising metadata workflows in a distributed information environment

    OpenAIRE

    Robertson, R. John; Barton, Jane

    2005-01-01

    The different purposes present within a distributed information environment create the potential for repositories to enhance their metadata by capitalising on the diversity of metadata available for any given object. This paper presents three conceptual reference models required to achieve this optimisation of metadata workflow: the ecology of repositories, the object lifecycle model, and the metadata lifecycle model. It suggests a methodology for developing the metadata lifecycle model, and ...

  6. Research on Duct Flow Field Optimisation of a Robot Vacuum Cleaner

    Directory of Open Access Journals (Sweden)

    Xiao-bo Lai

    2011-11-01

    Full Text Available The duct of a robot vacuum cleaner is the length of the flow channel between the inlet of the rolling brush blower and the outlet of the vacuum blower. To cope with the pressure drop problem of the duct flow field in a robot vacuum cleaner, a method based on the Pressure Implicit with Splitting of Operators (PISO) algorithm is introduced and the optimisation design of the duct flow field is implemented. Firstly, the duct structure in a robot vacuum cleaner is taken as the research object and, with computational fluid dynamics (CFD) theories adopted, a three-dimensional fluid model of the duct is established by means of the FLUENT solver of the CFD software. Secondly, with the k-ε turbulence model of three-dimensional incompressible fluid considered and the PISO pressure modification algorithm employed, the flow field numerical simulations inside the duct of the robot vacuum cleaner are carried out. Then, the velocity vector plots on an arbitrary plane of the duct flow field are obtained. Finally, an investigation of the dynamic characteristics of the duct flow field is done, defects of the original duct flow field are analysed, and the original flow field is then optimised. Experimental results show that the optimised duct flow field effectively reduces the pressure drop, validating the feasibility and correctness of the theoretical modelling and optimisation approaches.

  8. Optimising training data for ANNs with Genetic Algorithms

    OpenAIRE

    Kamp, R. G.; Savenije, H. H. G.

    2006-01-01

    International audience; Artificial Neural Networks (ANNs) have proved to be good modelling tools in hydrology for rainfall-runoff modelling and hydraulic flow modelling. Representative datasets are necessary for the training phase in which the ANN learns the model's input-output relations. Good and representative training data is not always available. In this publication Genetic Algorithms (GA) are used to optimise training datasets. The approach is tested with an existing hydraulic model in ...

  9. Optimising training data for ANNs with Genetic Algorithms

    OpenAIRE

    R. G. Kamp; R. G. Kamp; H. H. G. Savenije

    2006-01-01

    Artificial Neural Networks (ANNs) have proved to be good modelling tools in hydrology for rainfall-runoff modelling and hydraulic flow modelling. Representative datasets are necessary for the training phase in which the ANN learns the model's input-output relations. Good and representative training data is not always available. In this publication Genetic Algorithms (GA) are used to optimise training datasets. The approach is tested with an existing hydraulic model in The Netherlands. An...

  10. Dose optimisation in single plane interstitial brachytherapy

    DEFF Research Database (Denmark)

    Tanderup, Kari; Hellebust, Taran Paulsen; Honoré, Henriette Benedicte

    2006-01-01

    BACKGROUND AND PURPOSE: Brachytherapy dose distributions can be optimised by modulation of source dwell times. In this study dose optimisation in single planar interstitial implants was evaluated in order to quantify the potential benefit in patients. MATERIAL AND METHODS: In 14 patients, treated for recurrent rectal and cervical cancer, flexible catheters were sutured intra-operatively to the tumour bed in areas with compromised surgical margin. Both non-optimised, geometrically and graphically optimised CT-based dose plans were made. The overdose index... on the regularity of the implant, such that the benefit of optimisation was larger for irregular implants. OI and HI correlated strongly with target volume limiting the usability of these parameters for comparison of dose plans between patients. CONCLUSIONS: Dwell time optimisation significantly...

  11. Biofuels carbon footprints: Whole-systems optimisation for GHG emissions reduction.

    Science.gov (United States)

    Zamboni, Andrea; Murphy, Richard J; Woods, Jeremy; Bezzo, Fabrizio; Shah, Nilay

    2011-08-01

    A modelling approach for strategic design of ethanol production systems combining lifecycle analysis (LCA) and supply chain optimisation (SCO) can significantly contribute to assessing their economic and environmental sustainability and to guiding decision makers towards a more conscious implementation of ad hoc farming and processing practices. Most model applications so far have been descriptive in nature; the model proposed in this work is "normative" in that it aims to guide actions towards optimal outcomes (e.g. optimising the nitrogen balance through the whole supply chain). The modelling framework was conceived to steer strategic policies through a geographically specific design process considering economic and environmental criteria. The results show how a crop management strategy devised from a whole-systems perspective can significantly contribute to mitigating global warming even in first-generation technologies. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Restructured electric power systems analysis of electricity markets with equilibrium models

    CERN Document Server

    2010-01-01

    Electricity market deregulation is driving power production from a monopolistic structure into a competitive market environment. The development of electricity markets has created the need to analyze market behavior and market power. Restructured Electric Power Systems reviews the latest developments in electricity market equilibrium models and discusses the application of such models in the practical analysis and assessment of electricity markets.

  13. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    NARCIS (Netherlands)

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.

    2016-01-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of

  14. Modelling of diffusion from equilibrium diffraction fluctuations in ordered phases

    International Nuclear Information System (INIS)

    Arapaki, E.; Argyrakis, P.; Tringides, M.C.

    2008-01-01

    Measurements of the collective diffusion coefficient D_c at equilibrium are difficult because they are based on monitoring low-amplitude concentration fluctuations generated spontaneously, which are difficult to measure experimentally. A new experimental method has recently been used to measure time-dependent correlation functions from the diffraction intensity fluctuations and was applied to measure thermal step fluctuations. The method has not yet been applied to measure superstructure intensity fluctuations in surface overlayers and to extract D_c. With Monte Carlo simulations we study equilibrium fluctuations in Ising lattice gas models with nearest-neighbor attractive and repulsive interactions. The extracted diffusion coefficients are compared to the ones obtained from equilibrium methods. The new results are in good agreement with the results from the other methods, i.e., D_c decreases monotonically with coverage Θ for attractive interactions and increases monotonically with Θ for repulsive interactions. Even the absolute value of D_c agrees well with the results obtained with the probe area method. These results confirm that this diffraction-based method is a novel, reliable way to measure D_c, especially within the ordered region of the phase diagram when the superstructure spot has large intensity
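
    A minimal Kawasaki-dynamics sketch of such an equilibrium lattice-gas Monte Carlo simulation: particles on a periodic square lattice with a nearest-neighbour interaction J hop into empty neighbouring sites with Metropolis acceptance, conserving the coverage. Lattice size, coverage and temperature are illustrative, and the correlation-function analysis that would yield D_c is not shown.

        import numpy as np

        rng = np.random.default_rng(0)
        L, coverage, J, kT = 32, 0.3, -1.0, 1.0   # J < 0: attractive nearest-neighbour interaction

        # Initialise the occupancy grid at the requested coverage.
        occ = (rng.random((L, L)) < coverage).astype(int)
        moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])

        def site_energy(grid, i, j):
            # Interaction energy of site (i, j) with its four periodic neighbours.
            s = grid[(i + 1) % L, j] + grid[(i - 1) % L, j] + grid[i, (j + 1) % L] + grid[i, (j - 1) % L]
            return J * grid[i, j] * s

        def sweep(grid):
            for _ in range(L * L):
                i, j = rng.integers(0, L, 2)
                di, dj = moves[rng.integers(4)]
                k, l = (i + di) % L, (j + dj) % L
                if grid[i, j] == grid[k, l]:
                    continue                                    # only particle/hole pairs can exchange
                e_old = site_energy(grid, i, j) + site_energy(grid, k, l)
                grid[i, j], grid[k, l] = grid[k, l], grid[i, j]
                e_new = site_energy(grid, i, j) + site_energy(grid, k, l)
                if rng.random() >= np.exp(-(e_new - e_old) / kT):
                    grid[i, j], grid[k, l] = grid[k, l], grid[i, j]   # reject: swap back

        for _ in range(200):
            sweep(occ)
        print("coverage preserved:", occ.mean())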

  15. Electric Circuit Model Analogy for Equilibrium Lattice Relaxation in Semiconductor Heterostructures

    Science.gov (United States)

    Kujofsa, Tedi; Ayers, John E.

    2018-01-01

    The design and analysis of semiconductor strained-layer device structures require an understanding of the equilibrium profiles of strain and dislocations associated with mismatched epitaxy. Although it has been shown that the equilibrium configuration for a general semiconductor strained-layer structure may be found numerically by energy minimization using an appropriate partitioning of the structure into sublayers, such an approach is computationally intense and non-intuitive. We have therefore developed a simple electric circuit model approach for the equilibrium analysis of these structures. In it, each sublayer of an epitaxial stack may be represented by an analogous circuit configuration involving an independent current source, a resistor, an independent voltage source, and an ideal diode. A multilayered structure may be built up by the connection of the appropriate number of these building blocks, and the node voltages in the analogous electric circuit correspond to the equilibrium strains in the original epitaxial structure. This enables analysis using widely accessible circuit simulators, and an intuitive understanding of electric circuits can easily be extended to the relaxation of strained-layer structures. Furthermore, the electrical circuit model may be extended to continuously-graded epitaxial layers by considering the limit as the individual sublayer thicknesses are diminished to zero. In this paper, we describe the mathematical foundation of the electrical circuit model, demonstrate its application to several representative structures involving InxGa1-xAs strained layers on GaAs (001) substrates, and develop its extension to continuously-graded layers. This extension allows the development of analytical expressions for the strain, misfit dislocation density, critical layer thickness and widths of misfit dislocation free zones for a continuously-graded layer having an arbitrary compositional profile. It is similar to the transition from circuit

  16. Non-equilibrium synergistic effects in atmospheric pressure plasmas.

    Science.gov (United States)

    Guo, Heng; Zhang, Xiao-Ning; Chen, Jian; Li, He-Ping; Ostrikov, Kostya Ken

    2018-03-19

    Non-equilibrium is one of the important features of an atmospheric gas discharge plasma. It involves complicated physical-chemical processes and plays a key role in many practical plasma processing applications. In this report, a novel complete non-equilibrium model is developed to reveal the non-equilibrium synergistic effects for atmospheric-pressure low-temperature plasmas (AP-LTPs). It combines a thermal-chemical non-equilibrium fluid model for the quasi-neutral plasma region and a simplified sheath model for the electrode sheath region. The free-burning argon arc is selected as a model system because both the electrical-thermal-chemical equilibrium and non-equilibrium regions are involved simultaneously in this arc plasma system. The modeling results indicate for the first time that it is the strong and synergistic interactions among the mass, momentum and energy transfer processes that determine the self-consistent non-equilibrium characteristics of the AP-LTPs. An energy transfer process related to the non-uniform spatial distributions of the electron-to-heavy-particle temperature ratio has also been discovered for the first time. It has a significant influence on the self-consistent prediction of the transition region between the "hot" and "cold" equilibrium regions of an AP-LTP system. The modeling results provide instructive guidance for predicting and possibly controlling the non-equilibrium particle-energy transport processes in various AP-LTPs in the future.

  17. Oscillation Susceptibility Analysis of the ADMIRE Aircraft along the Path of Longitudinal Flight Equilibriums in Two Different Mathematical Models

    Directory of Open Access Journals (Sweden)

    Achim Ionita

    2009-01-01

    Full Text Available The oscillation susceptibility of the ADMIRE aircraft along the path of longitudinal flight equilibriums is analyzed numerically in the general and in a simplified flight model. More precisely, the longitudinal flight equilibriums, the stability of these equilibriums, and the existence of bifurcations along the path of these equilibriums are investigated in both models. Maneuvers and appropriate piloting tasks for the touch-down moment are simulated in both models. The computed results obtained in the models are compared in order to see if the movement concerning the landing phase computed in the simplified model is similar to that computed in the general model. The similarity we find is not a proof of the structural stability of the simplified system, which, as far as we know, has never been established, but it can increase the confidence that the simplified system correctly describes the real phenomenon.

  18. Mechanism of alkalinity lowering and chemical equilibrium model of high fly ash silica fume cement

    International Nuclear Information System (INIS)

    Hoshino, Seiichi; Honda, Akira; Negishi, Kumi

    2014-01-01

    The mechanism of alkalinity lowering of a High Fly ash Silica fume Cement (HFSC) under liquid/solid ratio conditions where the pH is largely controlled by the soluble alkali components (Region I) has been studied. This mechanism was incorporated in the chemical equilibrium model of HFSC. As a result, it is suggested that the dissolution and precipitation behavior of SO4^2- partially contributes to alkalinity lowering of HFSC in Region I. A chemical equilibrium model of HFSC incorporating alkali (Na, K) adsorption, which was presumed as another contributing factor of the alkalinity lowering effect, was also developed, and an HFSC immersion experiment was analyzed using the model. The results of the developed model showed good agreement with the experiment results. From the above results, it was concluded that the alkalinity lowering of HFSC in Region I was attributed to both the dissolution and precipitation behavior of SO4^2- and alkali adsorption, in addition to the absence of Ca(OH)2. A chemical equilibrium model of HFSC incorporating alkali and SO4^2- adsorption was also proposed. (author)

  19. Reliability analysis and optimisation of subsea compression system facing operational covariate stresses

    International Nuclear Information System (INIS)

    Okaro, Ikenna Anthony; Tao, Longbin

    2016-01-01

    This paper proposes an enhanced Weibull-Corrosion Covariate model for reliability assessment of a system facing operational stresses. The newly developed model is applied to a Subsea Gas Compression System planned for offshore West Africa to predict its reliability index. System technical failure was modelled by developing a Weibull failure model incorporating a physically tested corrosion profile as a stress, in order to quantify the survival rate of the system under additional operational covariates including marine pH, temperature and pressure. Using Reliability Block Diagrams and enhanced Fussell-Vesely formulations, the whole system was systematically decomposed into sub-systems to analyse the criticality of each component and optimise them. Human reliability was addressed using an enhanced barrier weighting method. A rapid degradation curve is obtained for the subsea system, relative to the base case, when it is subjected to a time-dependent corrosion stress factor. It reveals that subsea system components fail faster than their mean-time-to-failure specifications from the Offshore Reliability Database as a result of the cumulative exertion of marine stresses. The case study demonstrated that the reliability of a subsea system can be systematically optimised by modelling the system under higher technical and organisational stresses, prioritising the critical sub-systems and making befitting provisions for redundancy and tolerances. - Highlights: • Novel Weibull Corrosion-Covariate model for reliability analysis of subsea assets. • Predicts the accelerated degradation profile of a subsea gas compression system. • An enhanced optimisation method based on the Fussell-Vesely decomposition process. • A new optimisation approach for smoothing over- and under-designed components. • Demonstrated a significant improvement in producing more realistic failure rates.
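
    A condensed sketch of a Weibull model with an operational covariate acting through a proportional-hazards term, as an illustration of the general idea rather than the paper's calibrated model: the corrosion stress raises the hazard multiplicatively, and reliability follows from numerically integrating the hazard. All parameter values are hypothetical.

        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        # Hypothetical baseline Weibull parameters for a subsea compressor module.
        beta, eta = 2.2, 12.0          # shape, scale (years)
        gamma = 0.08                   # covariate coefficient (per mm of corrosion depth)

        t = np.linspace(0.0, 15.0, 601)                 # years in service
        corrosion = 0.4 * t                             # assumed linear corrosion growth, mm

        h0 = (beta / eta) * (t / eta) ** (beta - 1.0)   # baseline Weibull hazard
        h = h0 * np.exp(gamma * corrosion)              # proportional-hazards covariate effect

        H = cumulative_trapezoid(h, t, initial=0.0)     # cumulative hazard
        R = np.exp(-H)                                  # reliability under the corrosion stress
        R_base = np.exp(-(t / eta) ** beta)             # reliability of the unstressed base case

        for year in (5, 10, 15):
            i = np.searchsorted(t, year)
            print(f"t = {year:2d} y: R_stressed = {R[i]:.3f}, R_base = {R_base[i]:.3f}")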

  20. Exploring the Use of Multiple Analogical Models when Teaching and Learning Chemical Equilibrium

    Science.gov (United States)

    Harrison, Allan G.; De Jong, Onno

    2005-01-01

    This study describes the multiple analogical models used to introduce and teach Grade 12 chemical equilibrium. We examine the teacher's reasons for using models, explain each model's development during the lessons, and analyze the understandings students derived from the models. A case study approach was used and the data were drawn from the…

  1. Real-time optimisation of the Hoa Binh reservoir, Vietnam

    DEFF Research Database (Denmark)

    Richaud, Bertrand; Madsen, Henrik; Rosbjerg, Dan

    2011-01-01

    Multi-purpose reservoirs often have to be managed according to conflicting objectives, which requires efficient tools for trading-off the objectives. This paper proposes a multi-objective simulation-optimisation approach that couples off-line rule curve optimisation with on-line real-time optimisation. First, the simulation-optimisation framework is applied for optimising reservoir operating rules. Secondly, real-time and forecast information is used for on-line optimisation that focuses on short-term goals, such as flood control or hydropower generation, without compromising the deviation ... in the downstream part of the Red River, and at the same time to increase hydropower generation and to save water for the dry season. The real-time optimisation procedure further improves the efficiency of the reservoir operation and enhances the flexibility for the decision-making. Finally, the quality ...

  2. Acoustic Resonator Optimisation for Airborne Particle Manipulation

    Science.gov (United States)

    Devendran, Citsabehsan; Billson, Duncan R.; Hutchins, David A.; Alan, Tuncay; Neild, Adrian

    Advances in micro-electromechanical systems (MEMS) technology and biomedical research necessitate micro-machined manipulators to capture, handle and position delicate micron-sized particles. To this end, a parallel plate acoustic resonator system has been investigated for the purposes of manipulation and entrapment of micron-sized particles in air. Numerical and finite element modelling was performed to optimise the design of the layered acoustic resonator. To obtain an optimised resonator design, careful consideration of the effect of thickness and material properties is required. Furthermore, the effect of acoustic attenuation, which is dependent on frequency, is also considered within this study, leading to an optimum operational frequency range. Finally, experimental results demonstrated good levitation and capture of particles with a range of properties and sizes, down to 14.8 μm.

  3. Tracer disposition kinetics in the determination of local cerebral blood flow by a venous equilibrium model, tube model, and distributed model

    International Nuclear Information System (INIS)

    Sawada, Y.; Sugiyama, Y.; Iga, T.; Hanano, M.

    1987-01-01

    Tracer distribution kinetics in the determination of local cerebral blood flow (LCBF) were examined by using three models, i.e., venous equilibrium, tube, and distributed models. The technique most commonly used for measuring LCBF is the tissue uptake method, which was first developed and applied by Kety. The measurement of LCBF with the ¹⁴C-iodoantipyrine (IAP) method is calculated by using an equation derived by Kety based on Fick's principle and a two-compartment model of blood-tissue exchange and tissue concentration at a single data point. The procedure, in which the tissue is to be in equilibrium with venous blood, will be referred to as the tissue equilibration model. In this article, effects of the concentration gradient of tracer along the length of the capillary (tube model) and the transverse heterogeneity in the capillary transit time (distributed model) on the determination of LCBF were theoretically analyzed for the tissue sampling method. Similarities and differences among these models are explored. The rank order of the LCBF calculated by using arterial blood concentration time courses and the tissue concentration of tracer based on each model was tube model (model II) < distributed model (model III) < venous equilibrium model (model I). Data on ¹⁴C-IAP kinetics reported by Ohno et al. were employed. The LCBFs calculated based on model I were 45-260% larger than those in models II or III. To discriminate among the three models, we propose to examine the effect of altering the venous infusion time of tracer on the apparent tissue-to-blood concentration ratio (λ_app). A range of the ratio of the predicted λ_app in models II or III to that in model I was from 0.6 to 1.3
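
    For orientation, the venous-equilibrium (Kety) description referred to above is commonly written as a one-compartment balance between arterial delivery and venous clearance; the notation below (f for LCBF, λ for the tissue-blood partition coefficient, C_a and C_t for the arterial and tissue concentrations) follows the usual textbook convention rather than this record's wording:

    \[
    \frac{dC_t(t)}{dt} = f\,C_a(t) - \frac{f}{\lambda}\,C_t(t)
    \quad\Longrightarrow\quad
    C_t(T) = f\int_0^{T} C_a(t)\,e^{-\frac{f}{\lambda}(T-t)}\,dt ,
    \]

    so a single tissue sample C_t(T), together with the measured arterial curve C_a(t), allows f to be solved for numerically, which is the basis of the tissue-sampling method discussed above.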

  4. The negotiated equilibrium model of spinal cord function.

    Science.gov (United States)

    Wolpaw, Jonathan R

    2018-04-16

    The belief that the spinal cord is hardwired is no longer tenable. Like the rest of the CNS, the spinal cord changes during growth and aging, when new motor behaviours are acquired, and in response to trauma and disease. This paper describes a new model of spinal cord function that reconciles its recently appreciated plasticity with its long recognized reliability as the final common pathway for behaviour. According to this model, the substrate of each motor behaviour comprises brain and spinal plasticity: the plasticity in the brain induces and maintains the plasticity in the spinal cord. Each time a behaviour occurs, the spinal cord provides the brain with performance information that guides changes in the substrate of the behaviour. All the behaviours in the repertoire undergo this process concurrently; each repeatedly induces plasticity to preserve its key features despite the plasticity induced by other behaviours. The aggregate process is a negotiation among the behaviours: they negotiate the properties of the spinal neurons and synapses that they all use. The ongoing negotiation maintains the spinal cord in an equilibrium - a negotiated equilibrium - that serves all the behaviours. This new model of spinal cord function is supported by laboratory and clinical data, makes predictions borne out by experiment, and underlies a new approach to restoring function to people with neuromuscular disorders. Further studies are needed to test its generality, to determine whether it may apply to other CNS areas such as the cerebral cortex, and to develop its therapeutic implications.

  5. Cap-and-Trade Modeling and Analysis: Congested Electricity Market Equilibrium

    Science.gov (United States)

    Limpaitoon, Tanachai

    This dissertation presents an equilibrium framework for analyzing the impact of cap-and-trade regulation on transmission-constrained electricity market. The cap-and-trade regulation of greenhouse gas emissions has gained momentum in the past decade. The impact of the regulation and its efficacy in the electric power industry depend on interactions of demand elasticity, transmission network, market structure, and strategic behavior of firms. I develop an equilibrium model of an oligopoly electricity market in conjunction with a market for tradable emissions permits to study the implications of such interactions. My goal is to identify inefficiencies that may arise from policy design elements and to avoid any unintended adverse consequences on the electric power sector. I demonstrate this modeling framework with three case studies examining the impact of carbon cap-and-trade regulation. In the first case study, I study equilibrium results under various scenarios of resource ownership and emission targets using a 24-bus IEEE electric transmission system. The second and third case studies apply the equilibrium model to a realistic electricity market, Western Electricity Coordinating Council (WECC) 225-bus system with a detailed representation of the California market. In the first and second case studies, I examine oligopoly in electricity with perfect competition in the permit market. I find that under a stringent emission cap and a high degree of concentration of non-polluting firms, the electricity market is subject to potential abuses of market power. Also, market power can occur in the procurement of non-polluting energy through the permit market when non-polluting resources are geographically concentrated in a transmission-constrained market. In the third case study, I relax the competitive market structure assumption of the permit market by allowing oligopolistic competition in the market through a conjectural variation approach. A short-term equilibrium

  6. Efficient topology optimisation of multiscale and multiphysics problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    The aim of this Thesis is to present efficient methods for optimising high-resolution problems of a multiscale and multiphysics nature. The Thesis consists of two parts: one treating topology optimisation of microstructural details and the other treating topology optimisation of conjugate heat...

  7. Numerical optimisation of friction stir welding: review of future challenges

    DEFF Research Database (Denmark)

    Tutum, Cem Celal; Hattel, Jesper Henri

    2011-01-01

    During the last decade, the combination of increasingly more advanced numerical simulation software with high computational power has resulted in models for friction stir welding (FSW), which have improved the understanding of the determining physical phenomena behind the process substantially....... This has made optimisation of certain process parameters possible and has in turn led to better performing friction stir welded products, thus contributing to a general increase in the popularity of the process and its applications. However, most of these optimisation studies do not go well beyond manual...

  8. Methods for Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte

    This thesis deals with the adaptation and implementation of various optimisation methods, in the field of experimental design, for the laser cutting process. The problem of optimising the laser cutting process has been defined and a structure for a Decision Support System (DSS) for the optimisation of the laser cutting process has been suggested. The DSS consists of a database with the currently used and old parameter settings. One of the optimisation methods has also been implemented in the DSS in order to facilitate the optimisation procedure for the laser operator. The Simplex Method has been adapted in two versions: a qualitative one, which optimises the process by comparing the laser-cut items, and a quantitative one, which uses a weighted quality response to achieve a satisfactory quality and then maximises the cutting speed, thus increasing the productivity of the process...

  9. Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation

    Directory of Open Access Journals (Sweden)

    Benjamin Scellier

    2017-05-01

    Full Text Available We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST
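
    As a compact statement of the two-phase update described above (using the notation of the published algorithm, with total energy F = E + βC, a free fixed point s⁰ and a weakly clamped fixed point s^β), the parameter update contrasts the two phases:

    \[
    \Delta\theta \;\propto\; -\frac{1}{\beta}\left(
    \frac{\partial F}{\partial \theta}\bigg|_{s^{\beta}}
    - \frac{\partial F}{\partial \theta}\bigg|_{s^{0}}\right),
    \]

    which, for Hopfield-type energies, reduces to a local contrast of pre- and post-synaptic activity products between the nudged and free phases. This is a summary sketch of the published rule, not a full derivation.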

  10. Pharmaceutical industry and trade liberalization using computable general equilibrium model.

    Science.gov (United States)

    Barouni, M; Ghaderi, H; Banouei, Aa

    2012-01-01

    Computable general equilibrium (CGE) models are known as a powerful instrument in economic analyses and have been widely used to evaluate trade liberalization effects. The purpose of this study was to assess the impacts of trade openness on the pharmaceutical industry using a CGE model. Using a computable general equilibrium model, the effects of tariff reductions, as a proxy for trade liberalization, on key variables of Iranian pharmaceutical products were studied. Simulation was performed via two scenarios. The first scenario was the effect of reducing tariffs on pharmaceutical products by 10, 30, 50, and 100 percent on key drug variables, and the second was the effect of tariff reductions in sectors other than pharmaceutical products on vital and economic variables of pharmaceutical products. The required data were obtained and the model parameters were calibrated according to the social accounting matrix of Iran in 2006. The simulation results demonstrated that the first scenario increased imports, exports, drug supply to markets and household consumption, while imports, exports, supply of products to the market, and household consumption of pharmaceutical products would on average decrease in the second scenario. Ultimately, social welfare would improve in all scenarios. We present and synthesize a CGE model that could be used to analyze trade liberalization policy issues in developing countries (like Iran), and thus provide information that policymakers can use to improve pharmaceutical economics.

  11. Monte Carlo modeling of Lead-Cooled Fast Reactor in adiabatic equilibrium state

    Energy Technology Data Exchange (ETDEWEB)

    Stanisz, Przemysław, E-mail: pstanisz@agh.edu.pl; Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2016-05-15

    Graphical abstract: - Highlights: • We present the Monte Carlo modeling of the LFR in the adiabatic equilibrium state. • We assess the adiabatic equilibrium fuel composition using the MCB code. • We define the self-adjusting process of breeding gain by the control rod operation. • The designed LFR can work in the adiabatic cycle with zero fuel breeding. - Abstract: Nuclear power would appear to be the only energy source able to satisfy the global energy demand while also achieving a significant reduction of greenhouse gas emissions. Moreover, it can provide a stable and secure source of electricity, and plays an important role in many European countries. However, nuclear power generation from its birth has been doomed by the legacy of radioactive nuclear waste. In addition, the looming decrease in the available resources of fissile U235 may influence the future sustainability of nuclear energy. The integrated solution to both problems is not trivial, and postulates the introduction of a closed-fuel cycle strategy based on breeder reactors. The perfect choice of a novel reactor system fulfilling both requirements is the Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state. In such a state, the reactor converts depleted or natural uranium into plutonium while consuming any self-generated minor actinides and transferring only fission products as waste. We present the preliminary design of a Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state with the Monte Carlo Continuous Energy Burnup Code – MCB. As a reference reactor model we apply the core design developed initially under the framework of the European Lead-cooled SYstem (ELSY) project and refined in the follow-up Lead-cooled European Advanced DEmonstration Reactor (LEADER) project. The major objective of the study is to show to what extent the constraints of the adiabatic cycle are maintained and to indicate the phase space for further improvements. The analysis

  12. Optimising Transport Decision Making using Customised Decision Models and Decision Conferences

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn

    The subject of this Ph.D. thesis entitled “Optimising Transport Decision Making using Customised Decision Models and Decision Conferences” is multi-criteria decision analysis (MCDA) and decision support in the context of transport infrastructure assessments. Despite the fact that large amounts...... is concerned with the insufficiency of conventional cost-benefit analysis (CBA), and proposes the use of MCDA as a supplementing tool in order to also capture impacts of a more strategic character in the appraisals and hence make more use of the often large efforts put in the preliminary examinations. MCDA...... and rail to bike transport projects. Two major concerns have been to propose an examination process that can be used in situations where complex decision problems need to be addressed by experts as well as non-experts in decision making, and to identify appropriate assessment techniques to be used...

  13. An optimisation approach for capacity planning: modelling insights and empirical findings from a tactical perspective

    Directory of Open Access Journals (Sweden)

    Andréa Nunes Carvalho

    2017-09-01

    Full Text Available Abstract The academic literature presents a research-practice gap on the application of decision support tools to address tactical planning problems in real-world organisations. This paper addresses this gap and extends a previous action research relative to an optimisation model applied for tactical capacity planning in an engineer-to-order industrial setting. The issues discussed herein raise new insights to better understand the practical results that can be achieved through the proposed model. The topics presented include the modelling of objectives, the representation of the production process and the costing approach, as well as findings regarding managerial decisions and the scope of action considered. These insights may inspire ideas to academics and practitioners when developing tools for capacity planning problems in similar contexts.

  14. Ageing in the trap model as a relaxation further away from equilibrium

    International Nuclear Information System (INIS)

    Bertin, Eric

    2013-01-01

    The ageing regime of the trap model, observed for a temperature T below the glass transition temperature T_g, is a prototypical example of non-stationary out-of-equilibrium state. We characterize this state by evaluating its ‘distance to equilibrium’, defined as the Shannon entropy difference ΔS (in absolute value) between the non-equilibrium state and the equilibrium state with the same energy. We consider the time evolution of ΔS and show that, rather unexpectedly, ΔS(t) continuously increases in the ageing regime, if the number of traps is infinite, meaning that the ‘distance to equilibrium’ increases instead of decreasing in the relaxation process. For a finite number N of traps, ΔS(t) exhibits a maximum value before eventually converging to zero when equilibrium is reached. The time t* at which the maximum is reached however scales in a non-standard way as $t^* \sim N^{T_g/2T}$, while the equilibration time scales as $\tau_{eq} \sim N^{T_g/T}$. In addition, the curves ΔS(t) for different N are found to rescale as ln t/ln t*, instead of the more familiar scaling t/t*. (paper)

  15. International nuclear model and code comparison on pre-equilibrium effects

    International Nuclear Information System (INIS)

    Gruppelaar, H.; van der Kamp, H.A.J.; Nagel, P.

    1983-01-01

    This paper gives the specification of an intercomparison of statistical nuclear models and codes with emphasis on pre-equilibrium effects. It is partly based upon the conclusions of a meeting of an ad-hoc working group on this subject. The parameters studied are: masses, Q values, level scheme data, optical model parameters, X-ray competition parameters, total level-density specifications, for ⁸⁶Rb, ⁸⁹Sr, ⁹⁰Y, ⁹²Y, ⁹²Zr, ⁹³Zr, ⁸⁹Y, ⁹¹Nb, ⁹²Nb and ⁹³Nb

  16. Isotope effects in the equilibrium and non-equilibrium vaporization of tritiated water and ice

    International Nuclear Information System (INIS)

    Baumgaertner, F.; Kim, M.-A.

    1990-01-01

    The vaporization isotope effect of the HTO/H₂O system has been measured at various temperatures and pressures under equilibrium as well as non-equilibrium conditions. The isotope effect values measured in equilibrium sublimation or distillation are in good agreement with the theoretical values based on the harmonic oscillator model. In non-equilibrium vaporization at low temperatures (< 0 °C), the isotope effect decreases rapidly with decreasing system pressure and becomes negligible when the system pressure is lowered to below one tenth of the equilibrium vapor pressure. At higher temperatures, the isotope effect decreases very slowly with decreasing system pressure. Discussion is extended for the application of the present results to the study of biological enrichment of tritium. (author)

  17. Burnup effect on nuclear fuel cycle cost using an equilibrium model

    International Nuclear Information System (INIS)

    Youn, S. R.; Kim, S. K.; Ko, W. I.

    2014-01-01

    The degree of fuel burnup is an important technical parameter of the nuclear fuel cycle: increasing it reduces the total volume of process-flow materials and eventually cuts nuclear fuel cycle costs. This paper performs a sensitivity analysis of total nuclear fuel cycle costs to changes in this technical parameter by varying the degree of burnup in each of three nuclear fuel cycles using an equilibrium model. Given its importance, the burnup effect was used as the technical parameter among the cost drivers of the fuel cycle. The fuel cycle options analyzed in this paper are the following three: PWR Once-Through Cycle (PWR-OT), PWR-MOX Recycle, and Pyro-SFR Recycle. These fuel cycles are the most likely to be adopted in the foreseeable future. As a result of the sensitivity analysis of the burnup effect on each of the three nuclear fuel cycle costs, the PWR-MOX cycle turned out to be the most influenced by burnup changes, followed by the Pyro-SFR and then the PWR-OT cycle. In conclusion, the degree of burnup in the three nuclear fuel cycles can act as the controlling driver of nuclear fuel cycle costs through a reduction in the volume of spent fuel, leading to better availability and capacity factors. However, the equilibrium model used in this paper has the limitation that time-dependent material flow and cost calculations are impossible. Hence, a comparative analysis of results calculated by a dynamic model and those obtained with the equilibrium model should be performed. Moving forward to the foreseeable future with increasing burnups, further studies regarding alternative high-corrosion-resistance fuel cladding materials for the overall

  18. DAE Tools: equation-based object-oriented modelling, simulation and optimisation software

    Directory of Open Access Journals (Sweden)

    Dragan D. Nikolić

    2016-04-01

    Full Text Available In this work, DAE Tools modelling, simulation and optimisation software, its programming paradigms and main features are presented. The current approaches to mathematical modelling such as the use of modelling languages and general-purpose programming languages are analysed. The common set of capabilities required by the typical simulation software are discussed, and the shortcomings of the current approaches recognised. A new hybrid approach is introduced, and the modelling languages and the hybrid approach are compared in terms of the grammar, compiler, parser and interpreter requirements, maintainability and portability. The most important characteristics of the new approach are discussed, such as: (1) support for the runtime model generation; (2) support for the runtime simulation set-up; (3) support for complex runtime operating procedures; (4) interoperability with the third party software packages (i.e. NumPy/SciPy); (5) suitability for embedding and use as a web application or software as a service; and (6) code-generation, model exchange and co-simulation capabilities. The benefits of an equation-based approach to modelling, implemented in a fourth generation object-oriented general purpose programming language such as Python are discussed. The architecture and the software implementation details as well as the type of problems that can be solved using DAE Tools software are described. Finally, some applications of the software at different levels of abstraction are presented, and its embedding capabilities and suitability for use as a software as a service is demonstrated.

  19. Extending Particle Swarm Optimisers with Self-Organized Criticality

    DEFF Research Database (Denmark)

    Løvbjerg, Morten; Krink, Thiemo

    2002-01-01

    Particle swarm optimisers (PSOs) show potential in function optimisation, but still have room for improvement. Self-organized criticality (SOC) can help control the PSO and add diversity. Extending the PSO with SOC seems promising, reaching faster convergence and better solutions.

  20. Utility systems operation: Optimisation-based decision making

    International Nuclear Information System (INIS)

    Velasco-Garcia, Patricia; Varbanov, Petar Sabev; Arellano-Garcia, Harvey; Wozny, Guenter

    2011-01-01

    Utility systems provide heat and power to industrial sites. The importance of operating these systems in an optimal way has increased significantly due to the unstable and in the long term rising prices of fossil fuels as well as the need for reducing the greenhouse gas emissions. This paper presents an analysis of the problem for supporting operator decision making under conditions of variable steam demands from the production processes on an industrial site. An optimisation model has been developed, where besides for running the utility system, also the costs associated with starting-up the operating units have been modelled. The illustrative case study shows that accounting for the shut-downs and start-ups of utility operating units can bring significant cost savings. - Highlights: → Optimisation methodology for decision making on running utility systems. → Accounting for varying steam demands. → Optimal operating specifications when a demand change occurs. → Operating costs include start-up costs of boilers and other units. → Validated on a real-life case study. Up to 20% cost savings are possible.

  1. KEMOD: A mixed chemical kinetic and equilibrium model of aqueous and solid phase geochemical reactions

    International Nuclear Information System (INIS)

    Yeh, G.T.; Iskra, G.A.

    1995-01-01

    This report presents the development of a mixed chemical Kinetic and Equilibrium MODel in which every chemical species can be treated either as an equilibrium-controlled or as a kinetically controlled reaction. The reaction processes include aqueous complexation, adsorption/desorption, ion exchange, precipitation/dissolution, oxidation/reduction, and acid/base reactions. Further development and modification of KEMOD can be made in: (1) inclusion of species switching solution algorithms, (2) incorporation of the effect of temperature and pressure on equilibrium and rate constants, and (3) extension to high ionic strength

  2. The optimisation of wedge filters in radiotherapy of the prostate

    International Nuclear Information System (INIS)

    Oldham, Mark; Neal, Anthony J.; Webb, Steve

    1995-01-01

    A treatment plan optimisation algorithm has been applied to 12 patients with early prostate cancer in order to determine the optimum beam-weights and wedge angles for a standard conformal three-field treatment technique. The optimisation algorithm was based on fast-simulated-annealing using a cost function designed to achieve a uniform dose in the planning-target-volume (PTV) and to minimise the integral doses to the organs-at-risk. The algorithm has been applied to standard conformal three-field plans created by an experienced human planner, and run in three PLAN MODES: (1) where the wedge angles were fixed by the human planner and only the beam-weights were optimised; (2) where both the wedge angles and beam-weights were optimised; and (3) where both the wedge angles and beam-weights were optimised and a non-uniform dose was prescribed to the PTV. In the latter PLAN MODE, a uniform 100% dose was prescribed to all of the PTV except for that region that overlaps with the rectum where a lower (e.g., 90%) dose was prescribed. The resulting optimised plans have been compared with those of the human planner who found beam-weights by conventional forward planning techniques. Plans were compared on the basis of dose statistics, normal-tissue-complication-probability (NTCP) and tumour-control-probability (TCP). The results of the comparison showed that all three PLAN MODES produced plans with slightly higher TCP for the same rectal NTCP, than the human planner. The best results were observed for PLAN MODE 3, where an average increase in TCP of 0.73% (± 0.20, 95% confidence interval) was predicted by the biological models. This increase arises from a beneficial dose gradient which is produced across the tumour. Although the TCP gain is small it comes with no increase in treatment complexity, and could translate into increased cures given the large numbers of patients being referred. A study of the beam-weights and wedge angles chosen by the optimisation algorithm revealed
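
    To make the fast-simulated-annealing idea concrete, the sketch below anneals three beam weights against a quadratic cost that penalises PTV dose non-uniformity and importance-weighted organ-at-risk dose. The dose matrices, importance factors, cooling schedule and omission of wedge angles are illustrative simplifications, not the study's data or exact cost function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dose-deposition matrices: dose per unit beam weight in PTV / rectum voxels
D_ptv = rng.uniform(0.5, 1.0, size=(200, 3))   # 200 PTV voxels, 3 beams
D_oar = rng.uniform(0.0, 0.4, size=(100, 3))   # 100 rectum voxels

def cost(w, imp_ptv=1.0, imp_oar=0.3, prescribed=1.0):
    """Quadratic PTV-uniformity term plus importance-weighted OAR term."""
    ptv_dose = D_ptv @ w
    oar_dose = D_oar @ w
    return (imp_ptv * np.mean((ptv_dose - prescribed) ** 2)
            + imp_oar * np.mean(oar_dose ** 2))

w = np.full(3, 0.5)                     # starting beam weights
best_w, best_c = w.copy(), cost(w)
T = 1.0
for _ in range(5000):
    trial = np.clip(w + rng.normal(0, 0.05, size=3), 0.0, None)
    dc = cost(trial) - cost(w)
    if dc < 0 or rng.random() < np.exp(-dc / T):   # Metropolis acceptance
        w = trial
    if cost(w) < best_c:
        best_w, best_c = w.copy(), cost(w)
    T *= 0.999                                     # geometric cooling
print("optimised beam weights:", np.round(best_w, 3))
```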

  3. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    Science.gov (United States)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools, which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the
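
    A minimal sketch of the selection problem described above is given below: a binary genetic search over which wells to keep, scored by the 2-norm difference between the full-network map and the reduced-network map. Inverse-distance weighting stands in for Ordinary Kriging with the Spartan variogram, and the synthetic well data, the target of 50 retained wells and the GA settings are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 70 monitoring wells and a mapping grid
wells = rng.uniform(0, 10, size=(70, 2))
levels = 50 + 3 * wells[:, 0] - 2 * wells[:, 1] + rng.normal(0, 0.5, 70)
gx, gy = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])

def idw_map(mask):
    """Inverse-distance interpolation as a cheap stand-in for Ordinary Kriging."""
    pts, vals = wells[mask], levels[mask]
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2) + 1e-6
    w = 1.0 / d ** 2
    return (w @ vals) / w.sum(axis=1)

full_map = idw_map(np.ones(70, dtype=bool))

def error(mask):
    # 2-norm of the difference between full-network and reduced-network maps
    return np.linalg.norm(full_map - idw_map(mask))

def random_mask():
    m = np.zeros(70, dtype=bool)
    m[rng.choice(70, 50, replace=False)] = True
    return m

# Simple genetic algorithm over binary keep/remove vectors (keep exactly 50 wells)
pop = [random_mask() for _ in range(30)]
for _ in range(40):
    pop.sort(key=error)
    parents, children = pop[:10], []
    while len(children) < 20:
        a, b = rng.choice(10, 2, replace=False)
        child = np.where(rng.random(70) < 0.5, parents[a], parents[b])  # uniform crossover
        kept, dropped = np.flatnonzero(child), np.flatnonzero(~child)
        if len(kept) > 50:       # repair to exactly 50 kept wells
            child[rng.choice(kept, len(kept) - 50, replace=False)] = False
        elif len(kept) < 50:
            child[rng.choice(dropped, 50 - len(kept), replace=False)] = True
        children.append(child)
    pop = parents + children
print("best reduced-network mapping error:", round(float(error(min(pop, key=error))), 3))
```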

  4. Process and Economic Optimisation of a Milk Processing Plant with Solar Thermal Energy

    DEFF Research Database (Denmark)

    Bühler, Fabian; Nguyen, Tuong-Van; Elmegaard, Brian

    2016-01-01

    This work investigates the integration of solar thermal systems for process energy use. A shift from fossil fuels to renewable energy could be beneficial both from environmental and economic perspectives, after the process itself has been optimised and efficiency measures have been implemented. Based on the case study of a dairy factory, where first a heat integration is performed to optimise the system, a model for solar thermal process integration is developed. The detailed model is based on annual hourly global direct and diffuse solar radiation, from which the radiation on a defined surface is calculated. Based on hourly process stream data from the dairy factory, the optimal streams for solar thermal process integration are found, with an optimal thermal storage tank volume. The last step consists of an economic optimisation of the problem to determine the optimal size...

  5. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    Science.gov (United States)

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
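
    The cross-swarm guidance mechanism described above can be sketched in a few lines: each swarm minimises one objective, but its velocity update is pulled toward the best position found by the other swarm. The bi-objective test functions, swarm sizes and PSO coefficients below are illustrative assumptions; the improved variant discussed in the abstract would replace the other swarm's best with a solution drawn from a nondominated archive.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two objectives of a simple bi-objective test problem (Schaffer-style)
f1 = lambda x: float(x[0] ** 2)
f2 = lambda x: float((x[0] - 2) ** 2)
objs = [f1, f2]

n, dim, iters = 20, 1, 100
inertia, c1, c2 = 0.6, 1.5, 1.5

# One swarm per objective
pos = [rng.uniform(-4, 4, (n, dim)) for _ in range(2)]
vel = [np.zeros((n, dim)) for _ in range(2)]
pbest = [p.copy() for p in pos]
pbest_val = [np.array([objs[s](x) for x in pos[s]]) for s in range(2)]
gbest = [pos[s][np.argmin(pbest_val[s])].copy() for s in range(2)]

for _ in range(iters):
    for s in range(2):
        other = 1 - s
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # Movement guided by the best solution found by the *other* swarm
        vel[s] = (inertia * vel[s]
                  + c1 * r1 * (pbest[s] - pos[s])
                  + c2 * r2 * (gbest[other] - pos[s]))
        pos[s] = pos[s] + vel[s]
        vals = np.array([objs[s](x) for x in pos[s]])
        improved = vals < pbest_val[s]
        pbest[s][improved] = pos[s][improved]
        pbest_val[s][improved] = vals[improved]
        gbest[s] = pbest[s][np.argmin(pbest_val[s])].copy()

print("swarm-1 best (min f1):", gbest[0], " swarm-2 best (min f2):", gbest[1])
```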

  6. The equilibrium response to doubling atmospheric CO2

    International Nuclear Information System (INIS)

    Mitchell, J.F.B.

    1990-01-01

    The equilibrium response of climate to increased atmospheric carbon dioxide as simulated by general circulation models is assessed. Changes that are physically plausible are summarized, along with an indication of the confidence attributable to those changes. The main areas of uncertainty are highlighted. They include: equilibrium experiments with mixed-layer oceans focusing on temperature, precipitation, and soil moisture; equilibrium studies with dynamical ocean-atmosphere models; results deduced from equilibrium CO₂ experiments; and priorities for future research to improve atmosphere models

  7. Statistical Optimisation of Fermentation Conditions for Citric Acid ...

    African Journals Online (AJOL)

    This study investigated the optimisation of fermentation conditions during citric acid production via solid state fermentation (SSF) of pineapple peels using Aspergillus niger. A three-variable, three-level Box-Behnken design (BBD) comprising 17 experimental runs was used to develop a statistical model for the fermentation ...

  8. Layout Optimisation of Wave Energy Converter Arrays

    Directory of Open Access Journals (Sweden)

    Pau Mercadé Ruiz

    2017-08-01

    Full Text Available This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA, a genetic algorithm (GA and the glowworm swarm optimisation (GSO algorithm. The results show slightly higher performances for the latter two algorithms; however, the first turns out to be significantly less computationally demanding.

  9. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    Energy Technology Data Exchange (ETDEWEB)

    Thornton, Peter E [ORNL; Wang, Weile [ORNL; Law, Beverly E. [Oregon State University; Nemani, Ramakrishna R [NASA Ames Research Center

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers by a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning-up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis; and the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrate/analyze Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
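
    The equilibrium-analysis idea can be illustrated with a single first-order pool (a deliberate simplification of the actual Biome-BGC pool structure): setting the mass balance to zero gives the steady-state stock directly,

    \[
    \frac{dC}{dt} = I - kC = 0
    \quad\Longrightarrow\quad
    C_{\mathrm{ss}} = \frac{I}{k},
    \]

    so steady-state stocks follow from the input flux I and turnover rate k supplied by the lower-tier simulation, without a multi-millennial spin-up run.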

  10. Ginsburg criterion for an equilibrium superradiant model in the dynamic approach

    International Nuclear Information System (INIS)

    Trache, M.

    1991-10-01

    Some critical properties of an equilibrium superradiant model are discussed, taking into account the quantum fluctuations of the field variables. The critical region is calculated using the Ginsburg criterion, underlining the role of the atomic concentration as a control parameter of the phase transition. (author). 16 refs, 1 fig

  11. Coenzyme B12 model studies: Equilibrium constants for the pH ...

    Indian Academy of Sciences (India)

    Coenzyme B12 model studies: Equilibrium constants for the pH-dependent axial ligation of benzyl(aquo)cobaloxime by various N- and S-donor ligands. D Sudarshan Reddy, N Ravi Kumar Reddy, V Sridhar, S Satyanarayana. Inorganic and Analytical ...

  12. Stochastic Optimisation of Battery System Operation Strategy under different Utility Tariff Structures

    OpenAIRE

    Erdal, Jørgen Sørgård

    2017-01-01

    This master thesis develops a stochastic optimisation software for household grid-connected batteries combined with PV-systems. The objective of the optimisation is to operate the battery system in order to minimise the costs of the consumer, and it was implemented in MATLAB using a self-written stochastic dynamic programming algorithm. Load was considered as a stochastic variable and modelled as a Markov Chain. Transition probabilities between time steps were calculated using historic load p...
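
    A minimal sketch of the backward dynamic-programming recursion implied above is shown below, with the household load modelled as a two-state Markov chain. The prices, load states, transition matrix, PV output and battery limits are invented for illustration; this is a toy Python analogue, not the thesis' MATLAB implementation.

```python
import numpy as np

# Illustrative data: 24 hourly steps, two Markov load states
price = np.array([0.2] * 7 + [0.5] * 10 + [0.3] * 7)      # cost per kWh each hour
load_states = np.array([0.5, 2.0])                         # kWh demand in each state
P = np.array([[0.8, 0.2],                                  # load-state transition matrix
              [0.3, 0.7]])
pv = 1.0                                                    # flat PV production, kWh/h
soc_grid = np.linspace(0.0, 5.0, 11)                        # battery state of charge, kWh
actions = np.linspace(-1.0, 1.0, 9)                         # charge (+) / discharge (-) kWh

T = len(price)
V = np.zeros((T + 1, len(soc_grid), len(load_states)))      # value function, terminal = 0

for t in range(T - 1, -1, -1):                               # backward recursion
    for i, soc in enumerate(soc_grid):
        for s, demand in enumerate(load_states):
            best = np.inf
            for a in actions:
                soc_next = soc + a
                if not (0.0 <= soc_next <= soc_grid[-1]):
                    continue
                grid_import = max(demand + a - pv, 0.0)     # energy bought this hour
                stage_cost = price[t] * grid_import
                j = np.argmin(np.abs(soc_grid - soc_next))  # nearest-grid interpolation
                expected = P[s] @ V[t + 1, j]                # expectation over next load state
                best = min(best, stage_cost + expected)
            V[t, i, s] = best

print("expected minimum cost from empty battery, low-load state:", round(float(V[0, 0, 0]), 2))
```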

  13. Probabilistic sensitivity analysis of optimised preventive maintenance strategies for deteriorating infrastructure assets

    International Nuclear Information System (INIS)

    Daneshkhah, A.; Stocks, N.G.; Jeffrey, P.

    2017-01-01

    Efficient life-cycle management of civil infrastructure systems under continuous deterioration can be improved by studying the sensitivity of optimised preventive maintenance decisions with respect to changes in model parameters. Sensitivity analysis in maintenance optimisation problems is important because if the calculation of the cost of preventive maintenance strategies is not sufficiently robust, the use of the maintenance model can generate optimised maintenances strategies that are not cost-effective. Probabilistic sensitivity analysis methods (particularly variance based ones), only partially respond to this issue and their use is limited to evaluating the extent to which uncertainty in each input contributes to the overall output's variance. These methods do not take account of the decision-making problem in a straightforward manner. To address this issue, we use the concept of the Expected Value of Perfect Information (EVPI) to perform decision-informed sensitivity analysis: to identify the key parameters of the problem and quantify the value of learning about certain aspects of the life-cycle management of civil infrastructure system. This approach allows us to quantify the benefits of the maintenance strategies in terms of expected costs and in the light of accumulated information about the model parameters and aspects of the system, such as the ageing process. We use a Gamma process model to represent the uncertainty associated with asset deterioration, illustrating the use of EVPI to perform sensitivity analysis on the optimisation problem for age-based and condition-based preventive maintenance strategies. The evaluation of EVPI indices is computationally demanding and Markov Chain Monte Carlo techniques would not be helpful. To overcome this computational difficulty, we approximate the EVPI indices using Gaussian process emulators. The implications of the worked numerical examples discussed in the context of analytical efficiency and organisational
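
    The EVPI concept used above can be illustrated with a small Monte Carlo sketch: compare the expected cost of the best fixed maintenance interval under current uncertainty with the expected cost achievable if the deterioration rate were known before deciding. The cost model, prior and numbers below are illustrative assumptions, not the paper's Gamma-process formulation, and the Gaussian-process emulation step is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Uncertain deterioration rate (illustrative prior, not the paper's calibrated model)
theta = rng.gamma(shape=2.0, scale=0.5, size=100_000)

def expected_cost(interval, rate):
    """Stylised life-cycle cost: frequent maintenance costs more up front,
    longer intervals risk higher failure cost as deterioration accumulates."""
    maintenance = 10.0 / interval
    failure_risk = 50.0 * (1 - np.exp(-rate * interval))
    return maintenance + failure_risk

intervals = np.linspace(0.5, 10.0, 40)                           # candidate intervals (years)
cost = np.array([expected_cost(d, theta) for d in intervals])    # shape (decisions, samples)

best_fixed = cost.mean(axis=1).min()       # best decision under current uncertainty
best_adaptive = cost.min(axis=0).mean()    # expected cost with perfect information
print("EVPI =", round(float(best_fixed - best_adaptive), 3))
```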

  14. Approach to chemical equilibrium in thermal models

    International Nuclear Information System (INIS)

    Boal, D.H.

    1984-01-01

    The experimentally measured (μ⁻, charged particle)/(μ⁻, n) and (p,n)/(p,p′) ratios for the emission of energetic nucleons are used to estimate the time evolution of a system of secondary nucleons produced in a direct interaction of a projectile or captured muon. The values of these ratios indicate that chemical equilibrium is not achieved among the secondary nucleons in noncomposite induced reactions, and this restricts the time scale for the emission of energetic nucleons to be about 0.7 × 10⁻²³ sec. It is shown that the reason why thermal equilibrium can be reached so rapidly for a particular nucleon species is that the sum of the particle spectra produced in multiple direct reactions looks surprisingly thermal. The rate equations used to estimate the reaction times for muon and nucleon induced reactions are then applied to heavy ion collisions, and it is shown that chemical equilibrium can be reached more rapidly, as one would expect

  15. Production optimisation in the petrochemical industry by hierarchical multivariate modelling. Phase 2: On-line implementation

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Aasa; Persson, Fredrik; Andersson, Magnus

    2009-07-15

    IVL, together with Emerson Process Management, has developed a decision support system (DSS) based on multivariate statistical process models. The system was implemented at Nynas AB's refinery in order to provide real-time TBP curves and to enable the operator to optimise the process with regard to product quality and energy consumption. The project resulted in the following proven benefits at the industrial reference site, Nynas Refinery in Gothenburg: - Increased yield by up to 14% (in relative terms) for the most valuable product - Decreased energy consumption by 8%. Validation of the model predictions against laboratory analysis showed that the prediction error lay within 1 °C throughout the whole test period

  16. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    Science.gov (United States)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.

  17. A comparison of forward planning and optimised inverse planning

    International Nuclear Information System (INIS)

    Oldham, Mark; Neal, Anthony; Webb, Steve

    1995-01-01

    A radiotherapy treatment plan optimisation algorithm has been applied to 48 prostate plans and the results compared with those of an experienced human planner. Twelve patients were used in the study, and a 3, 4, 6 and 8 field plan (with standard coplanar beam angles for each plan type) were optimised by both the human planner and the optimisation algorithm. The human planner 'optimised' the plan by conventional forward planning techniques. The optimisation algorithm was based on fast-simulated-annealing. 'Importance factors' assigned to different regions of the patient provide a method for controlling the algorithm, and it was found that the same values gave good results for almost all plans. The plans were compared on the basis of dose statistics and normal-tissue-complication-probability (NTCP) and tumour-control-probability (TCP). The results show that the optimisation algorithm yielded results that were at least as good as those of the human planner for all plan types, and on the whole slightly better. A study of the beam-weights chosen by the optimisation algorithm and the planner will be presented. The optimisation algorithm showed greater variation in response to individual patient geometry. For simple (e.g. 3 field) plans it was found to consistently achieve slightly higher TCP and lower NTCP values. For more complicated (e.g. 8 fields) plans the optimisation also achieved slightly better results, generally with fewer beams. The optimisation time was always ≤5 minutes; a factor of up to 20 times faster than the human planner

  18. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Science.gov (United States)

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-01

    This paper opens a new door to macroscopic modeling for thermal and chemical non-equilibrium. In a game-changing approach, we discard conventional theories and practices stemming from the separation of internal energy modes and the Landau-Teller relaxation equation. Instead, we solve the fundamental microscopic equations in their moment forms but seek only optimum representations for the microscopic state distribution function that provides converged and time accurate solutions for certain macroscopic quantities at all times. The modeling makes no ad hoc assumptions or simplifications at the microscopic level and includes all possible collisional and radiative processes; it therefore retains all non-equilibrium fluid physics. We formulate the thermal and chemical non-equilibrium macroscopic equations and rate coefficients in a coupled and unified fashion for gases undergoing completely general transitions. All collisional partners can have internal structures and can change their internal energy states after transitions. The model is based on the reconstruction of the state distribution function. The internal energy space is subdivided into multiple groups in order to better describe non-equilibrium state distributions. The logarithm of the distribution function in each group is expressed as a power series in internal energy based on the maximum entropy principle. The method of weighted residuals is applied to the microscopic equations to obtain macroscopic moment equations and rate coefficients succinctly to any order. The model's accuracy depends only on the assumed expression of the state distribution function and the number of groups used and can be self-checked for accuracy and convergence. We show that the macroscopic internal energy transfer, similar to mass and momentum transfers, occurs through nonlinear collisional processes and is not a simple relaxation process described by, e.g., the Landau-Teller equation. Unlike the classical vibrational energy

  19. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures.

    Science.gov (United States)

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-07

    This paper opens a new door to macroscopic modeling for thermal and chemical non-equilibrium. In a game-changing approach, we discard conventional theories and practices stemming from the separation of internal energy modes and the Landau-Teller relaxation equation. Instead, we solve the fundamental microscopic equations in their moment forms but seek only optimum representations for the microscopic state distribution function that provides converged and time accurate solutions for certain macroscopic quantities at all times. The modeling makes no ad hoc assumptions or simplifications at the microscopic level and includes all possible collisional and radiative processes; it therefore retains all non-equilibrium fluid physics. We formulate the thermal and chemical non-equilibrium macroscopic equations and rate coefficients in a coupled and unified fashion for gases undergoing completely general transitions. All collisional partners can have internal structures and can change their internal energy states after transitions. The model is based on the reconstruction of the state distribution function. The internal energy space is subdivided into multiple groups in order to better describe non-equilibrium state distributions. The logarithm of the distribution function in each group is expressed as a power series in internal energy based on the maximum entropy principle. The method of weighted residuals is applied to the microscopic equations to obtain macroscopic moment equations and rate coefficients succinctly to any order. The model's accuracy depends only on the assumed expression of the state distribution function and the number of groups used and can be self-checked for accuracy and convergence. We show that the macroscopic internal energy transfer, similar to mass and momentum transfers, occurs through nonlinear collisional processes and is not a simple relaxation process described by, e.g., the Landau-Teller equation. Unlike the classical vibrational energy

  20. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    Directory of Open Access Journals (Sweden)

    Kian Sheng Lim

    2013-01-01

    Full Text Available The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

  1. Comments on equilibrium, transient equilibrium, and secular equilibrium in serial radioactive decay

    International Nuclear Information System (INIS)

    Prince, J.R.

    1979-01-01

    Equations describing serial radioactive decay are reviewed along with published descriptions of transient and secular equilibrium. It is shown that terms describing equilibrium are not used in the same way by various authors. Specific definitions are proposed; they suggest that secular equilibrium is a subset of transient equilibrium
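
    For reference, the standard two-member Bateman result behind these definitions (daughter activity A₂ for a parent of initial activity A₁(0), with decay constants λ₁ < λ₂) is

    \[
    A_2(t) = A_1(0)\,\frac{\lambda_2}{\lambda_2-\lambda_1}
    \left(e^{-\lambda_1 t}-e^{-\lambda_2 t}\right),
    \qquad
    \frac{A_2(t)}{A_1(t)} \xrightarrow[t\to\infty]{} \frac{\lambda_2}{\lambda_2-\lambda_1},
    \]

    so in transient equilibrium the activity ratio approaches the constant λ₂/(λ₂−λ₁), while secular equilibrium is the limiting case λ₁ ≪ λ₂ in which the ratio tends to one; how these limiting cases are named is precisely the usage the comment above examines.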

  2. Feeder Type Optimisation for the Plain Flow Discharge Process of an Underground Hopper by Discrete Element Modelling

    Directory of Open Access Journals (Sweden)

    Jan Nečas

    2017-09-01

    Full Text Available This paper describes the optimisation of a conveyor from an underground hopper intended for a coal transfer station. The original solution, designed with a chain conveyor, encountered operational problems that limited its continuous operation. Discrete Element Modelling (DEM) was chosen to optimise the transport. DEM simulations allow device design modifications directly in the 3D CAD model, and the simulation then makes it possible to evaluate whether the adjustment was successful. By simulating the initial state of coal extraction using a chain conveyor, trouble spots that caused operational failures were identified. The main problem was the increased resistance during removal of material from the underground hopper. The revealed resistances against material movement were not considered in the original design at all. In the next step, structural modifications of the problematic nodes were made, for example a reduction of the storage space and the installation of passive elements in the interior of the underground hopper. These modifications were not effective enough, so the conveyor type was changed from a drag chain conveyor to a belt conveyor. The simulation of material extraction using a belt conveyor showed a significant reduction in resistance parameters while maintaining the required transport performance.

  3. Computer program to solve two-dimensional shock-wave interference problems with an equilibrium chemically reacting air model

    Science.gov (United States)

    Glass, Christopher E.

    1990-08-01

    The computer program EASI, an acronym for Equilibrium Air Shock Interference, was developed to calculate the inviscid flowfield, the maximum surface pressure, and the maximum heat flux produced by six shock wave interference patterns on a 2-D, cylindrical configuration. Thermodynamic properties of the inviscid flowfield are determined using either an 11-species, 7-reaction equilibrium chemically reacting air model or a calorically perfect air model. The inviscid flowfield is solved using the integral form of the conservation equations. Surface heating calculations at the impingement point for the equilibrium chemically reacting air model use variable transport properties and specific heat. However, for the calorically perfect air model, heating rate calculations use a constant Prandtl number. Sample calculations of the six shock wave interference patterns, a listing of the computer program, and flowcharts of the programming logic are included.

  4. A reaction-based paradigm to model reactive chemical transport in groundwater with general kinetic and equilibrium reactions

    International Nuclear Information System (INIS)

    Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C.; Brooks, Scott C; Pace, Molly; Kim, Young Jin; Jardine, Philip M.; Watson, David B.

    2007-01-01

    This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing NE equilibrium reactions and a set of reactive transport equations of M-NE kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies to deal with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.

  5. A reaction-based paradigm to model reactive chemical transport in groundwater with general kinetic and equilibrium reactions.

    Science.gov (United States)

    Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C; Brooks, Scott C; Pace, Molly N; Kim, Young-Jin; Jardine, Philip M; Watson, David B

    2007-06-16

    This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing N(E) equilibrium reactions and a set of reactive transport equations of M-N(E) kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies to deal with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.
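
    A toy Python illustration of the decomposition idea (a made-up three-species network, not the model's reaction system): linear combinations of species that lie in the left null space of the fast equilibrium reaction's stoichiometry are unaffected by that reaction and can serve as kinetic-variables, so the equilibrium step drops out of their transport equations.

```python
import numpy as np
from scipy.linalg import null_space

# Species ordering: [A, B, C]
# Equilibrium reaction:  A <-> B   (stoichiometric column: A -1, B +1, C 0)
S_eq = np.array([[-1.0], [1.0], [0.0]])

# Kinetic reaction:      B -> C
S_kin = np.array([[0.0], [-1.0], [1.0]])

# Kinetic-variables are combinations u = W^T c with W spanning the left null
# space of S_eq, i.e. W^T S_eq = 0, so they are invariant under the fast step.
W = null_space(S_eq.T)
print("W^T S_eq  =", np.round((W.T @ S_eq).ravel(), 12))   # ~0: equilibrium reaction eliminated
print("W^T S_kin =", np.round((W.T @ S_kin).ravel(), 3))   # kinetic reaction still acts on them
```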

  6. Geometric Generalisation of Surrogate Model-Based Optimisation to Combinatorial and Program Spaces

    Directory of Open Access Journals (Sweden)

    Yong-Hyuk Kim

    2014-01-01

    Full Text Available Surrogate models (SMs can profitably be employed, often in conjunction with evolutionary algorithms, in optimisation in which it is expensive to test candidate solutions. The spatial intuition behind SMs makes them naturally suited to continuous problems, and the only combinatorial problems that have been previously addressed are those with solutions that can be encoded as integer vectors. We show how radial basis functions can provide a generalised SM for combinatorial problems which have a geometric solution representation, through the conversion of that representation to a different metric space. This approach allows an SM to be cast in a natural way for the problem at hand, without ad hoc adaptation to a specific representation. We test this adaptation process on problems involving binary strings, permutations, and tree-based genetic programs.
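
    A minimal Python sketch of the generalisation described above (the OneMax objective, kernel width and sample size are illustrative assumptions, not the paper's setup): a radial basis function surrogate is built directly over binary strings by replacing the Euclidean distance with the Hamming distance.

```python
import numpy as np

def hamming(a, b):
    return int(np.sum(a != b))

def fit_rbf(X, y, gamma=0.5):
    """Interpolating Gaussian-RBF surrogate over Hamming distances: solve Phi w = y."""
    n = len(X)
    Phi = np.array([[np.exp(-gamma * hamming(X[i], X[j]) ** 2) for j in range(n)]
                    for i in range(n)])
    return np.linalg.solve(Phi + 1e-6 * np.eye(n), y)   # small ridge for stability

def predict(X, w, x_new, gamma=0.5):
    phi = np.array([np.exp(-gamma * hamming(x, x_new) ** 2) for x in X])
    return float(phi @ w)

rng = np.random.default_rng(1)
true_f = lambda x: int(x.sum())                  # "expensive" objective (here: OneMax)
X = rng.integers(0, 2, size=(10, 8))             # sampled binary candidate solutions
y = np.array([true_f(x) for x in X], float)
w = fit_rbf(X, y)
x_new = rng.integers(0, 2, size=8)
print("surrogate:", round(predict(X, w, x_new), 2), " true:", true_f(x_new))
```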

  7. Equilibrium shoreface profiles

    DEFF Research Database (Denmark)

    Aagaard, Troels; Hughes, Michael G

    2017-01-01

    Large-scale coastal behaviour models use the shoreface profile of equilibrium as a fundamental morphological unit that is translated in space to simulate coastal response to, for example, sea level oscillations and variability in sediment supply. Despite a longstanding focus on the shoreface...... profile and its relevance to predicting coastal response to changing environmental conditions, the processes and dynamics involved in shoreface equilibrium are still not fully understood. Here, we apply a process-based empirical sediment transport model, combined with morphodynamic principles to provide......; there is no tuning or calibration and computation times are short. It is therefore easily implemented with repeated iterations to manage uncertainty....

  8. Optimisation of logistics processes of energy grass collection

    Science.gov (United States)

    Bányai, Tamás.

    2010-05-01

    The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid decisions made only by right of experience and intuition, the optimisation and analysis of collection processes based on mathematical models and methods is the scientifically sound way. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. However, the optimisation methods developed in the literature [2] take into account harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications and other key variables, but the possibility of multiple collection points and multi-level collection is not considered. The possible uses of energy grass are very wide (energetic use, biogas and bio-alcohol production, paper and textile industry, industrial fibre material, fodder, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; pre-scheduling of processing of energy grass at each facility (exclusive warehousing). The model takes into account the following assumptions: (1) cooperative relations among processing and production facilities, (2) capacity constraints are not ignored, (3) the cost function of transportation is non-linear, (4) drivers' working conditions are ignored. The

  9. Multi-binding site model-based curve-fitting program for the computation of RIA data

    International Nuclear Information System (INIS)

    Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.

    1977-01-01

    In this paper, a comparison is made of model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model, which will successfully fit a wide range of assay data and which can be run on a mini-computer, is described. This more sophisticated model also provides estimates of the binding-site concentrations and the values of the respective equilibrium constants present: the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ) [de
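
    A small Python sketch of what such a model-based fit can look like (a two-binding-site mass-action model with synthetic bound/free data; all parameter values are illustrative assumptions, not the original program): fitting B(F) = sum_i q_i*K_i*F/(1 + K_i*F) returns estimates of the binding-site concentrations q_i and equilibrium constants K_i.

```python
import numpy as np
from scipy.optimize import curve_fit

def bound(F, q1, K1, q2, K2):
    """Bound ligand as a function of free ligand for two independent binding sites."""
    return q1 * K1 * F / (1 + K1 * F) + q2 * K2 * F / (1 + K2 * F)

rng = np.random.default_rng(2)
F = np.logspace(-3, 2, 25)                       # free ligand concentration (arbitrary units)
B_true = bound(F, 1.0, 5.0, 0.3, 0.05)           # "true" parameters used to simulate data
B_obs = B_true * (1 + 0.02 * rng.standard_normal(F.size))   # 2% multiplicative noise

p0 = [0.5, 1.0, 0.5, 0.1]                        # rough starting guess
popt, _ = curve_fit(bound, F, B_obs, p0=p0, maxfev=20000)
print("fitted q1, K1, q2, K2:", np.round(popt, 3))
```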

  10. Development of a bi-equilibrium model for biomass gasification in a downdraft bed reactor.

    Science.gov (United States)

    Biagini, Enrico; Barontini, Federica; Tognotti, Leonardo

    2016-02-01

    This work proposes a simple and accurate tool for predicting the main parameters of biomass gasification (syngas composition, heating value, flow rate), suitable for process study and system analysis. A multizonal model based on non-stoichiometric equilibrium models and a repartition factor, simulating the bypass of pyrolysis products around the oxidant zone, was developed. The results of tests with different feedstocks (corn cobs, wood pellets, rice husks and vine pruning) in a demonstrative downdraft gasifier (350 kW) were used for validation. The average discrepancy between model and experimental results was up to 8 times smaller than that obtained with the simple equilibrium model. The repartition factor was successfully related to the operating conditions and characteristics of the biomass to simulate different conditions of the gasifier (variation in throughput, densification and mixing of feedstock) and analyze the model sensitivity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Using marketing theory to inform strategies for recruitment: a recruitment optimisation model and the txt2stop experience

    Science.gov (United States)

    2014-01-01

    Background Recruitment is a major challenge for many trials; just over half reach their targets and almost a third resort to grant extensions. The economic and societal implications of this shortcoming are significant. Yet, we have a limited understanding of the processes that increase the probability that recruitment targets will be achieved. Accordingly, there is an urgent need to bring analytical rigour to the task of improving recruitment, thereby increasing the likelihood that trials reach their recruitment targets. This paper presents a conceptual framework that can be used to improve recruitment to clinical trials. Methods Using a case-study approach, we reviewed the range of initiatives that had been undertaken to improve recruitment in the txt2stop trial using qualitative (semi-structured interviews with the principal investigator) and quantitative (recruitment) data analysis. Later, the txt2stop recruitment practices were compared to a previous model of marketing a trial and to key constructs in social marketing theory. Results Post hoc, we developed a recruitment optimisation model to serve as a conceptual framework to improve recruitment to clinical trials. A core premise of the model is that improving recruitment needs to be an iterative, learning process. The model describes three essential activities: i) recruitment phase monitoring, ii) marketing research, and iii) the evaluation of current performance. We describe the initiatives undertaken by the txt2stop trial and the results achieved, as an example of the use of the model. Conclusions Further research should explore the impact of adopting the recruitment optimisation model when applied to other trials. PMID:24886627

  12. Using marketing theory to inform strategies for recruitment: a recruitment optimisation model and the txt2stop experience.

    Science.gov (United States)

    Galli, Leandro; Knight, Rosemary; Robertson, Steven; Hoile, Elizabeth; Oladapo, Olubukola; Francis, David; Free, Caroline

    2014-05-22

    Recruitment is a major challenge for many trials; just over half reach their targets and almost a third resort to grant extensions. The economic and societal implications of this shortcoming are significant. Yet, we have a limited understanding of the processes that increase the probability that recruitment targets will be achieved. Accordingly, there is an urgent need to bring analytical rigour to the task of improving recruitment, thereby increasing the likelihood that trials reach their recruitment targets. This paper presents a conceptual framework that can be used to improve recruitment to clinical trials. Using a case-study approach, we reviewed the range of initiatives that had been undertaken to improve recruitment in the txt2stop trial using qualitative (semi-structured interviews with the principal investigator) and quantitative (recruitment) data analysis. Later, the txt2stop recruitment practices were compared to a previous model of marketing a trial and to key constructs in social marketing theory. Post hoc, we developed a recruitment optimisation model to serve as a conceptual framework to improve recruitment to clinical trials. A core premise of the model is that improving recruitment needs to be an iterative, learning process. The model describes three essential activities: i) recruitment phase monitoring, ii) marketing research, and iii) the evaluation of current performance. We describe the initiatives undertaken by the txt2stop trial and the results achieved, as an example of the use of the model. Further research should explore the impact of adopting the recruitment optimisation model when applied to other trials.

  13. Energy taxes and wages in a general equilibrium model of production

    International Nuclear Information System (INIS)

    Thompson, H.

    2000-01-01

    Energy taxes are responsible for a good deal of observed differences in energy prices across states and countries. They alter patterns of production and income distribution. The present paper examines the potential of energy taxes to lower wages in a general equilibrium model of production with capital, labour and energy inputs. (Author)

  14. Transfer coefficients to terrestrial food products in equilibrium assessment models for nuclear installations

    International Nuclear Information System (INIS)

    Zach, R.

    1980-09-01

    Transfer coefficients have become virtually indispensable in the study of the fate of radioisotopes released from nuclear installations. These coefficients are used in equilibrium assessment models where they specify the degree of transfer in food chains of individual radioisotopes from soil to plant products and from feed or forage and drinking water to animal products and ultimately to man. Information on transfer coefficients for terrestrial food chain models is very piecemeal and occurs in a wide variety of journals and reports. To enable us to choose or determine suitable values for assessments, we have addressed the following aspects of transfer coefficients on a very broad scale: (1) definitions, (2) the equilibrium assumption, which stipulates that transfer coefficients be restricted to equilibrium or steady-state conditions, (3) the assumption of linearity, that is, the idea that radioisotope concentrations in food products increase linearly with contamination levels in the soil or animal feed, (4) methods of determination, (5) variability, (6) generic versus site-specific values, (7) statistical aspects, (8) use, (9) sources of currently used values, (10) criteria for revising values, (11) establishment and maintenance of files on transfer coefficients, and (12) future developments. (auth)
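
    A minimal worked example in Python of how such coefficients are applied in an equilibrium assessment (all numerical values are illustrative assumptions, not from the report): activity is carried from soil to forage via a concentration ratio and from forage to cow's milk via a feed-to-milk transfer coefficient, assuming steady state and linearity.

```python
# Illustrative soil -> forage -> milk transfer chain at equilibrium.
soil_conc = 50.0          # Bq/kg dry soil (assumed)
cr_soil_to_plant = 0.05   # (Bq/kg plant) / (Bq/kg soil), soil-to-plant concentration ratio (assumed)
feed_intake = 16.0        # kg dry forage eaten per day by the cow (assumed)
fm_milk = 1.0e-2          # d/L, feed-to-milk transfer coefficient (assumed)

forage_conc = cr_soil_to_plant * soil_conc            # Bq/kg forage
milk_conc = fm_milk * feed_intake * forage_conc       # Bq/L milk at equilibrium
print(f"forage: {forage_conc:.2f} Bq/kg, milk: {milk_conc:.3f} Bq/L")
```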

  15. Non equilibrium atomic processes and plasma spectroscopy

    International Nuclear Information System (INIS)

    Kato, Takako

    2003-01-01

    Along with the technical progress in plasma spectroscopy, non equilibrium ionization processes have been recently observed. We study non local thermodynamic equilibrium and non ionization equilibrium for various kinds of plasmas. Specifically we discuss non equilibrium atomic processes in magnetically confined plasmas, solar flares and laser produced plasmas using a collisional radiative model based on plasma spectroscopic data. (author)

  16. Optimisation of decision making under uncertainty throughout field lifetime: A fractured reservoir example

    Science.gov (United States)

    Arnold, Dan; Demyanov, Vasily; Christie, Mike; Bakay, Alexander; Gopa, Konstantin

    2016-10-01

    Assessing the change in uncertainty in reservoir production forecasts over field lifetime is rarely undertaken because of the complexity of joining together the individual workflows. This becomes particularly important in complex fields such as naturally fractured reservoirs. The impact of this problem has been identified in previous studies, and many solutions have been proposed but never implemented on complex reservoir problems due to the computational cost of quantifying uncertainty and optimising the reservoir development, specifically knowing how many and what kind of simulations to run. This paper demonstrates a workflow that propagates uncertainty throughout field lifetime, and into the decision-making process, by a combination of a metric-based approach, multi-objective optimisation and Bayesian estimation of uncertainty. The workflow propagates uncertainty estimates from appraisal into initial development optimisation, then updates uncertainty through history matching and finally propagates it into late-life optimisation. The combination of techniques applied, namely the metric approach and multi-objective optimisation, helps evaluate development options under uncertainty. This was achieved with a significantly reduced number of flow simulations, such that the combined workflow is computationally feasible to run for a real-field problem. This workflow is applied to two synthetic naturally fractured reservoir (NFR) case studies in appraisal, field development, history matching and mid-life EOR stages. The first is a simple sector model, while the second is a more complex full-field example based on a real-life analogue. This study infers geological uncertainty from an ensemble of models based on a Brazilian carbonate outcrop, which are propagated through the field lifetime, before and after the start of production, with the inclusion of production data significantly collapsing the spread of P10-P90 in reservoir forecasts. The workflow links uncertainty

  17. A novel multiphysic model for simulation of swelling equilibrium of ionized thermal-stimulus responsive hydrogels

    Science.gov (United States)

    Li, Hua; Wang, Xiaogui; Yan, Guoping; Lam, K. Y.; Cheng, Sixue; Zou, Tao; Zhuo, Renxi

    2005-03-01

    In this paper, a novel multiphysic mathematical model is developed for simulation of the swelling equilibrium of ionized temperature-sensitive hydrogels with the volume phase transition, and it is termed the multi-effect-coupling thermal-stimulus (MECtherm) model. This model consists of the steady-state Nernst-Planck equation, the Poisson equation and a swelling equilibrium governing equation based on Flory's mean-field theory, in which two types of polymer-solvent interaction parameters, as functions of temperature and polymer-network volume fraction, are specified with or without consideration of the hydrogen bond interaction. In order to examine the MECtherm model, which consists of nonlinear partial differential equations, a meshless Hermite-Cloud method is used for numerical solution of the one-dimensional swelling equilibrium of thermal-stimulus responsive hydrogels immersed in a bathing solution. The computed results are in very good agreement with experimental data for the variation of volume swelling ratio with temperature. The influences of the salt concentration and the initial fixed-charge density on the variations of the volume swelling ratio of the hydrogels, the mobile ion concentrations and the electric potential of both the interior hydrogel and the exterior bathing solution are discussed in detail.

  18. Clarifications to the limitations of the s-α equilibrium model for gyrokinetic computations of turbulence

    International Nuclear Information System (INIS)

    Lapillonne, X.; Brunner, S.; Dannert, T.; Jolliet, S.; Marinoni, A.; Villard, L.; Goerler, T.; Jenko, F.; Merz, F.

    2009-01-01

    In the context of gyrokinetic flux-tube simulations of microturbulence in magnetized toroidal plasmas, different treatments of the magnetic equilibrium are examined. Considering the Cyclone DIII-D base case parameter set [Dimits et al., Phys. Plasmas 7, 969 (2000)], significant differences in the linear growth rates, the linear and nonlinear critical temperature gradients, and the nonlinear ion heat diffusivities are observed between results obtained using either an s-α or a magnetohydrodynamic (MHD) equilibrium. Similar disagreements have been reported previously [Redd et al., Phys. Plasmas 6, 1162 (1999)]. In this paper it is shown that these differences result primarily from the approximation made in the standard implementation of the s-α model, in which the straight field line angle is identified with the poloidal angle, leading to inconsistencies of order ε (ε = a/R is the inverse aspect ratio, a the minor radius and R the major radius). An equilibrium model with concentric, circular flux surfaces and a correct treatment of the straight field line angle gives results very close to those using a finite-ε, low-β MHD equilibrium. Such detailed investigation of the equilibrium implementation is of particular interest when comparing flux-tube and global codes. It is indeed shown here that previously reported agreements between local and global simulations in fact result from the order-ε inconsistencies in the s-α model, coincidentally compensating finite-ρ* effects in the global calculations, where ρ* = ρ_s/a with ρ_s the ion sound Larmor radius. True convergence between local and global simulations is finally obtained by correct treatment of the geometry in both cases, and considering the appropriate ρ* → 0 limit in the latter case.
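
    A small numerical check in Python of the order-ε effect discussed above (the inverse aspect ratio value and the circular-flux-surface transformation used here are illustrative assumptions, not extracted from the paper): for concentric circular flux surfaces the straight-field-line angle θ* differs from the poloidal angle θ by a term of order ε.

```python
import numpy as np

eps = 0.18                                   # inverse aspect ratio a/R (Cyclone-like value, assumed)
theta = np.linspace(0.0, np.pi, 5)

# Straight-field-line angle for concentric circular flux surfaces (low-beta limit, assumed form):
theta_star = 2.0 * np.arctan(np.sqrt((1.0 - eps) / (1.0 + eps)) * np.tan(theta / 2.0))

print("theta      :", np.round(theta, 3))
print("theta*     :", np.round(theta_star, 3))
print("difference :", np.round(theta_star - theta, 3))   # O(eps), roughly -eps*sin(theta)
```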

  19. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yen, E-mail: yen.liu@nasa.gov; Vinokur, Marcel [NASA Ames Research Center, Moffett Field, California 94035 (United States); Panesi, Marco; Sahai, Amal [University of Illinois, Urbana-Champaign, Illinois 61801 (United States)

    2015-04-07

    This paper opens a new door to macroscopic modeling for thermal and chemical non-equilibrium. In a game-changing approach, we discard conventional theories and practices stemming from the separation of internal energy modes and the Landau-Teller relaxation equation. Instead, we solve the fundamental microscopic equations in their moment forms but seek only optimum representations for the microscopic state distribution function that provides converged and time accurate solutions for certain macroscopic quantities at all times. The modeling makes no ad hoc assumptions or simplifications at the microscopic level and includes all possible collisional and radiative processes; it therefore retains all non-equilibrium fluid physics. We formulate the thermal and chemical non-equilibrium macroscopic equations and rate coefficients in a coupled and unified fashion for gases undergoing completely general transitions. All collisional partners can have internal structures and can change their internal energy states after transitions. The model is based on the reconstruction of the state distribution function. The internal energy space is subdivided into multiple groups in order to better describe non-equilibrium state distributions. The logarithm of the distribution function in each group is expressed as a power series in internal energy based on the maximum entropy principle. The method of weighted residuals is applied to the microscopic equations to obtain macroscopic moment equations and rate coefficients succinctly to any order. The model’s accuracy depends only on the assumed expression of the state distribution function and the number of groups used and can be self-checked for accuracy and convergence. We show that the macroscopic internal energy transfer, similar to mass and momentum transfers, occurs through nonlinear collisional processes and is not a simple relaxation process described by, e.g., the Landau-Teller equation. Unlike the classical vibrational energy

  20. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    International Nuclear Information System (INIS)

    Liu, Yen; Vinokur, Marcel; Panesi, Marco; Sahai, Amal

    2015-01-01

    This paper opens a new door to macroscopic modeling for thermal and chemical non-equilibrium. In a game-changing approach, we discard conventional theories and practices stemming from the separation of internal energy modes and the Landau-Teller relaxation equation. Instead, we solve the fundamental microscopic equations in their moment forms but seek only optimum representations for the microscopic state distribution function that provides converged and time accurate solutions for certain macroscopic quantities at all times. The modeling makes no ad hoc assumptions or simplifications at the microscopic level and includes all possible collisional and radiative processes; it therefore retains all non-equilibrium fluid physics. We formulate the thermal and chemical non-equilibrium macroscopic equations and rate coefficients in a coupled and unified fashion for gases undergoing completely general transitions. All collisional partners can have internal structures and can change their internal energy states after transitions. The model is based on the reconstruction of the state distribution function. The internal energy space is subdivided into multiple groups in order to better describe non-equilibrium state distributions. The logarithm of the distribution function in each group is expressed as a power series in internal energy based on the maximum entropy principle. The method of weighted residuals is applied to the microscopic equations to obtain macroscopic moment equations and rate coefficients succinctly to any order. The model’s accuracy depends only on the assumed expression of the state distribution function and the number of groups used and can be self-checked for accuracy and convergence. We show that the macroscopic internal energy transfer, similar to mass and momentum transfers, occurs through nonlinear collisional processes and is not a simple relaxation process described by, e.g., the Landau-Teller equation. Unlike the classical vibrational energy

  1. Can elliptical galaxies be equilibrium systems

    Energy Technology Data Exchange (ETDEWEB)

    Caimmi, R [Padua Univ. (Italy). Ist. di Astronomia

    1980-08-01

    This paper deals with the question of whether elliptical galaxies can be considered as equilibrium systems (i.e., the gravitational + centrifugal potential is constant on the external surface). We find that equilibrium models such as Emden-Chandrasekhar polytropes and Roche polytropes with n = 0 can account for the main part of observations relative to the ratio of maximum rotational velocity to central velocity dispersion in elliptical systems. More complex models involving, for example, massive halos could lead to a more complete agreement. Models that are a good fit to the observed data are characterized by an inner component (where most of the mass is concentrated) and a low-density outer component. A comparison is performed between some theoretical density distributions and the density distribution observed by Young et al. (1978) in NGC 4473, but a number of limitations must be adopted. Alternative models, such as triaxial oblate non-equilibrium configurations with coaxial shells, involve a number of problems which are briefly discussed. We conclude that spheroidal oblate models describing elliptical galaxies cannot be ruled out until new analyses relative to more refined theoretical equilibrium models (involving, for example, massive halos) and more detailed observations are performed.

  2. Results of the 2010 IGSC Topical Session on Optimisation

    International Nuclear Information System (INIS)

    Bailey, Lucy

    2014-01-01

    Document available in abstract form only. Full text follows: The 2010 IGSC topical session on optimisation explored a wide range of issues concerning optimisation throughout the radioactive waste management process. Philosophical and ethical questions were discussed, such as: - To what extent is the process of optimisation more important than the end result? - How do we balance long-term environmental safety with near-term operational safety? - For how long should options be kept open? - In balancing safety and excessive cost, when is BAT achieved and who decides on this? - How should we balance the needs of current society with those of future generations? It was clear that optimisation is about getting the right balance between a range of issues that cover: radiation protection, environmental protection, operational safety, operational requirements, social expectations and cost. The optimisation process will also need to respect various constraints, which are likely to include: regulatory requirements, site restrictions, community-imposed requirements or restrictions and resource constraints. These issues were explored through a number of presentations that discussed practical cases of optimisation occurring at different stages of international radioactive waste management programmes. These covered: - Operations and decommissioning - management of large disused components, from the findings of an international study, presented by WPDD; - Concept option selection, prior to site selection - upstream and disposal system optioneering in the UK; - Siting decisions - examples from both Germany and France, explaining how optimisation is being used to support site comparisons and communicate siting decisions; - Repository design decisions - comparison of KBS-3 horizontal and vertical deposition options in Finland; and - On-going optimisation during repository operation - operational experience from WIPP in the US. The variety of the remarks and views expressed during the

  3. The Equilibrium Analysis of a Closed Economy Model with Government and Money Market Sector

    Directory of Open Access Journals (Sweden)

    Catalin Angelo Ioan

    2011-10-01

    Full Text Available In this paper, we first study the static equilibrium of a closed economy model in terms of the dependence of national income and the interest rate on the main factors, namely the marginal propensity to consume, the tax rate, the investment rate and the rate of currency demand. In the second part, we study the dynamic equilibrium solutions in terms of stability. We thus obtain the variation functions of national income and the interest rate and their limit values.
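
    A minimal Python sketch of the kind of static equilibrium computed in the paper (the IS-LM-style functional forms and all parameter values are invented for illustration, not the article's calibration): national income and the interest rate solve the goods-market and money-market equations simultaneously.

```python
import numpy as np

# Goods market: Y = C + I + G with C = c0 + c*(1 - t)*Y and I = i0 - i1*r
# Money market: M/P = l1*Y - l2*r
c0, c, t = 20.0, 0.8, 0.25       # autonomous consumption, propensity to consume, tax rate
i0, i1 = 50.0, 40.0              # investment function intercept and interest sensitivity
G = 30.0                         # government spending
l1, l2, M_over_P = 0.5, 60.0, 40.0   # money demand parameters and real money supply

# Linear system A @ [Y, r] = b from the two equilibrium conditions
A = np.array([[1.0 - c * (1.0 - t), i1],
              [l1, -l2]])
b = np.array([c0 + i0 + G, M_over_P])
Y, r = np.linalg.solve(A, b)
print(f"equilibrium national income Y = {Y:.1f}, interest rate r = {r:.3f}")
```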

  4. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    Science.gov (United States)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into this study to analyse the warpage. A Design of Experiments (DOE) for Response Surface Methodology (RSM) was constructed and, using the regression equation from RSM, Particle Swarm Optimisation (PSO) was applied. The optimisation method results in optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO. The additional improvement of PSO over RSM is only 0.01%. Thus, the optimisation using RSM is already sufficient to give the best parameter combination and optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
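
    A compact Python sketch of the RSM-plus-PSO step (the quadratic response-surface coefficients, the coded variables and the swarm settings are illustrative assumptions, not the study's fitted model): a plain particle swarm minimises a fitted warpage surface over two coded process parameters.

```python
import numpy as np

def warpage(x):
    """Fitted RSM quadratic in coded units [-1, 1]; coefficients are placeholders."""
    x1, x2 = x[..., 0], x[..., 1]
    return 0.52 - 0.11 * x1 - 0.07 * x2 + 0.09 * x1**2 + 0.05 * x2**2 + 0.03 * x1 * x2

rng = np.random.default_rng(3)
n, iters = 30, 200
pos = rng.uniform(-1, 1, (n, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), warpage(pos)
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -1, 1)        # keep particles inside the coded design space
    f = warpage(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("optimal coded parameters:", np.round(gbest, 3),
      " predicted warpage:", round(float(warpage(gbest)), 4))
```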

  5. Non-equilibrium phase transitions

    CERN Document Server

    Henkel, Malte; Lübeck, Sven

    2009-01-01

    This book describes two main classes of non-equilibrium phase-transitions: (a) static and dynamics of transitions into an absorbing state, and (b) dynamical scaling in far-from-equilibrium relaxation behaviour and ageing. The first volume begins with an introductory chapter which recalls the main concepts of phase-transitions, set for the convenience of the reader in an equilibrium context. The extension to non-equilibrium systems is made by using directed percolation as the main paradigm of absorbing phase transitions and in view of the richness of the known results an entire chapter is devoted to it, including a discussion of recent experimental results. Scaling theories and a large set of both numerical and analytical methods for the study of non-equilibrium phase transitions are thoroughly discussed. The techniques used for directed percolation are then extended to other universality classes and many important results on model parameters are provided for easy reference.

  6. Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology

    Science.gov (United States)

    Kumar, Amit; Soota, Tarun; Kumar, Jitendra

    2018-03-01

    Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with grey relational analysis has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high-speed steel (HSS) M2 grade workpiece material. A regression model of the significant factors pulse-on time, pulse-off time, peak current and wire feed is considered for optimising the response variables material removal rate (MRR), surface roughness and kerf width. The optimal machining parameter settings were obtained using the grey relational grade. ANOVA is applied to determine the significance of the input parameters with respect to the grey relational grade.
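
    A short Python sketch of the grey relational analysis step (the four experimental rows and response values are made up, not the study's data): the responses are normalised, deviations from the ideal sequence are converted to grey relational coefficients, and their mean gives the grey relational grade used for ranking.

```python
import numpy as np

# rows = experiments, columns = [MRR, surface roughness, kerf width]
y = np.array([[12.0, 2.8, 0.30],
              [15.0, 3.1, 0.33],
              [10.0, 2.4, 0.28],
              [18.0, 3.5, 0.35]])

larger_better = np.array([True, False, False])   # MRR larger-is-better, others smaller-is-better

# Normalise each response to [0, 1] according to its preference direction
norm = np.where(larger_better,
                (y - y.min(0)) / (y.max(0) - y.min(0)),
                (y.max(0) - y) / (y.max(0) - y.min(0)))

delta = 1.0 - norm                               # deviation from the ideal sequence
zeta = 0.5                                       # distinguishing coefficient
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = coeff.mean(axis=1)                       # grey relational grade per experiment
print("grades:", np.round(grade, 3), " best run:", int(grade.argmax()) + 1)
```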

  7. An equilibrium pricing model for weather derivatives in a multi-commodity setting

    International Nuclear Information System (INIS)

    Lee, Yongheon; Oren, Shmuel S.

    2009-01-01

    Many industries are exposed to weather risk. Weather derivatives can play a key role in hedging and diversifying such risk because the uncertainty in a company's profit function can be correlated to weather condition which affects diverse industry sectors differently. Unfortunately the weather derivatives market is a classical example of an incomplete market that is not amenable to standard methodologies used for derivative pricing in complete markets. In this paper, we develop an equilibrium pricing model for weather derivatives in a multi-commodity setting. The model is constructed in the context of a stylized economy where agents optimize their hedging portfolios which include weather derivatives that are issued in a fixed quantity by a financial underwriter. The supply and demand resulting from hedging activities and the supply by the underwriter are combined in an equilibrium pricing model under the assumption that all agents maximize some risk averse utility function. We analyze the gains due to the inclusion of weather derivatives in hedging portfolios and examine the components of that gain attributable to hedging and to risk sharing. (author)

  8. State-to-state modeling of non-equilibrium air nozzle flows

    Science.gov (United States)

    Nagnibeda, E.; Papina, K.; Kunova, O.

    2018-05-01

    One-dimensional non-equilibrium air flows in nozzles are studied on the basis of the state-to-state description of vibrational-chemical kinetics. Five-component mixture N2/O2/NO/N/O is considered taking into account Zeldovich exchange reactions of NO formation, dissociation, recombination and vibrational energy transitions. The equations for vibrational and chemical kinetics in a flow are coupled to the conservation equations of momentum and total energy and solved numerically for different conditions in a nozzle throat. The vibrational distributions of nitrogen and oxygen molecules, number densities of species as well as the gas temperature and flow velocity along a nozzle axis are analysed using the detailed state-to-state flow description and in the frame of the simplified one-temperature thermal equilibrium kinetic model. The comparison of the results showed the influence of non-equilibrium kinetics on macroscopic nozzle flow parameters. In the state-to-state approach, non-Boltzmann vibrational distributions of N2 and O2 molecules with a plateau part at intermediate levels are found. The results are found with the use of the complete and simplified schemes of reactions and the impact of exchange reactions, dissociation and recombination on variation of vibrational level populations, mixture composition, gas velocity and temperature along a nozzle axis is shown.

  9. Optimising the introduction of multiple childhood vaccines in Japan: A model proposing the introduction sequence achieving the highest health gains.

    Science.gov (United States)

    Standaert, Baudouin; Schecroun, Nadia; Ethgen, Olivier; Topachevskyi, Oleksandr; Morioka, Yoriko; Van Vlaenderen, Ilse

    2017-12-01

    Many countries struggle with the prioritisation of introducing new vaccines because of budget limitations and lack of focus on public health goals. A model has been developed that defines how specific health goals can be optimised through immunisation within vaccination budget constraints. Japan, as a country example, could introduce 4 new pediatric vaccines targeting influenza, rotavirus, pneumococcal disease and mumps, with known burden of disease, vaccine efficacies and maximum achievable coverages. Operating under budget constraints, the Portfolio-model for the Management of Vaccines (PMV) identifies the optimal vaccine ranking and combination for achieving the maximum QALY gain in children over a period of 10 calendar years, yielding an optimal sequence of vaccine introduction (mumps [1st], followed by influenza [2nd], rotavirus [3rd], and pneumococcal [4th]). With exactly the same budget but without vaccine ranking, the total QALY gain can be 20% lower. The PMV model could be a helpful tool for decision makers in environments with limited budgets where vaccines have to be selected to optimise specific health goals. Copyright © 2017 GlaxoSmithKline Biologicals SA. Published by Elsevier B.V. All rights reserved.
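
    A toy Python sketch of the budget-constrained selection idea behind such a portfolio model (all costs, QALY gains and the budget are invented; the actual PMV model works on a much richer epidemiological and financial description): candidate introduction orders are scored by the cumulative QALY gain achievable without exceeding the budget.

```python
from itertools import permutations

budget = 100.0        # annual vaccination budget, arbitrary money units (assumed)
vaccines = {          # name: (annual programme cost, annual QALY gain) -- illustrative numbers
    "mumps":        (15.0, 120.0),
    "influenza":    (30.0, 180.0),
    "rotavirus":    (35.0, 150.0),
    "pneumococcal": (60.0, 200.0),
}

def introduce(order):
    """Walk through an introduction order, adding a vaccine only if it still fits the budget."""
    spent, gain, chosen = 0.0, 0.0, []
    for name in order:
        cost, qaly = vaccines[name]
        if spent + cost <= budget:
            spent, gain = spent + cost, gain + qaly
            chosen.append(name)
    return gain, chosen

best_gain, best_seq = max((introduce(p) for p in permutations(vaccines)), key=lambda r: r[0])
print("introduce:", " -> ".join(best_seq), "| total QALY gain:", best_gain)
```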

  10. Microscopic Simulation and Macroscopic Modeling for Thermal and Chemical Non-Equilibrium

    Science.gov (United States)

    Liu, Yen; Panesi, Marco; Vinokur, Marcel; Clarke, Peter

    2013-01-01

    This paper deals with the accurate microscopic simulation and macroscopic modeling of extreme non-equilibrium phenomena, such as those encountered during hypersonic entry into a planetary atmosphere. The state-to-state microscopic equations involving internal excitation, de-excitation, dissociation, and recombination of nitrogen molecules due to collisions with nitrogen atoms are solved time-accurately. Strategies to increase the numerical efficiency are discussed. The problem is then modeled using a few macroscopic variables. The model is based on reconstructions of the state distribution function using the maximum entropy principle. The internal energy space is subdivided into multiple groups in order to better describe the non-equilibrium gases. The method of weighted residuals is applied to the microscopic equations to obtain macroscopic moment equations and rate coefficients. The modeling is completely physics-based, and its accuracy depends only on the assumed expression of the state distribution function and the number of groups used. The model makes no assumption at the microscopic level, and all possible collisional and radiative processes are allowed. The model is applicable to both atoms and molecules and their ions. Several limiting cases are presented to show that the model recovers the classical two-temperature models if all states are in one group and the model reduces to the microscopic equations if each group contains only one state. Numerical examples and model validations are carried out for both the uniform and linear distributions. Results show that the original set of over nine thousand microscopic equations can be reduced to 2 macroscopic equations using 1 to 5 groups with excellent agreement. The computer time is decreased from 18 hours to less than 1 second.
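
    An illustrative Python sketch of the per-group reconstruction step (the energy levels, group density and mean energy are arbitrary choices, not the nitrogen system of the paper): within one group the logarithm of the state distribution is taken as linear in internal energy, and its two coefficients are fixed by matching the prescribed group number density and average internal energy, in the spirit of the maximum-entropy closure.

```python
import numpy as np
from scipy.optimize import fsolve

E = np.linspace(0.0, 1.0, 20)       # internal energy levels inside the group (arbitrary units)
n_g, e_g = 1.0, 0.35                # prescribed group number density and mean internal energy

def residual(p):
    a, b = p
    f = np.exp(a - b * E)           # log f linear in E within the group
    return [f.sum() - n_g, (E * f).sum() / f.sum() - e_g]

a, b = fsolve(residual, x0=[-3.0, 1.0])
f = np.exp(a - b * E)
print("recovered density:", round(float(f.sum()), 6),
      " mean energy:", round(float((E * f).sum() / f.sum()), 6))
```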

  11. Non-equilibrium supramolecular polymerization.

    Science.gov (United States)

    Sorrenti, Alessandro; Leira-Iglesias, Jorge; Markvoort, Albert J; de Greef, Tom F A; Hermans, Thomas M

    2017-09-18

    Supramolecular polymerization has been traditionally focused on the thermodynamic equilibrium state, where one-dimensional assemblies reside at the global minimum of the Gibbs free energy. The pathway and rate to reach the equilibrium state are irrelevant, and the resulting assemblies remain unchanged over time. In the past decade, the focus has shifted to kinetically trapped (non-dissipative non-equilibrium) structures that heavily depend on the method of preparation (i.e., pathway complexity), and where the assembly rates are of key importance. Kinetic models have greatly improved our understanding of competing pathways, and shown how to steer supramolecular polymerization in the desired direction (i.e., pathway selection). The most recent innovation in the field relies on energy or mass input that is dissipated to keep the system away from the thermodynamic equilibrium (or from other non-dissipative states). This tutorial review aims to provide the reader with a set of tools to identify different types of self-assembled states that have been explored so far. In particular, we aim to clarify the often unclear use of the term "non-equilibrium self-assembly" by subdividing systems into dissipative, and non-dissipative non-equilibrium states. Examples are given for each of the states, with a focus on non-dissipative non-equilibrium states found in one-dimensional supramolecular polymerization.

  12. First principles modeling of hydrocarbons conversion in non-equilibrium plasma

    Energy Technology Data Exchange (ETDEWEB)

    Deminsky, M.A.; Strelkova, M.I.; Durov, S.G.; Jivotov, V.K.; Rusanov, V.D.; Potapkin, B.V. [Russian Research Centre Kurchatov Inst., Moscow (Russian Federation)

    2001-07-01

    Theoretical justification of the catalytic activity of non-equilibrium plasma in the hydrocarbon conversion process is presented in this paper. The detailed model of higher-hydrocarbon conversion includes the gas-phase reactions, the chemistry of the growth of polycyclic aromatic hydrocarbons (PAHs), which are precursors of soot particle formation, the formation of neutral and charged clusters and soot particles, and ion-molecular gas-phase and heterogeneous chemistry. The results of the theoretical analysis are compared with experimental results. (authors)

  13. Thermodynamics of binary mixtures of N-methyl-2-pyrrolidinone and ketone. Experimental results and modelling of the (solid + liquid) equilibrium and the (vapour + liquid) equilibrium. The modified UNIFAC (Do) model characterization

    International Nuclear Information System (INIS)

    Domanska, Urszula; Lachwa, Joanna

    2005-01-01

    The (solid + liquid) equilibrium (SLE) measurements of eight binary systems containing N-methyl-2-pyrrolidinone (NMP) with (2-propanone, or 2-butanone, or 2-pentanone, or 3-pentanone, or cyclopentanone, or 2-hexanone, or 4-methyl-2-pentanone, or 3-heptanone) were carried out by using a dynamic method from T = 200 K to the melting point of NMP. Isothermal (vapour + liquid) equilibrium (VLE) data have been measured for three binary mixtures of NMP with 2-propanone, 3-pentanone and 2-hexanone over the pressure range from p = 0 kPa to p = 115 kPa. Data were obtained at the temperature T = 333.15 K for the first system and at T = 373.15 K for the other two systems. The experimental SLE results have been correlated using the binary-parameter Wilson, UNIQUAC ASM and two modified NRTL equations. The root-mean-square deviations of the solubility temperatures for all the calculated values vary from 0.32 K to 0.68 K and depend on the particular equation used. The VLE data were correlated with one to three parameters in the Redlich-Kister expansion. Binary mixtures of NMP with (2-propanone, or 2-butanone, or 2-pentanone, or 3-pentanone, or cyclopentanone, or 2-hexanone, or 4-methyl-2-pentanone, or 3-heptanone) have been investigated in the framework of the modified UNIFAC (Do) model. The reported new interaction parameters for the NMP group (c-CONCH3) and the carbonyl group (C=O) let the model consistently describe a set of thermodynamic properties, including (solid + liquid) equilibrium, (vapour + liquid) equilibrium, excess Gibbs energy and molar excess enthalpies of mixing. Our experimental and literature data for binary mixtures containing NMP and ketones were compared with the predictions of the modified UNIFAC (Do) model.
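
    An illustrative Python sketch of the Redlich-Kister correlation step mentioned above (the data are synthetic, not the measured NMP + ketone systems): excess Gibbs energy data are fitted with a three-parameter expansion, G_E/(R*T) = x1*x2*sum_k A_k*(x1 - x2)**k, by linear least squares.

```python
import numpy as np

x1 = np.linspace(0.05, 0.95, 19)
x2 = 1.0 - x1

# Pretend "experimental" dimensionless excess Gibbs energy with a little noise
gE_obs = x1 * x2 * (0.9 + 0.15 * (x1 - x2) - 0.05 * (x1 - x2) ** 2)
gE_obs = gE_obs + 0.002 * np.random.default_rng(4).standard_normal(x1.size)

# Design matrix for the Redlich-Kister coefficients A0, A1, A2
X = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(3)])
A, *_ = np.linalg.lstsq(X, gE_obs, rcond=None)
rms = np.sqrt(np.mean((X @ A - gE_obs) ** 2))
print("Redlich-Kister parameters A0..A2:", np.round(A, 3), " RMS deviation:", round(float(rms), 4))
```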

  14. (MBO) algorithm in multi-reservoir system optimisation

    African Journals Online (AJOL)

    A comparative study of marriage in honey bees optimisation (MBO) algorithm in ... A practical application of the marriage in honey bees optimisation (MBO) ... to those of other evolutionary algorithms, such as the genetic algorithm (GA), ant ...

  15. Mechatronic System Design Based On An Optimisation Approach

    DEFF Research Database (Denmark)

    Andersen, Torben Ole; Pedersen, Henrik Clemmensen; Hansen, Michael Rygaard

    The envisaged objective of this project is to extend the current state of the art regarding the design of complex mechatronic systems utilizing an optimisation approach. We propose to investigate a novel framework for mechatronic system design, the novelty and originality being the use...... of optimisation techniques. The methods used to optimise/design within the classical disciplines will be identified and extended to mechatronic system design....

  16. Evaluating equilibrium and non-equilibrium transport of bromide and isoproturon in disturbed and undisturbed soil columns

    Science.gov (United States)

    Dousset, S.; Thevenot, M.; Pot, V.; Šimunek, J.; Andreux, F.

    2007-12-01

    In this study, displacement experiments of isoproturon were conducted in disturbed and undisturbed columns of a silty clay loam soil under similar rainfall intensities. Solute transport occurred under saturated conditions in the undisturbed soil and under unsaturated conditions in the sieved soil because of a greater bulk density of the compacted undisturbed soil compared to the sieved soil. The objective of this work was to determine transport characteristics of isoproturon relative to bromide tracer. Triplicate column experiments were performed with sieved (structure partially destroyed to simulate conventional tillage) and undisturbed (structure preserved) soils. Bromide experimental breakthrough curves were analyzed using convective-dispersive and dual-permeability (DP) models (HYDRUS-1D). Isoproturon breakthrough curves (BTCs) were analyzed using the DP model that considered either chemical equilibrium or non-equilibrium transport. The DP model described the bromide elution curves of the sieved soil columns well, whereas it overestimated the tailing of the bromide BTCs of the undisturbed soil columns. A higher degree of physical non-equilibrium was found in the undisturbed soil, where 56% of total water was contained in the slow-flow matrix, compared to 26% in the sieved soil. Isoproturon BTCs were best described in both sieved and undisturbed soil columns using the DP model combined with the chemical non-equilibrium. Higher degradation rates were obtained in the transport experiments than in batch studies, for both soils. This was likely caused by hysteresis in sorption of isoproturon. However, it cannot be ruled out that higher degradation rates were due, at least in part, to the adopted first-order model. Results showed that for similar rainfall intensity, physical and chemical non-equilibrium were greater in the saturated undisturbed soil than in the unsaturated sieved soil. Results also suggested faster transport of isoproturon in the undisturbed soil due

  17. Optimisation of Investment Resources at Small Enterprises

    Directory of Open Access Journals (Sweden)

    Shvets Iryna B.

    2014-03-01

    Full Text Available The goal of the article lies in studying the process of optimisation of the structure of investment resources and in developing criteria and stages of optimisation of the volumes of investment resources for small enterprises by type of economic activity. The article characterises the process of transformation of investment resources into assets and liabilities of the balance sheets of small enterprises and calculates the structure of sources of formation of investment resources at small enterprises in Ukraine by type of economic activity in 2011. On the basis of the conducted analysis of the structure of investment resources of small enterprises, the article forms the main groups of optimisation criteria in the context of individual small enterprises by type of economic activity. The article offers an algorithm and a step-by-step scheme of optimisation of investment resources at small enterprises in the form of a multi-stage process of management of investment resources aimed at increasing their mobility and the rate of transformation of existing resources into investments. The prospect of further studies in this direction is the development of a structural and logical scheme of optimisation of the volumes of investment resources at small enterprises.

  18. Equilibrium and kinetic models for colloid release under transient solution chemistry conditions.

    Science.gov (United States)

    Bradford, Scott A; Torkzaban, Saeed; Leij, Feike; Simunek, Jiri

    2015-10-01

    We present continuum models to describe colloid release in the subsurface during transient physicochemical conditions. Our modeling approach relates the amount of colloid release to changes in the fraction of the solid surface area that contributes to retention. Equilibrium, kinetic, equilibrium and kinetic, and two-site kinetic models were developed to describe various rates of colloid release. These models were subsequently applied to experimental colloid release datasets to investigate the influence of variations in ionic strength (IS), pH, cation exchange, colloid size, and water velocity on release. Various combinations of equilibrium and/or kinetic release models were needed to describe the experimental data depending on the transient conditions and colloid type. Release of Escherichia coli D21g was promoted by a decrease in solution IS and an increase in pH, similar to expected trends for a reduction in the secondary minimum and nanoscale chemical heterogeneity. The retention and release of 20 nm carboxyl-modified latex nanoparticles (NPs) were demonstrated to be more sensitive to the presence of Ca(2+) than D21g. Specifically, retention of NPs was greater than that of D21g in the presence of 2 mM CaCl2 solution, and release of NPs only occurred after exchange of Ca(2+) by Na(+) and then a reduction in the solution IS. These findings highlight the limitations of conventional interaction energy calculations to describe colloid retention and release, and point to the need to consider other interactions (e.g., Born, steric, and/or hydration forces) and/or nanoscale heterogeneity. Temporal changes in the water velocity did not have a large influence on the release of D21g for the examined conditions. This insensitivity was likely due to factors that reduce the applied hydrodynamic torque and/or increase the resisting adhesive torque; e.g., macroscopic roughness and grain-grain contacts. Our analysis and models improve our understanding and ability to describe the amounts
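
    A minimal Python sketch of the combined equilibrium-plus-kinetic release idea (all parameter values are invented, not fitted to the datasets above): after a drop in ionic strength, a fraction of the retained colloids detaches immediately (equilibrium sites) and the remainder is released at a first-order rate (kinetic sites).

```python
import numpy as np

S0 = 1.0            # initially retained colloid mass, normalised (assumed)
f_eq = 0.35         # fraction on equilibrium sites, released instantly after the IS change (assumed)
k_rel = 0.15        # 1/h, first-order release rate from kinetic sites (assumed)
t = np.linspace(0.0, 24.0, 7)   # hours after the ionic-strength reduction

released = f_eq * S0 + (1.0 - f_eq) * S0 * (1.0 - np.exp(-k_rel * t))
retained = S0 - released

print("hours           :", t)
print("fraction freed  :", np.round(released / S0, 3))
print("fraction retained:", np.round(retained / S0, 3))
```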

  19. "Non-equilibrium" block copolymer micelles with glassy cores: a predictive approach based on theory of equilibrium micelles.

    Science.gov (United States)

    Nagarajan, Ramanathan

    2015-07-01

    Micelles generated in water from most amphiphilic block copolymers are widely recognized to be non-equilibrium structures. Typically, the micelles are prepared by a kinetic process, first allowing molecular scale dissolution of the block copolymer in a common solvent that likes both the blocks and then gradually replacing the common solvent by water to promote the hydrophobic blocks to aggregate and create the micelles. The non-equilibrium nature of the micelle originates from the fact that dynamic exchange between the block copolymer molecules in the micelle and the singly dispersed block copolymer molecules in water is suppressed, because of the glassy nature of the core forming polymer block and/or its very large hydrophobicity. Although most amphiphilic block copolymers generate such non-equilibrium micelles, no theoretical approach to a priori predict the micelle characteristics currently exists. In this work, we propose a predictive approach for non-equilibrium micelles with glassy cores by applying the equilibrium theory of micelles in two steps. In the first, we calculate the properties of micelles formed in the mixed solvent while true equilibrium prevails, until the micelle core becomes glassy. In the second step, we freeze the micelle aggregation number at this glassy state and calculate the corona dimension from the equilibrium theory of micelles. The condition when the micelle core becomes glassy is independently determined from a statistical thermodynamic treatment of diluent effect on polymer glass transition temperature. The predictions based on this "non-equilibrium" model compare reasonably well with experimental data for polystyrene-polyethylene oxide diblock copolymer, which is the most extensively studied system in the literature. In contrast, the application of the equilibrium model to describe such a system significantly overpredicts the micelle core and corona dimensions and the aggregation number. The non-equilibrium model suggests ways to

  20. Warpage optimisation on the moulded part with straight-drilled and conformal cooling channels using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    Science.gov (United States)

    Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.

    2017-09-01

    In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured as the extent of warpage of the moulded parts, while productivity is measured as the duration of the moulding cycle time. To control quality, many researchers have introduced various optimisation approaches, which have been proven to enhance the quality of the moulded parts produced. In order to improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the duration of the moulding cycle time. Therefore, this paper presents an application of an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), to a moulded part with straight-drilled and conformal cooling channel moulds. This study examined the warpage condition of the moulded parts before and after the optimisation work for both cooling channel designs. A front panel housing was selected as the specimen, and the performance of the proposed optimisation approach was analysed on the conventional straight-drilled cooling channels compared to the Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage, and warpage was reduced by 39.1% after optimisation for the straight-drilled cooling channels; cooling time is the most significant factor contributing to warpage, and warpage was reduced by 38.7% after optimisation for the MGSS conformal cooling channels. In addition, the findings show that applying the optimisation work to the conformal cooling channels offers better quality and productivity of the moulded part produced.

  1. Financial equilibrium with career concerns

    Directory of Open Access Journals (Sweden)

    Amil Dasgupta

    2006-03-01

    Full Text Available What are the equilibrium features of a financial market where a sizeable proportion of traders face reputational concerns? This question is central to our understanding of financial markets, which are increasingly dominated by institutional investors. We construct a model of delegated portfolio management that captures key features of the US mutual fund industry and embed it in an asset pricing framework. We thus provide a formal model of financial equilibrium with career-concerned agents. Fund managers differ in their ability to understand market fundamentals, and in every period investors choose a fund. In equilibrium, the presence of career concerns induces uninformed fund managers to churn, i.e., to engage in trading even when they face a negative expected return. Churners act as noise traders and enhance the level of trading volume. The equilibrium relationship between fund return and net fund flows displays a skewed shape that is consistent with stylized facts. The robustness of our core results is probed from several angles.

  2. The Use of VMD Data/Model to Test Different Thermodynamic Models for Vapour-Liquid Equilibrium

    DEFF Research Database (Denmark)

    Abildskov, Jens; Azquierdo-Gil, M.A.; Jonsson, Gunnar Eigil

    2004-01-01

    Vacuum membrane distillation (VMD) has been studied as a separation process to remove volatile organic compounds from aqueous streams. A vapour pressure difference across a microporous hydrophobic membrane is the driving force for the mass transport through the membrane pores (this transport take...... place in vapour phase). The vapour pressure difference is obtained in VMD processes by applying a vacuum on one side of the membrane. The membrane acts as a mere support for the liquid-vapour equilibrium. The evaporation of the liquid stream takes place on the feed side of the membrane...... values; membrane type: PTFE/PP/PVDF; feed flow rate; feed temperature. A comparison is made between different thermodynamic models for calculating the vapour-liquid equilibrium at the membrane/pore interface. (C) 2004 Elsevier B.V. All rights reserved....

  3. A new inorganic atmospheric aerosol phase equilibrium model (UHAERO)

    Directory of Open Access Journals (Sweden)

    N. R. Amundson

    2006-01-01

    Full Text Available A variety of thermodynamic models have been developed to predict inorganic gas-aerosol equilibrium. To achieve computational efficiency a number of the models rely on a priori specification of the phases present in certain relative humidity regimes. Presented here is a new computational model, named UHAERO, that is both efficient and rigorously computes phase behavior without any a priori specification. The computational implementation is based on minimization of the Gibbs free energy using a primal-dual method, coupled to a Newton iteration. The mathematical details of the solution are given elsewhere. The model computes deliquescence behavior without any a priori specification of the relative humidities of deliquescence. Also included in the model is a formulation based on classical theory of nucleation kinetics that predicts crystallization behavior. Detailed phase diagrams of the sulfate/nitrate/ammonium/water system are presented as a function of relative humidity at 298.15 K over the complete space of composition.

  4. Electron-Impact Excitation Cross Sections for Modeling Non-Equilibrium Gas

    Science.gov (United States)

    Huo, Winifred M.; Liu, Yen; Panesi, Marco; Munafo, Alessandro; Wray, Alan; Carbon, Duane F.

    2015-01-01

    In order to provide a database for modeling hypersonic entry in a partially ionized gas under non-equilibrium, the electron-impact excitation cross sections of atoms have been calculated using perturbation theory. The energy levels covered in the calculation are retrieved from the level list in the HyperRad code. The downstream flow-field is determined by solving a set of continuity equations for each component. The individual structure of each energy level is included. These equations are then complemented by the Euler system of equations. Finally, the radiation field is modeled by solving the radiative transfer equation.

  5. Specification, Verification and Optimisation of Business Processes

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas

    is extended with stochastic branching, message passing and reward annotations which allow for the modelling of resources consumed during the execution of a business process. Further, it is shown how this structure can be used to formalise the established business process modelling language Business Process...... fault tree analysis and the automated optimisation of business processes by means of an evolutionary algorithm. This work is motivated by problems that stem from the healthcare sector, and examples encountered in this field are used to illustrate these developments....

  6. Comparison of a model vapor deposited glass films to equilibrium glass films

    Science.gov (United States)

    Flenner, Elijah; Berthier, Ludovic; Charbonneau, Patrick; Zamponi, Francesco

    Vapor deposition of particles onto a substrate held at around 85% of the glass transition temperature can create glasses with increased density, enthalpy, kinetic stability, and mechanical stability compared to an ordinary glass created by cooling. It is estimated that an ordinary glass would need to age thousands of years to reach the kinetic stability of a vapor deposited glass, and a natural question is how close to the equilibrium is the vapor deposited glass. To understand the process, algorithms akin to vapor deposition are used to create simulated glasses that have a higher kinetic stability than their annealed counterpart, although these glasses may not be well equilibrated either. Here we use novel models optimized for a swap Monte Carlo algorithm in order to create equilibrium glass films and compare their properties with those of glasses obtained from vapor deposition algorithms. This approach allows us to directly assess the non-equilibrium nature of vapor-deposited ultrastable glasses. Simons Collaboration on Cracking the Glass Problem and NSF Grant No. DMR 1608086.
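
    To illustrate the swap Monte Carlo idea mentioned above, the following Python sketch shows a single Metropolis swap move in which two particles exchange their diameters; the energy routine, particle arrays and inverse temperature are illustrative assumptions, not the authors' code.

        import numpy as np

        def try_swap(pos, diam, i, j, beta, pair_energy):
            """Metropolis swap move: exchange the diameters of particles i and j.
            pair_energy(pos, diam, k) must return the interaction energy of
            particle k with all other particles for the current diameters."""
            e_old = pair_energy(pos, diam, i) + pair_energy(pos, diam, j)
            diam[i], diam[j] = diam[j], diam[i]            # trial move: swap sizes
            e_new = pair_energy(pos, diam, i) + pair_energy(pos, diam, j)
            if np.random.rand() < np.exp(-beta * (e_new - e_old)):
                return True                                # accept and keep the swap
            diam[i], diam[j] = diam[j], diam[i]            # reject: restore sizes
            return False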

  7. A supportive architecture for CFD-based design optimisation

    Science.gov (United States)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In the paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided, and the result has shown that the proposed architecture

  8. Modeling of the equilibrium of a tokamak plasma

    International Nuclear Information System (INIS)

    Grandgirard, V.

    1999-12-01

    The simulation and the control of a plasma discharge in a tokamak require an efficient and accurate solution of the equilibrium, because this equilibrium needs to be recalculated every microsecond to simulate discharges that can last up to 1000 seconds. The purpose of this thesis is to propose numerical methods to calculate these equilibria with acceptable computer time and memory size. Chapter 1 deals with the hydrodynamic equations and sets up the problem. Chapter 2 gives a method to take into account the boundary conditions. Chapter 3 is dedicated to optimising the inversion of the system matrix. This matrix being quasi-symmetric, the Woodbury method combined with the Cholesky method has been used. This direct method has been compared with 2 iterative methods: GMRES (generalized minimal residual) and BCG (bi-conjugate gradient). The last 2 chapters study the control of the plasma equilibrium; this work is presented in the formalism of optimal control of distributed systems and leads to non-linear state equations and quadratic functionals that are solved numerically by a sequential quadratic method. This method is based on replacing the initial problem with a series of control problems involving linear state equations. (A.C.)
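
    As an illustration of the Woodbury-plus-Cholesky idea mentioned above (not the code developed in the thesis), the following Python sketch solves a quasi-symmetric system, i.e. a symmetric positive-definite matrix plus a low-rank non-symmetric correction; the sizes and matrices are arbitrary test data.

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        def solve_quasi_symmetric(A_sym, U, V, b):
            """Solve (A_sym + U @ V.T) x = b where A_sym is symmetric positive
            definite and U @ V.T is a low-rank non-symmetric correction:
            Cholesky factorisation of A_sym plus the Woodbury identity."""
            c = cho_factor(A_sym)
            Ainv_b = cho_solve(c, b)
            Ainv_U = cho_solve(c, U)
            capacitance = np.eye(U.shape[1]) + V.T @ Ainv_U     # small k x k system
            return Ainv_b - Ainv_U @ np.linalg.solve(capacitance, V.T @ Ainv_b)

        # arbitrary test data
        rng = np.random.default_rng(0)
        n, k = 200, 3
        M = rng.standard_normal((n, n))
        A_sym = M @ M.T + n * np.eye(n)
        U, V = rng.standard_normal((n, k)), rng.standard_normal((n, k))
        b = rng.standard_normal(n)
        x = solve_quasi_symmetric(A_sym, U, V, b)
        print(np.allclose((A_sym + U @ V.T) @ x, b))            # residual check

    For a comparison with iterative methods of the kind made in the thesis, scipy.sparse.linalg.gmres or bicg could be run on the same assembled matrix.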

  9. Optimisation of energy absorbing liner for equestrian helmets. Part II: Functionally graded foam liner

    International Nuclear Information System (INIS)

    Cui, L.; Forero Rueda, M.A.; Gilchrist, M.D.

    2009-01-01

    The energy absorbing liner of safety helmets was optimised using finite element modelling. In this present paper, a functionally graded foam (FGF) liner was modelled, while keeping the average liner density the same as in a corresponding reference single uniform density liner model. Use of a functionally graded foam liner would eliminate issues regarding delamination and crack propagation between interfaces of different density layers which could arise in liners with discrete density variations. As in our companion Part I paper [Forero Rueda MA, Cui L, Gilchrist MD. Optimisation of energy absorbing liner for equestrian helmets. Part I: Layered foam liner. Mater Des [submitted for publication

  10. Development of a modified equilibrium model for biomass pilot-scale fluidized bed gasifier performance predictions

    International Nuclear Information System (INIS)

    Rodriguez-Alejandro, David A.; Nam, Hyungseok; Maglinao, Amado L.; Capareda, Sergio C.; Aguilera-Alvarado, Alberto F.

    2016-01-01

    The objective of this work is to develop a thermodynamic model considering non-stoichiometric restrictions. The model was validated against experimental work using a bench-scale fluidized bed gasifier with wood chips, dairy manure, and sorghum. The model was then used for a parametric study to predict the performance of a pilot-scale fluidized bed biomass gasifier. Gibbs free energy minimization was applied in the modified equilibrium model, considering heat loss to the surroundings, carbon efficiency, and two non-equilibrium factors based on empirical correlations of ER and gasification temperature. The model was in good agreement with the measured produced gas (RMS < 4). The parametric study ranges were 0.01 < ER < 0.99 and 500 °C < T < 900 °C, to predict syngas concentrations and the LHV (lower heating value) for the optimization. Tar from WC gasification contained more aromatics than tar from manure gasification. A wood gasification tar simulation was produced to predict the amount of tar at specific conditions. The operating conditions for the highest quality syngas were reconciled experimentally with three biomass wastes using a fluidized bed gasifier. The thermodynamic model was used to predict the gasification performance at conditions beyond the actual operation. - Highlights: • Syngas from experimental gasification was used to create a non-equilibrium model. • Different types of biomass (HTS, DM, and WC) were used for gasification modelling. • Different tar compositions were identified with a simulation of tar yields. • The optimum operating conditions were found through the developed model.
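
    The core of such a model is minimisation of the Gibbs free energy subject to elemental balances. The Python sketch below shows only that equilibrium core for a small CO/CO2/H2/H2O/CH4 syngas system; the standard chemical potentials are rough placeholder values, and the paper's non-equilibrium correction factors, heat loss and carbon efficiency are not included.

        import numpy as np
        from scipy.optimize import minimize

        R = 8.314  # J/(mol K)
        species = ["CO", "CO2", "H2", "H2O", "CH4"]
        A = np.array([[1, 1, 0, 0, 1],    # C atoms per molecule
                      [0, 0, 2, 2, 4],    # H atoms
                      [1, 2, 0, 1, 0]])   # O atoms
        # rough standard chemical potentials near 1000 K (J/mol) -- illustrative placeholders only
        mu0 = np.array([-200e3, -396e3, 0.0, -192e3, 19e3])

        def gibbs(n, T=1073.0, P=1.0):
            """Dimensionless total Gibbs energy G/RT of an ideal-gas mixture."""
            n = np.maximum(n, 1e-12)
            return np.sum(n * (mu0 / (R * T) + np.log(n * P / n.sum())))

        def equilibrate(n0):
            b = A @ n0                                  # elemental inventory to conserve
            cons = {"type": "eq", "fun": lambda n: A @ n - b}
            res = minimize(gibbs, n0, method="SLSQP",
                           bounds=[(1e-12, None)] * len(n0), constraints=cons)
            return res.x

        n_eq = equilibrate(np.full(5, 0.2))
        print(dict(zip(species, n_eq.round(4))))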

  11. Identification of the mechanical behaviour of biopolymer composites using multistart optimisation technique

    KAUST Repository

    Brahim, Elhacen

    2013-10-01

    This paper aims at identifying the mechanical behaviour of starch-zein composites as a function of zein content using a novel optimisation technique. Starting from bending experiments, force-deflection response is used to derive adequate mechanical parameters representing the elastic-plastic behaviour of the studied material. For such a purpose, a finite element model is developed accounting for a simple hardening rule, namely isotropic hardening model. A deterministic optimisation strategy is implemented to provide rapid matching between parameters of the constitutive law and the observed behaviour. Results are discussed based on the robustness of the numerical approach and predicted tendencies with regards to the role of zein content. © 2013 Elsevier Ltd.
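
    A generic multistart identification loop of the kind referred to in the title can be sketched as follows; the residual function standing in for the finite element bending model, the parameter bounds and the local optimiser are assumptions for illustration, not the authors' exact strategy.

        import numpy as np
        from scipy.optimize import minimize

        def multistart_fit(residual, bounds, n_starts=20, seed=0):
            """Run a local least-squares fit from many random starting points and
            keep the best result. residual(params) should return model - experiment."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            best = None
            for _ in range(n_starts):
                x0 = lo + rng.random(len(lo)) * (hi - lo)
                res = minimize(lambda p: np.sum(residual(p) ** 2), x0,
                               bounds=list(zip(lo, hi)), method="L-BFGS-B")
                if best is None or res.fun < best.fun:
                    best = res
            return best

        # Hypothetical usage: fit an elastic modulus and a yield stress to a measured
        # force-deflection curve, with force_model standing in for the FE simulation.
        # best = multistart_fit(lambda p: force_model(p) - force_measured,
        #                       bounds=[(0.1e9, 5e9), (1e6, 50e6)])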

  12. Design and optimisation of dual-mode heat pump systems using natural fluids

    International Nuclear Information System (INIS)

    Zhang Wenling; Klemeš, Jiří Jaromír; Kim, Jin-Kuk

    2012-01-01

    The paper introduces a new multi-period modelling and design methodology for dual-mode heat pumps using natural fluids. First, a mathematical model is developed to capture the thermodynamic and operating characteristics of dual-mode heat pump systems, subject to different ambient temperatures. The multi-period optimisation framework has been developed to reflect different ambient conditions and their influence on heat pump performance, as well as to determine a heat pump system capacity that allows systematic economic trade-offs between supplementary heating (or cooling) and heat pump operating cost. A case study considering three geographical locations with different heating and cooling demands is presented to illustrate the importance of using multi-period optimisation for the design of heat pump systems.

  13. Observation of non-chemical equilibrium effect on Ar-CO2-H2 thermal plasma model by changing pressure

    International Nuclear Information System (INIS)

    Al-Mamun, Sharif Abdullah; Tanaka, Yasunori; Uesugi, Yoshihiko

    2009-01-01

    The authors developed a two-dimensional one-temperature chemical non-equilibrium (1T-NCE) model of Ar-CO2-H2 inductively coupled thermal plasmas (ICTP) to investigate the effect of pressure variation. The basic concept of the one-temperature model is that electrons and heavy particles are treated with the same energy conservation equation. The energy conservation equations consider reaction heat effects and energy transfer among the species produced, as well as the enthalpy flow resulting from diffusion. Chemical non-equilibrium effects were taken into account by assuming twenty-two (22) different particles in this model and solving mass conservation equations for each particle, considering diffusion, convection and net production terms resulting from one hundred and ninety-eight (198) chemical reactions. Transport and thermodynamic properties of Ar-CO2-H2 thermal plasmas were self-consistently calculated using the first-order approximation of the Chapman-Enskog method. Finally, results obtained at atmospheric pressure (760 Torr) and at reduced pressure (500, 300 Torr) were compared with results from a one-temperature chemical equilibrium (1T-CE) model. This comparison supported the discussion of chemical non-equilibrium effects in inductively coupled thermal plasmas (ICTP).

  14. Approach to transverse equilibrium in axial channeling

    International Nuclear Information System (INIS)

    Fearick, R.W.

    2000-01-01

    Analytical treatments of channeling rely on the assumption of equilibrium on the transverse energy shell. The approach to equilibrium, and the nature of the equilibrium achieved, is examined using solutions of the equations of motion in the continuum multi-string model. The results show that the motion is chaotic in the absence of dissipative processes, and a complicated structure develops in phase space which prevents the development of the simple equilibrium usually assumed. The role of multiple scattering in smoothing out the equilibrium distribution is investigated

  15. Equilibrium modeling of the TFCX poloidal field coil system

    International Nuclear Information System (INIS)

    Strickler, D.J.; Miller, J.B.; Rothe, K.E.; Peng, Y.K.M.

    1984-04-01

    The Toroidal Fusion Core Experiment (TFCX) is proposed to be an ignition device with a low safety factor (q approx. = 2.0), rf or rf-assisted startup, long inductive burn pulse (approx. 300 s), and an elongated plasma cross section (kappa = 1.6) with moderate triangularity (delta = 0.3). System trade studies have been carried out to assist in choosing an appropriate candidate for the TFCX conceptual design. This report describes an important element in these system studies - the magnetohydrodynamic (MHD) equilibrium modeling of the TFCX poloidal field (PF) coil system and its impact on the choice of machine size. Reference design points for the all-superconducting toroidal field (TF) coil (TFCX-S) and hybrid (TFCX-H) options are presented that satisfy given PF system criteria, including volt-second requirements during burn, mechanical configuration constraints, maximum field constraints at the superconducting PF coils, and plasma shape parameters. Poloidal coil current waveforms for the TFCX-S and TFCX-H reference designs consistent with the equilibrium requirements of the plasma startup, heating, and burn phases of a typical discharge scenario are calculated. Finally, a possible option for quasi-steady-state operation is discussed

  16. Optimisation of X-ray examinations: General principles and an Irish perspective

    International Nuclear Information System (INIS)

    Matthews, Kate; Brennan, Patrick C.

    2009-01-01

    In Ireland, the European Medical Exposures Directive [Council Directive 97/43] was enacted into national law in Statutory Instrument 478 of 2002. This series of three review articles discusses the status of justification and optimisation of X-ray examinations nationally, and progress with the establishment of Irish diagnostic reference levels. In this second article, literature relating to optimisation issues arising in SI 478 of 2002 is reviewed. Optimisation associated with X-ray equipment and optimisation during day-to-day practice are considered. Optimisation proposals found in published research are summarised, and indicate the complex nature of optimisation. A paucity of current, research-based guidance documentation is identified. This is needed in order to support a range of professional staff in their practical implementation of optimisation.

  17. A constriction factor based particle swarm optimisation algorithm to solve the economic dispatch problem including losses

    Energy Technology Data Exchange (ETDEWEB)

    Young, Steven; Montakhab, Mohammad; Nouri, Hassan

    2011-07-15

    Economic dispatch (ED) is one of the most important problems to be solved in power generation, as fractional percentage fuel reductions represent significant cost savings. ED seeks to optimise the power generated by each generating unit in a system in order to find the minimum operating cost at a required load demand, whilst ensuring both equality and inequality constraints are met. For the process of optimisation, a model must be created for each generating unit. The particle swarm optimisation technique is an evolutionary computation technique and one of the most powerful methods for solving global optimisation problems. The aim of this paper is to add a constriction factor to the particle swarm optimisation algorithm (CFBPSO). Results show that the algorithm is very good at solving the ED problem; since CFBPSO must be able to work in a practical environment, valve point effects with transmission losses should be included in future work.
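
    A minimal Python sketch of particle swarm optimisation with Clerc's constriction factor, applied to a toy three-unit dispatch cost; the cost coefficients and power-balance penalty are made up for illustration, and valve-point effects and losses are omitted.

        import numpy as np

        def cfbpso(cost, dim, lb, ub, n_particles=40, iters=200, seed=1):
            """Particle swarm optimisation with Clerc's constriction factor.
            c1 = c2 = 2.05 gives phi = 4.1 and chi ~ 0.7298."""
            c1 = c2 = 2.05
            phi = c1 + c2
            chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))
            rng = np.random.default_rng(seed)
            x = lb + rng.random((n_particles, dim)) * (ub - lb)
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
                x = np.clip(x + v, lb, ub)
                f = np.apply_along_axis(cost, 1, x)
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()

        def toy_dispatch_cost(p, demand=850.0):
            """Quadratic fuel costs plus a power-balance penalty (made-up coefficients)."""
            a = np.array([0.0016, 0.0048, 0.0019])
            b = np.array([7.92, 7.97, 7.85])
            c = np.array([561.0, 78.0, 310.0])
            return np.sum(a * p ** 2 + b * p + c) + 1e4 * abs(p.sum() - demand)

        best_p, best_cost = cfbpso(toy_dispatch_cost, dim=3,
                                   lb=np.array([100.0, 50.0, 100.0]),
                                   ub=np.array([600.0, 200.0, 400.0]))
        print(best_p, best_cost)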

  18. Optimising culture medium for producing the yeast Pichia onychis (Lv027

    Directory of Open Access Journals (Sweden)

    Andrés Díaz

    2005-01-01

    Full Text Available Optimising Pichia onychis yeast biomass production was evaluated using different substrates and different physicochemical conditions for liquid fermentation. The Plackett-Burman statistical design was initially applied for screening the most important nutritional variables (three carbon sources and eight nitrogen sources) affecting yeast biomass production. Four nutritional sources and two physicochemical variables were subsequently evaluated using a fractional factorial design as the starting point for optimising the process by applying a central composite rotational design. The results obtained from employing a polynomial regression model using the experimental data showed that biomass production was strongly affected by nutritional and physicochemical conditions. The highest yield was obtained under the following conditions: 43,42 g/L carbon source, 0,261 g/L organic nitrogen source, shaking at 110 rpm, pH 6,0 and 48 h total fermentation time, during which 8,95 × 10^9 cells/mL were obtained, equivalent to 6,30 g/L dry biomass. Key words: Pichia onychis, optimisation, liquid fermentation.

  19. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    Science.gov (United States)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented and detailed descriptions of the grid model and the used grid data, which partly originates from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that the conventional grid expansion is more efficient and implies more grid relieving effects than the evaluated grid optimisation measures.

  20. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    Science.gov (United States)

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper

    2015-05-01

    In this work, a two-dimensional computational fluid dynamics study is reported of an n-heptane combustion event and the associated soot formation process in a constant volume combustion chamber. The key interest here is to evaluate the sensitivity of the chemical kinetics and submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from those in the cases with ambient temperature lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the implementation of optimised model, the simulated soot onset and transport phenomena before reaching quasi-steady state agree reasonably well with the experimental observation. Also, variation of spatial soot distribution and soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0% for both low and high density conditions are reproduced.

  1. Modeling chromatographic columns. Non-equilibrium packed-bed adsorption with non-linear adsorption isotherms

    NARCIS (Netherlands)

    Özdural, A.R.; Alkan, A.; Kerkhof, P.J.A.M.

    2004-01-01

    In this work a new mathematical model, based on non-equilibrium conditions, describing the dynamic adsorption of proteins in columns packed with spherical adsorbent particles is used to study the performance of chromatographic systems. Simulations of frontal chromatography, including axial

  2. Numerical analysis and optimisation of heavy water upgrading column

    International Nuclear Information System (INIS)

    Sankar, Rama; Ghosh, Brindaban; Bhanja, K.

    2013-01-01

    In the 'Pressurised Heavy Water' type of reactors, heavy water is used both as moderator and coolant. During operation of the reactor, downgraded heavy water is generated that needs to be upgraded for reuse in the reactor. When the isotopic purity of heavy water becomes less than 99.75%, it is termed downgraded heavy water. Downgraded heavy water also contains impurities such as corrosion products, dirt, oil, etc. Upgradation of downgraded heavy water is normally done in two steps: (i) Purification: in this step downgraded heavy water is first purified to remove corrosion products, dirt, oil, etc.; and (ii) Upgradation of heavy water to increase its isotopic purity: this step is carried out by vacuum distillation of the downgraded heavy water after purification. This project is aimed at mathematical modelling and numerical simulation of a heavy water upgrading column. Modelling and simulation studies of the upgradation column are based on an equilibrium stage model to evaluate the effect of feed location, pressure, feed composition and reflux ratio in the packed column for given reboiler and condenser duties of the distillation column. Stage-to-stage modelling of the two-phase, two-component flow constitutes the overall modelling of the column. The governing equations consist of stage-wise species and overall mass continuity and stage-wise energy balance. This results in a tridiagonal matrix equation for the stage liquid fractions of heavy and light water. The stage-wise liquid flow rates and temperatures are governed by stage-wise mass and energy balances. The combined form of the corresponding governing equations, with the incorporation of thermodynamic equations of state, forms a system of nonlinear equations. This system has been solved numerically using a modified Newton-Raphson method. A code has been developed on the MATLAB platform based on the above numerical procedure. The optimisation of the column operating conditions is to be carried out based on parametric studies and analysis of different
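
    The stage-wise component balances described above lead to tridiagonal systems. A standard Thomas-algorithm sketch for such a system is shown below in Python; the report's own code is in MATLAB, and this is only a generic illustration of the tridiagonal solve, not of the modified Newton-Raphson outer iteration.

        import numpy as np

        def thomas(lower, diag, upper, rhs):
            """Solve a tridiagonal system (Thomas algorithm), as arises for the
            stage-wise component balances of an equilibrium-stage column model.
            lower[0] and upper[-1] are unused."""
            n = len(diag)
            d = np.array(diag, dtype=float)
            r = np.array(rhs, dtype=float)
            for i in range(1, n):                       # forward elimination
                w = lower[i] / d[i - 1]
                d[i] -= w * upper[i - 1]
                r[i] -= w * r[i - 1]
            x = np.empty(n)
            x[-1] = r[-1] / d[-1]
            for i in range(n - 2, -1, -1):              # back substitution
                x[i] = (r[i] - upper[i] * x[i + 1]) / d[i]
            return x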

  3. Modeling equilibrium adsorption of organic micropollutants onto activated carbon

    KAUST Repository

    De Ridder, David J.

    2010-05-01

    Solute hydrophobicity, polarizability, aromaticity and the presence of H-bond donor/acceptor groups have been identified as important solute properties that affect the adsorption on activated carbon. However, the adsorption mechanisms related to these properties occur in parallel, and their respective dominance depends on the solute properties as well as carbon characteristics. In this paper, a model based on multivariate linear regression is described that was developed to predict equilibrium carbon loading on a specific activated carbon (F400) for solutes reflecting a wide range of solute properties. In order to improve prediction accuracy, groups (bins) of solutes with similar solute properties were defined and solute removals were predicted for each bin separately. With these individual linear models, coefficients of determination (R2) values ranging from 0.61 to 0.84 were obtained. With the mechanistic approach used in developing this predictive model, a strong relation with adsorption mechanisms is established, improving the interpretation and, ultimately, acceptance of the model. © 2010 Elsevier Ltd.
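
    A minimal sketch of the per-bin multivariate linear regression step described above; the descriptor columns (for example hydrophobicity, polarizability, aromaticity and H-bond donor/acceptor counts) and the log-transformed carbon loading are assumptions about the data layout, not the authors' exact formulation.

        import numpy as np

        def fit_bin_model(X, log_q):
            """Ordinary least-squares fit of log carbon loading against solute
            descriptors for one bin of solutes with similar properties.
            Returns the coefficients (intercept first) and the R^2 value."""
            Xb = np.column_stack([np.ones(len(X)), X])        # add intercept column
            coef, *_ = np.linalg.lstsq(Xb, log_q, rcond=None)
            pred = Xb @ coef
            ss_res = np.sum((log_q - pred) ** 2)
            ss_tot = np.sum((log_q - log_q.mean()) ** 2)
            return coef, 1.0 - ss_res / ss_tot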

  4. Optimised performance of a plug-in electric vehicle aggregator in energy and reserve markets

    International Nuclear Information System (INIS)

    Shafie-khah, M.; Moghaddam, M.P.; Sheikh-El-Eslami, M.K.; Catalão, J.P.S.

    2015-01-01

    Highlights: • A new model is developed to optimise the performance of a PEV aggregator in the power market. • The PEV aggregator can combine the PEVs and manage the charge/discharge of their batteries. • A new approach to calculate the satisfaction/motivation of PEV owners is proposed. • Several uncertainties are taken into account using a two-stage stochastic programming approach. • The proposed model is proficient in significantly improving the short- and long-term behaviour. - Abstract: In this paper, a new model is developed to optimise the performance of a plug-in Electric Vehicle (EV) aggregator in electricity markets, considering both short- and long-term horizons. The EV aggregator, as a new player in the power market, can aggregate the EVs and manage the charge/discharge of their batteries. The aggregator maximises the profit and optimises EV owners’ revenue by applying changes in tariffs to compete with other market players for retaining current customers and acquiring new owners. On this basis, a new approach to calculate the satisfaction/motivation of EV owners and their market participation is proposed in this paper. Moreover, the behaviour of owners in selecting their supplying company is considered. The aggregator optimises the self-scheduling programme and submits the best bidding/offering strategies to the day-ahead and real-time markets. To achieve this purpose, the day-ahead and real-time energy and reserve markets are modelled as oligopoly markets, in contrast with previous works that utilised perfectly competitive ones. Furthermore, several uncertainties and constraints are taken into account using a two-stage stochastic programming approach, which has not been addressed in previous works. The numerical studies show the effectiveness of the proposed model

  5. Thermodynamic evolution far from equilibrium

    Science.gov (United States)

    Khantuleva, Tatiana A.

    2018-05-01

    The presented model of thermodynamic evolution of an open system far from equilibrium is based on the modern results of nonequilibrium statistical mechanics, the nonlocal theory of nonequilibrium transport developed by the author and the Speed Gradient principle introduced in the theory of adaptive control. Transition to a description of the system internal structure evolution at the mesoscopic level allows a new insight at the stability problem of non-equilibrium processes. The new model is used in a number of specific tasks.

  6. Comparison of Thermodynamic and Transport Property Models for Computing Equilibrium High Enthalpy Flows

    Science.gov (United States)

    Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik

    2017-11-01

    To study the flow of high temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and the curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and the curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that the steady state solutions from the state equation model and the curve fits match each other. Though the curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.
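
    Wilke's mixing rule, named above, has a standard textbook form that can be sketched as follows; this is a generic implementation for illustration, not the solver used in the study.

        import numpy as np

        def wilke_viscosity(x, mu, M):
            """Wilke mixing rule for the viscosity of a gas mixture.
            x: mole fractions, mu: pure-species viscosities, M: molar masses."""
            x, mu, M = map(np.asarray, (x, mu, M))
            n = len(x)
            phi = np.empty((n, n))
            for i in range(n):
                for j in range(n):
                    num = (1.0 + np.sqrt(mu[i] / mu[j]) * (M[j] / M[i]) ** 0.25) ** 2
                    den = np.sqrt(8.0 * (1.0 + M[i] / M[j]))
                    phi[i, j] = num / den
            return np.sum(x * mu / (phi @ x))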

  7. Numerical equilibrium analysis for structured consumer resource models.

    Science.gov (United States)

    de Roos, A M; Diekmann, O; Getto, P; Kirkilionis, M A

    2010-02-01

    In this paper, we present methods for a numerical equilibrium and stability analysis for models of a size structured population competing for an unstructured resource. We concentrate on cases where two model parameters are free, and thus existence boundaries for equilibria and stability boundaries can be defined in the (two-parameter) plane. We numerically trace these implicitly defined curves using alternating tangent prediction and Newton correction. Evaluation of the maps defining the curves involves integration over individual size and individual survival probability (and their derivatives) as functions of individual age. Such ingredients are often defined as solutions of ODE, i.e., in general only implicitly. In our case, the right-hand sides of these ODE feature discontinuities that are caused by an abrupt change of behavior at the size where juveniles are assumed to turn adult. So, we combine the numerical solution of these ODE with curve tracing methods. We have implemented the algorithms for "Daphnia consuming algae" models in C-code. The results obtained by way of this implementation are shown in the form of graphs.

  8. First-principles atomistic Wulff constructions for an equilibrium rutile TiO2 shape modeling

    Science.gov (United States)

    Jiang, Fengzhou; Yang, Lei; Zhou, Dali; He, Gang; Zhou, Jiabei; Wang, Fanhou; Chen, Zhi-Gang

    2018-04-01

    Identifying the exposed surfaces of the rutile TiO2 crystal is crucial for its industrial applications and surface engineering. In this study, the shape of rutile TiO2 was constructed by applying the equilibrium thermodynamics of TiO2 crystals via first-principles density functional theory (DFT) and Wulff principles. From the DFT calculations, the surface energies of six low-index stoichiometric facets of TiO2 are determined after calibration of the crystal structure. Then, combining the surface energy calculations and Wulff principles, a geometric model of equilibrium rutile TiO2 is built up, which is consistent with the typical morphology of a fully-developed equilibrium TiO2 crystal. This study provides fundamental theoretical guidance for the surface analysis and surface modification of rutile TiO2-based materials, from experimental research to industrial manufacturing.

  9. Modeling of the (liquid + liquid) equilibrium of polydisperse hyperbranched polymer solutions by lattice-cluster theory

    International Nuclear Information System (INIS)

    Enders, Sabine; Browarzik, Dieter

    2014-01-01

    Graphical abstract: - Highlights: • Calculation of the (liquid + liquid) equilibrium of hyperbranched polymer solutions. • Description of branching effects by the lattice-cluster theory. • Consideration of self- and cross-association by chemical association models. • Treatment of the molar-mass polydispersity by the use of continuous thermodynamics. • Improvement of the theoretical results by the incorporation of polydispersity. - Abstract: The (liquid + liquid) equilibrium of solutions of hyperbranched polymers of the Boltorn type is modeled in the framework of lattice-cluster theory. The association effects are described by the chemical association models CALM (for self-association) and ECALM (for cross-association). For the first time the molar mass polydispersity of the hyperbranched polymers is taken into account. For this purpose continuous thermodynamics is applied. Because the segment-molar excess Gibbs free energy depends on the number average of the segment number of the polymer, the treatment is more general than in previous papers on continuous thermodynamics. The polydispersity is described by a generalized Schulz–Flory distribution. The calculation of the cloud-point curve reduces to two equations that have to be solved numerically. Conditions for the calculation of the spinodal curve and of the critical point are derived. The calculated results are compared to experimental data taken from the literature. For Boltorn solutions in non-polar solvents the polydispersity influence is small. In all of the other considered cases, polydispersity influences the (liquid + liquid) equilibrium considerably. However, association and polydispersity influence phase equilibrium in a complex manner. Taking polydispersity into account, the accuracy of the calculations is improved, especially in the dilute region

  10. Optimisation by hierarchical search

    Science.gov (United States)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.

  11. The Extended Generalized Cost Concept and its Application in Freight Transport and General Equilibrium Modeling

    NARCIS (Netherlands)

    Tavasszy, L.; Davydenko, I.; Ruijgrok, K.

    2009-01-01

    The integration of Spatial Equilibrium models and Freight transport network models is important to produce consistent scenarios for future freight transport demand. At various spatial scales, we see the changes in production, trade, logistics networking and transportation, being driven by

  12. Predicting paddlefish roe yields using an extension of the Beverton–Holt equilibrium yield-per-recruit model

    Science.gov (United States)

    Colvin, M.E.; Bettoli, Phillip William; Scholten, G.D.

    2013-01-01

    Equilibrium yield models predict the total biomass removed from an exploited stock; however, traditional yield models must be modified to simulate roe yields because a linear relationship between age (or length) and mature ovary weight does not typically exist. We extended the traditional Beverton-Holt equilibrium yield model to predict roe yields of Paddlefish Polyodon spathula in Kentucky Lake, Tennessee-Kentucky, as a function of varying conditional fishing mortality rates (10-70%), conditional natural mortality rates (cm; 9% and 18%), and four minimum size limits ranging from 864 to 1,016mm eye-to-fork length. These results were then compared to a biomass-based yield assessment. Analysis of roe yields indicated the potential for growth overfishing at lower exploitation rates and smaller minimum length limits than were suggested by the biomass-based assessment. Patterns of biomass and roe yields in relation to exploitation rates were similar regardless of the simulated value of cm, thus indicating that the results were insensitive to changes in cm. Our results also suggested that higher minimum length limits would increase roe yield and reduce the potential for growth overfishing and recruitment overfishing at the simulated cm values. Biomass-based equilibrium yield assessments are commonly used to assess the effects of harvest on other caviar-based fisheries; however, our analysis demonstrates that such assessments likely underestimate the probability and severity of growth overfishing when roe is targeted. Therefore, equilibrium roe yield-per-recruit models should also be considered to guide the management process for caviar-producing fish species.
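
    The per-recruit skeleton behind such an assessment can be sketched as below, using instantaneous fishing and natural mortality rates for simplicity; the paper itself works with conditional mortality rates and a mature-ovary-weight schedule, so this is only a generic illustration of the yield-per-recruit idea, with a roe-yield variant obtained by passing ovary weight instead of body weight.

        import numpy as np

        def yield_per_recruit(F, M, ages, w_at_age, a_recruit):
            """Equilibrium yield per recruit with instantaneous fishing (F) and
            natural (M) mortality. w_at_age can be body weight (biomass yield) or
            mature ovary weight (roe yield); a_recruit is the first fished age."""
            ypr, N = 0.0, 1.0                           # start with one recruit
            for a, w in zip(ages, w_at_age):
                Fa = F if a >= a_recruit else 0.0
                Z = Fa + M
                ypr += N * w * (Fa / Z) * (1.0 - np.exp(-Z)) if Z > 0 else 0.0
                N *= np.exp(-Z)                         # survivorship to the next age
            return ypr

        # e.g. roe yield per recruit: pass the ovary weight-at-age, zero before maturity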

  13. Equilibrium Analysis of a Yellow Fever Dynamical Model with Vaccination

    Directory of Open Access Journals (Sweden)

    Silvia Martorano Raimundo

    2015-01-01

    Full Text Available We propose an equilibrium analysis of a dynamical model of yellow fever transmission in the presence of a vaccine. The model considers both human and vector populations. We found threshold parameters that affect the development of the disease and the infectious status of the human population in the presence of a vaccine whose protection may wane over time. In particular, we derived a threshold vaccination rate, above which the disease would be eradicated from the human population. We show that if the mortality rate of the mosquitoes is greater than a given threshold, then the disease is naturally (without intervention) eradicated from the population. In contrast, if the mortality rate of the mosquitoes is less than that threshold, then the disease is eradicated from the populations only when the growth rate of humans is less than another threshold; otherwise, the disease is eradicated only if the reproduction number of the infection after vaccination is less than 1. When this reproduction number is greater than 1, the disease will be eradicated from the human population if the vaccination rate is greater than a given threshold; otherwise, the disease will establish itself among humans, reaching a stable endemic equilibrium. The analysis presented in this paper can be useful, both to the better understanding of the disease dynamics and also for the planning of vaccination strategies.

  14. Out-of-equilibrium dynamics in a Gaussian trap model

    International Nuclear Information System (INIS)

    Diezemann, Gregor

    2007-01-01

    The violations of the fluctuation-dissipation theorem are analysed for a trap model with a Gaussian density of states. In this model, the system reaches thermal equilibrium for long times after a quench to any finite temperature and therefore all ageing effects are of a transient nature. For times not too long after the quench, it is found that the so-called fluctuation-dissipation ratio tends to a non-trivial limit, thus indicating the possibility for the definition of a timescale-dependent effective temperature. However, different definitions of the effective temperature yield distinct results. In particular, plots of the integrated response versus the correlation function strongly depend on the way they are constructed. Also the definition of effective temperatures in the frequency domain is not unique for the model considered. This may have some implications for the interpretation of results from computer simulations and experimental determinations of effective temperatures

  15. Interface model conditions for a non-equilibrium heat transfer model for conjugate fluid/porous/solid domains

    International Nuclear Information System (INIS)

    Betchen, L.J.; Straatman, A.G.

    2005-01-01

    A mathematical and numerical model for the treatment of conjugate fluid flow and heat transfer problems in domains containing pure fluid, porous, and pure solid regions has been developed. The model is general and physically reasoned, and allows for local thermal non-equilibrium in the porous region. The model is developed for implementation on a simple collocated finite volume grid. Of particular novelty are the conditions implemented at the interfaces between porous regions, and those containing a pure solid or pure fluid. The model is validated by simulation of a three-dimensional porous plug problem for which experimental results are available. (author)

  16. Analytical modeling of equilibrium of strongly anisotropic plasma in tokamaks and stellarators

    International Nuclear Information System (INIS)

    Lepikhin, N. D.; Pustovitov, V. D.

    2013-01-01

    Theoretical analysis of equilibrium of anisotropic plasma in tokamaks and stellarators is presented. The anisotropy is assumed strong, which includes the cases with essentially nonuniform distributions of plasma pressure on magnetic surfaces. Such distributions can arise at neutral beam injection or at ion cyclotron resonance heating. Then the known generalizations of the standard theory of plasma equilibrium that treat p‖ and p⊥ (parallel and perpendicular plasma pressures) as almost constant on magnetic surfaces are not applicable anymore. Explicit analytical prescriptions of the profiles of p‖ and p⊥ are proposed that allow modeling of the anisotropic plasma equilibrium even with large ratios of p‖/p⊥ or p⊥/p‖. A method for deriving the equation for the Shafranov shift is proposed that does not require introduction of the flux coordinates and calculation of the metric tensor. It is shown that for p⊥ with nonuniformity described by a single poloidal harmonic, the equation for the Shafranov shift coincides with a known one derived earlier for almost constant p⊥ on a magnetic surface. This does not happen in the other more complex case

  17. A Metastable Equilibrium Model for the Relative Abundances of Microbial Phyla in a Hot Spring

    Science.gov (United States)

    Dick, Jeffrey M.; Shock, Everett L.

    2013-01-01

    Many studies link the compositions of microbial communities to their environments, but the energetics of organism-specific biomass synthesis as a function of geochemical variables have rarely been assessed. We describe a thermodynamic model that integrates geochemical and metagenomic data for biofilms sampled at five sites along a thermal and chemical gradient in the outflow channel of the hot spring known as “Bison Pool” in Yellowstone National Park. The relative abundances of major phyla in individual communities sampled along the outflow channel are modeled by computing metastable equilibrium among model proteins with amino acid compositions derived from metagenomic sequences. Geochemical conditions are represented by temperature and activities of basis species, including pH and oxidation-reduction potential quantified as the activity of dissolved hydrogen. By adjusting the activity of hydrogen, the model can be tuned to closely approximate the relative abundances of the phyla observed in the community profiles generated from BLAST assignments. The findings reveal an inverse relationship between the energy demand to form the proteins at equal thermodynamic activities and the abundance of phyla in the community. The distance from metastable equilibrium of the communities, assessed using an equation derived from energetic considerations that is also consistent with the information-theoretic entropy change, decreases along the outflow channel. Specific divergences from metastable equilibrium, such as an underprediction of the relative abundances of phototrophic organisms at lower temperatures, can be explained by considering additional sources of energy and/or differences in growth efficiency. Although the metabolisms used by many members of these communities are driven by chemical disequilibria, the results support the possibility that higher-level patterns of chemotrophic microbial ecosystems are shaped by metastable equilibrium states that depend on both the

  18. Thermodynamic optimisation and analysis of four Kalina cycle layouts for high temperature applications

    International Nuclear Information System (INIS)

    Modi, Anish; Haglind, Fredrik

    2015-01-01

    The Kalina cycle has seen increased interest in the last few years as an efficient alternative to the conventional steam Rankine cycle. However, the available literature gives little information on the algorithms to solve or optimise this inherently complex cycle. This paper presents a detailed approach to solve and optimise a Kalina cycle for high temperature (a turbine inlet temperature of 500 °C) and high pressure (over 100 bar) applications using a computationally efficient solution algorithm. A central receiver solar thermal power plant with direct steam generation was considered as a case study. Four different layouts for the Kalina cycle based on the number and/or placement of the recuperators in the cycle were optimised and compared based on performance parameters such as the cycle efficiency and the cooling water requirement. The cycles were modelled in steady state and optimised with the maximisation of the cycle efficiency as the objective function. It is observed that the different cycle layouts result in different regions for the optimal value of the turbine inlet ammonia mass fraction. Out of the four compared layouts, the most complex layout KC1234 gives the highest efficiency. The cooling water requirement is closely related to the cycle efficiency, i.e., the better the efficiency, the lower is the cooling water requirement. - Highlights: • Detailed methodology for solving and optimising Kalina cycle for high temperature applications. • A central receiver solar thermal power plant with direct steam generation considered as a case study. • Four Kalina cycle layouts based on the placement of recuperators optimised and compared

  19. Equilibrium in a random viewer model of television broadcasting

    DEFF Research Database (Denmark)

    Hansen, Bodil Olai; Keiding, Hans

    2014-01-01

    The authors considered a model of commercial television market with advertising with probabilistic viewer choice of channel, where private broadcasters may coexist with a public television broadcaster. The broadcasters influence the probability of getting viewer attention through the amount...... number of channels. The authors derive properties of equilibrium in an oligopolistic market with private broadcasters and show that the number of firms has a negative effect on overall advertising and viewer satisfaction. If there is a public channel that also sells advertisements but does not maximize...... profits, this will have a positive effect on advertiser and viewer satisfaction....

  20. Model of opacity and emissivity of non-equilibrium plasma

    International Nuclear Information System (INIS)

    Politov V Y

    2008-01-01

    In this work, a model describing the absorption and emission properties of non-equilibrium plasma is presented. It is based on kinetic equations for the populations of the ground, singly and doubly excited states of multi-charged ions. After solving these equations, the state populations, together with the spectroscopic data supplied in a dedicated database for a large number of ionization stages, are used to build the spectral distributions of plasma opacity and emissivity in the STA approximation. Results of the kinetics simulation are presented for gold, an important X-ray converter that is being intensively investigated in ICF experiments.

  1. Deviations from mass transfer equilibrium and mathematical modeling of mixer-settler contactors

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Chung, H.F.; Bennett, J.E.

    1980-01-01

    This paper presents the mathematical basis for the computer model PUBG of mixer-settler contactors which accounts for deviations from mass transfer equilibrium. This is accomplished by formulating the mass balance equations for the mixers such that the mass transfer rate of nuclear materials between the aqueous and organic phases is accounted for. 19 refs

  2. Modified cuckoo search: A new gradient free optimisation algorithm

    International Nuclear Information System (INIS)

    Walton, S.; Hassan, O.; Morgan, K.; Brown, M.R.

    2011-01-01

    Highlights: → Modified cuckoo search (MCS) is a new gradient free optimisation algorithm. → MCS shows a high convergence rate, able to outperform other optimisers. → MCS is particularly strong at high dimension objective functions. → MCS performs well when applied to engineering problems. - Abstract: A new robust optimisation algorithm, which can be regarded as a modification of the recently developed cuckoo search, is presented. The modification involves the addition of information exchange between the top eggs, or the best solutions. Standard optimisation benchmarking functions are used to test the effects of these modifications and it is demonstrated that, in most cases, the modified cuckoo search performs as well as, or better than, the standard cuckoo search, a particle swarm optimiser, and a differential evolution strategy. In particular the modified cuckoo search shows a high convergence rate to the true global minimum even at high numbers of dimensions.
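
    A simplified Python sketch of a cuckoo-search optimiser with an extra information-exchange step between the best nests, in the spirit of the modification described above; the Levy step, step sizes and the exchange rule are generic illustrations, not the authors' exact algorithm.

        import numpy as np
        from math import gamma, sin, pi

        def levy_step(dim, rng, beta=1.5):
            """Levy-distributed step via Mantegna's algorithm."""
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

        def modified_cuckoo_search(cost, lb, ub, n_nests=25, iters=300, pa=0.25, seed=2):
            rng = np.random.default_rng(seed)
            dim = len(lb)
            nests = lb + rng.random((n_nests, dim)) * (ub - lb)
            fit = np.apply_along_axis(cost, 1, nests)
            for _ in range(iters):
                order = np.argsort(fit)
                best = nests[order[0]]
                for i in range(n_nests):               # Levy flights towards the best nest
                    trial = np.clip(nests[i] + 0.05 * levy_step(dim, rng) * (best - nests[i]), lb, ub)
                    f, j = cost(trial), rng.integers(n_nests)
                    if f < fit[j]:
                        nests[j], fit[j] = trial, f
                # information exchange between two of the top nests (simplified midpoint move)
                a, b = rng.choice(order[: n_nests // 4 + 1], 2, replace=False)
                trial = np.clip(0.5 * (nests[a] + nests[b]), lb, ub)
                f, worst = cost(trial), int(np.argmax(fit))
                if f < fit[worst]:
                    nests[worst], fit[worst] = trial, f
                for i in order[-int(pa * n_nests):]:   # abandon the worst nests
                    nests[i] = lb + rng.random(dim) * (ub - lb)
                    fit[i] = cost(nests[i])
            return nests[np.argmin(fit)], fit.min()

        # usage on a standard benchmark (sphere function)
        x_best, f_best = modified_cuckoo_search(lambda x: np.sum(x ** 2),
                                                lb=np.full(5, -5.0), ub=np.full(5, 5.0))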

  3. Phase equilibrium of North Sea oils with polar chemicals: Experiments and CPA modeling

    DEFF Research Database (Denmark)

    Frost, Michael Grynnerup; Kontogeorgis, Georgios M.; von Solms, Nicolas

    2016-01-01

    This work consists of a combined experimental and modeling study for oil - MEG - water systems, of relevance to petroleum applications. We present new experimental liquid-liquid equilibrium data for the mutual solubility of two North Sea oils + MEG and North Sea oils + MEG + water systems...

  4. Energy, economy and equity interactions in a CGE [Computable General Equilibrium] model for Pakistan

    International Nuclear Information System (INIS)

    Naqvi, Farzana

    1997-01-01

    In the last three decades, Computable General Equilibrium modelling has emerged as an established field of applied economics. This book presents a CGE model developed for Pakistan with the hope that it will lay down a foundation for the application of general equilibrium modelling to policy formation in Pakistan. As the country is being driven swiftly to become an open market economy, it becomes vital to find out the policy measures that can foster the objectives of economic planning, such as social equity, with the minimum loss of the efficiency gains from open market resource allocations. It is not possible to build a model for practical use that can do justice to all sectors of the economy in modelling their peculiar features. The CGE model developed in this book focuses on the energy sector. Energy is considered one of the basic needs and an essential input to economic growth. Hence, energy policy has multiple criteria to meet. In this book, a case study has been carried out to analyse energy pricing policy in Pakistan using this CGE model of energy, economy and equity interactions. Hence, the book also demonstrates how researchers can model the fine details of one sector given the core structure of a CGE model. (UK)

  5. A Quantal Response Statistical Equilibrium Model of Induced Technical Change in an Interactive Factor Market: Firm-Level Evidence in the EU Economies

    Directory of Open Access Journals (Sweden)

    Jangho Yang

    2018-02-01

    Full Text Available This paper studies the pattern of technical change at the firm level by applying and extending the Quantal Response Statistical Equilibrium model (QRSE. The model assumes that a large number of cost minimizing firms decide whether to adopt a new technology based on the potential rate of cost reduction. The firm in the model is assumed to have a limited capacity to process market signals so there is a positive degree of uncertainty in adopting a new technology. The adoption decision by the firm, in turn, makes an impact on the whole market through changes in the factor-price ratio. The equilibrium distribution of the model is a unimodal probability distribution with four parameters, which is qualitatively different from the Walrasian notion of equilibrium in so far as the state of equilibrium is not a single state but a probability distribution of multiple states. This paper applies Bayesian inference to estimate the unknown parameters of the model using the firm-level data of seven advanced OECD countries over eight years and shows that the mentioned equilibrium distribution from the model can satisfactorily recover the observed pattern of technical change.

  6. Optimisation of phase ratio in the triple jump using computer simulation.

    Science.gov (United States)

    Allen, Sam J; King, Mark A; Yeadon, M R Fred

    2016-04-01

    The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of: 13.79m, 13.87m, 13.95m, 14.05m, and 14.02m; and 14.01m, 14.02m, 13.97m, 13.84m, and 13.67m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Merchandise and Replenishment Planning Optimisation for Fashion Retail

    Directory of Open Access Journals (Sweden)

    Raffaele Iannone

    2013-08-01

    Full Text Available The integration among different company functions, collaborative planning and the elaboration of focused distribution plans are critical to the success of each kind of company working in the complex retail sector. In this context, the present work proposes the description of a model able to support coordinated strategic choices continually made by Supply Chain (SC) actors. The final objective is the full optimisation of the Merchandise & Replenishment Planning phases, identifying the right replenishment quantities and periods. To test the proposed model's effectiveness, it was applied to an important Italian fashion company in the complex field of fast-fashion, a sector in which promptness is a main competitive leverage and, therefore, the planning cannot exclude the time variable. The passage from a total push strategy, currently used by the company, to a push-pull one, suggested by the model, allowed us not only to estimate a reduction in goods quantities to purchase at the beginning of a sales period (with considerable economic savings), but also to elaborate a focused replenishment plan that permits reduction and optimisation of departures from network warehouses to Points of Sale (POS).

  8. Non-Equilibrium Heavy Flavored Hadron Yields from Chemical Equilibrium Strangeness-Rich QGP

    OpenAIRE

    Kuznetsova, Inga; Rafelski, Johann

    2008-01-01

    The yields of heavy flavored hadrons emitted from strangeness-rich QGP are evaluated within a chemical non-equilibrium statistical hadronization model, conserving strangeness, charm, and entropy yields at hadronization.

  9. Optimisation of milling parameters using neural network

    Directory of Open Access Journals (Sweden)

    Lipski Jerzy

    2017-01-01

    Full Text Available The purpose of this study was to design and test intelligent computer software developed to increase the average productivity of milling without compromising the design features of the final product. The developed system generates optimal milling parameters based on the extent of tool wear. The introduced optimisation algorithm employs a multilayer model of a milling process developed in an artificial neural network. The input parameters for model training are the following: cutting speed vc, feed per tooth fz and the degree of tool wear measured by means of localised flank wear (VB3). The output parameter is the surface roughness Ra of the machined surface. Since the model in the neural network exhibits good approximation of functional relationships, it was applied to determine optimal milling parameters under changing tool wear conditions (VB3) and to stabilise the surface roughness parameter Ra. Our solution enables constant control over surface roughness parameters and the productivity of the milling process after each assessment of tool condition. The recommended parameters, i.e. those which, when applied in milling, ensure the desired surface roughness and maximal productivity, are selected from all the parameters generated by the model. The developed software may constitute an expert system supporting a milling machine operator. In addition, the application may be installed on a mobile device (smartphone), connected to a tool wear diagnostics instrument and the machine tool controller, in order to supply updated optimal milling parameters. The presented solution facilitates tool life optimisation and reduces tool change costs, particularly during prolonged operation.
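
    A minimal sketch of the workflow this abstract describes: train a small neural network mapping (vc, fz, VB3) to Ra, then search cutting parameters at the current wear level subject to a roughness limit. The training data, network size and the productivity proxy below are illustrative assumptions, not the study's model or measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Neural-network surrogate Ra = f(vc, fz, VB3) used to pick milling parameters
# that maximise a simple material-removal proxy (vc * fz) while keeping the
# predicted roughness below a limit. The data are synthetic placeholders.
rng = np.random.default_rng(1)
X_train = rng.uniform([80.0, 0.05, 0.0], [250.0, 0.25, 0.3], size=(200, 3))  # vc, fz, VB3
Ra_train = (0.4 + 2.0 * X_train[:, 1] + 0.8 * X_train[:, 2]
            - 0.001 * X_train[:, 0] + rng.normal(0.0, 0.02, 200))             # synthetic Ra

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                                   random_state=0))
model.fit(X_train, Ra_train)

def recommend_parameters(vb3, ra_limit=0.7):
    """Grid-search vc and fz for a given tool wear VB3, keeping predicted Ra <= ra_limit."""
    best = None
    for vc in np.linspace(80.0, 250.0, 35):
        for fz in np.linspace(0.05, 0.25, 20):
            ra_pred = float(model.predict([[vc, fz, vb3]])[0])
            productivity = vc * fz
            if ra_pred <= ra_limit and (best is None or productivity > best["productivity"]):
                best = {"productivity": productivity, "vc": vc, "fz": fz, "Ra_pred": ra_pred}
    return best

print(recommend_parameters(vb3=0.15))
```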

  10. Local Equilibrium and Retardation Revisited.

    Science.gov (United States)

    Hansen, Scott K; Vesselinov, Velimir V

    2018-01-01

    In modeling solute transport with mobile-immobile mass transfer (MIMT), it is common to use an advection-dispersion equation (ADE) with a retardation factor, or retarded ADE. This is commonly referred to as making the local equilibrium assumption (LEA). Assuming local equilibrium, Eulerian textbook treatments derive the retarded ADE, ostensibly exactly. However, other authors have presented rigorous mathematical derivations of the dispersive effect of MIMT, applicable even in the case of arbitrarily fast mass transfer. We resolve the apparent contradiction between these seemingly exact derivations by adopting a Lagrangian point of view. We show that local equilibrium constrains the expected time immobile, whereas the retarded ADE actually embeds a stronger, nonphysical, constraint: that all particles spend the same amount of every time increment immobile. Eulerian derivations of the retarded ADE thus silently commit the gambler's fallacy, leading them to ignore dispersion due to mass transfer that is correctly modeled by other approaches. We then present a particle tracking simulation illustrating how poor an approximation the retarded ADE may be, even when mobile and immobile plumes are continually near local equilibrium. We note that classic "LEA" (actually, retarded ADE validity) criteria test for insignificance of MIMT-driven dispersion relative to hydrodynamic dispersion, rather than for local equilibrium. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  11. Design and manufacture of Portland cement - application of sensitivity analysis in exploration and optimisation Part II. Optimisation

    DEFF Research Database (Denmark)

    Svinning, K.; Høskuldsson, Agnar

    2006-01-01

    A program for model-based optimisation has been developed. The program contains two subprograms. The first one performs minimisation or maximisation constrained by one original PLS-component or by one equal to a combination of several. The second one searches for the optimal combination of PLS-components, which gives max or min y. The program has proved applicable for achieving realistic results for implementation in the design of Portland cement with respect to performance and in the quality control during production....

  12. Modulation aware cluster size optimisation in wireless sensor networks

    Science.gov (United States)

    Sriram Naik, M.; Kumar, Vinay

    2017-07-01

    Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we have focused on energy minimisation with the help of cluster size optimisation, along with consideration of modulation effects when the nodes are not able to communicate using a baseband communication technique. Cluster size optimisation is an important technique to improve the performance of WSNs. It provides improvement in energy efficiency, network scalability, network lifetime and latency. We have proposed an analytical expression for cluster size optimisation using a traditional sensing model of nodes for a square sensing field with consideration of modulation effects. Energy minimisation can be achieved by changing the modulation schemes, such as BPSK, 16-QAM, QPSK, 64-QAM, etc., so we consider the effect of different modulation techniques on cluster formation. The nodes in the sensing field are randomly and uniformly deployed. It is also observed that placing the base station at the centre of the scenario allows only a small number of modulation schemes to work in an energy-efficient manner, whereas placing the base station at the corner of the sensing field allows a large number of modulation schemes to work in an energy-efficient manner.
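
    A numeric sketch of the trade-off analysed in this record: per-round energy in a square field as a function of the number of clusters under a first-order radio model, evaluated for several modulation-dependent electronics energies. All constants are placeholders rather than the paper's analytical expression.

```python
import numpy as np

# Illustrative first-order radio model: total per-round energy versus the
# number of clusters k. Energies, amplifier coefficients and field size are
# assumed values, not taken from the paper.
N, M, BITS = 200, 100.0, 4000                               # nodes, field side (m), bits/packet
E_ELEC = {"BPSK": 50e-9, "QPSK": 60e-9, "16-QAM": 80e-9}    # J/bit (assumed)
EPS_FS, EPS_MP, D0 = 10e-12, 0.0013e-12, 87.0               # amplifier model constants

def tx_energy(bits, d, e_elec):
    """Transmit energy with free-space or multipath amplifier term."""
    if d < D0:
        return bits * (e_elec + EPS_FS * d ** 2)
    return bits * (e_elec + EPS_MP * d ** 4)

def round_energy(k, e_elec, d_to_bs=75.0):
    """Approximate energy spent by the whole network in one reporting round."""
    d_to_ch = M / np.sqrt(2.0 * np.pi * k)                  # mean member-to-head distance
    e_member = tx_energy(BITS, d_to_ch, e_elec)
    e_head = (N / k - 1) * BITS * e_elec + tx_energy(BITS, d_to_bs, e_elec)
    return k * e_head + (N - k) * e_member

ks = np.arange(1, 31)
for scheme, e_elec in E_ELEC.items():
    energies = [round_energy(k, e_elec) for k in ks]
    print(scheme, "-> optimal number of clusters:", int(ks[int(np.argmin(energies))]))
```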

  13. Automated Sperm Head Detection Using Intersecting Cortical Model Optimised by Particle Swarm Optimization.

    Science.gov (United States)

    Tan, Weng Chun; Mat Isa, Nor Ashidi

    2016-01-01

    In human sperm motility analysis, sperm segmentation plays an important role in determining the location of multiple sperms. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), which was derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffered from parameter selection; thus, the ICM network is optimised using particle swarm optimization where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method resulted in rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperms. The proposed algorithm is expected to be implemented in analysing sperm motility because of the robustness and capability of this algorithm.
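
    A generic particle swarm optimisation loop of the kind used for the tuning step described above; the fitness function below is only a stand-in for the feature mutual information criterion, and the ICM segmentation itself is not implemented.

```python
import numpy as np

# Generic PSO loop used to tune a small parameter vector. In the study the
# fitness would be the feature mutual information between the segmented and
# reference sperm-head regions; here it is a simple placeholder.
def fitness(params):
    target = np.array([0.7, 0.3, 5.0])        # stand-in optimum
    return -np.sum((params - target) ** 2)

def pso(fitness, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()

best_params, best_fit = pso(fitness, lo=np.zeros(3), hi=np.array([1.0, 1.0, 10.0]))
print(best_params, best_fit)
```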

  14. Non-equilibrium dynamics from RPMD and CMD.

    Science.gov (United States)

    Welsch, Ralph; Song, Kai; Shi, Qiang; Althorpe, Stuart C; Miller, Thomas F

    2016-11-28

    We investigate the calculation of approximate non-equilibrium quantum time correlation functions (TCFs) using two popular path-integral-based molecular dynamics methods, ring-polymer molecular dynamics (RPMD) and centroid molecular dynamics (CMD). It is shown that for the cases of a sudden vertical excitation and an initial momentum impulse, both RPMD and CMD yield non-equilibrium TCFs for linear operators that are exact for high temperatures, in the t = 0 limit, and for harmonic potentials; the subset of these conditions that are preserved for non-equilibrium TCFs of non-linear operators is also discussed. Furthermore, it is shown that for these non-equilibrium initial conditions, both methods retain the connection to Matsubara dynamics that has previously been established for equilibrium initial conditions. Comparison of non-equilibrium TCFs from RPMD and CMD to Matsubara dynamics at short times reveals the orders in time to which the methods agree. Specifically, for the position-autocorrelation function associated with sudden vertical excitation, RPMD and CMD agree with Matsubara dynamics up to O(t^4) and O(t^1), respectively; for the position-autocorrelation function associated with an initial momentum impulse, RPMD and CMD agree with Matsubara dynamics up to O(t^5) and O(t^2), respectively. Numerical tests using model potentials for a wide range of non-equilibrium initial conditions show that RPMD and CMD yield non-equilibrium TCFs with an accuracy that is comparable to that for equilibrium TCFs. RPMD is also used to investigate excited-state proton transfer in a system-bath model, and it is compared to numerically exact calculations performed using a recently developed version of the Liouville space hierarchical equation of motion approach; again, similar accuracy is observed for non-equilibrium and equilibrium initial conditions.

  15. Atomistic-level non-equilibrium model for chemically reactive systems based on steepest-entropy-ascent quantum thermodynamics

    International Nuclear Information System (INIS)

    Li, Guanchen; Al-Abbasi, Omar; Von Spakovsky, Michael R

    2014-01-01

    This paper outlines an atomistic-level framework for modeling the non-equilibrium behavior of chemically reactive systems. The framework called steepest-entropy-ascent quantum thermodynamics (SEA-QT) is based on the paradigm of intrinsic quantum thermodynamics (IQT), which is a theory that unifies quantum mechanics and thermodynamics into a single discipline with wide applications to the study of non-equilibrium phenomena at the atomistic level. SEA-QT is a novel approach for describing the state of chemically reactive systems as well as the kinetic and dynamic features of the reaction process without any assumptions of near-equilibrium states or weak interactions with a reservoir or bath. Entropy generation is the basis of the dissipation which takes place internally to the system and is, thus, the driving force of the chemical reaction(s). The SEA-QT non-equilibrium model is able to provide detailed information during the reaction process, providing a picture of the changes occurring in key thermodynamic properties (e.g., the instantaneous species concentrations, entropy and entropy generation, reaction coordinate, chemical affinities, reaction rate, etc.). As an illustration, the SEA-QT framework is applied to an atomistic-level chemically reactive system governed by the reaction mechanism F + H2 ↔ FH + H.

  16. Regret Theory and Equilibrium Asset Prices

    Directory of Open Access Journals (Sweden)

    Jiliang Sheng

    2014-01-01

    Full Text Available Regret theory is a behavioral approach to decision making under uncertainty. In this paper we assume that there are two representative investors in a frictionless market, a representative active investor who selects his optimal portfolio based on regret theory and a representative passive investor who invests only in the benchmark portfolio. In a partial equilibrium setting, the objective of the representative active investor is modeled as minimization of the regret about final wealth relative to the benchmark portfolio. In equilibrium this optimal strategy gives rise to a behavioral asset pricing model. We show that the market beta and the benchmark beta that is related to the investor's regret are the determinants of equilibrium asset prices. We also extend our model to a market with multibenchmark portfolios. Empirical tests using stock price data from the Shanghai Stock Exchange show strong support for the asset pricing model based on regret theory.

  17. Optimisation modelling to assess cost of dietary improvement in remote Aboriginal Australia.

    Science.gov (United States)

    Brimblecombe, Julie; Ferguson, Megan; Liberato, Selma C; O'Dea, Kerin; Riley, Malcolm

    2013-01-01

    The cost and dietary choices required to fulfil nutrient recommendations defined nationally need investigation, particularly for disadvantaged populations. We used optimisation modelling to examine the dietary change required to achieve nutrient requirements at minimum cost for an Aboriginal population in remote Australia, using where possible minimally-processed whole foods. A twelve month cross-section of population-level purchased food, food price and nutrient content data was used as the baseline. Relative amounts from 34 food group categories were varied to achieve specific energy and nutrient density goals at minimum cost while meeting model constraints intended to minimise deviation from the purchased diet. Simultaneous achievement of all nutrient goals was not feasible. The two most successful models (A & B) met all nutrient targets except sodium (146.2% and 148.9% of the respective target) and saturated fat (12.0% and 11.7% of energy). Model A was achieved with 3.2% lower cost than the baseline diet (which cost approximately AUD$13.01/person/day) and Model B at 7.8% lower cost but with a reduction in energy of 4.4%. Both models required very large reductions in sugar sweetened beverages (-90%) and refined cereals (-90%) and an approximate four-fold increase in vegetables, fruit, dairy foods, eggs, fish and seafood, and wholegrain cereals. This modelling approach suggested population level dietary recommendations at minimal cost based on the baseline purchased diet. Large shifts in diet in remote Aboriginal Australian populations are needed to achieve national nutrient targets. The modeling approach used was not able to meet all nutrient targets at less than current food expenditure.
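
    A toy linear-programming version of this kind of diet model, with invented food groups, prices and nutrient contents: minimise cost subject to nutrient lower bounds and bounds that limit deviation from the baseline purchased diet.

```python
import numpy as np
from scipy.optimize import linprog

# Toy diet-optimisation model: minimise cost of food-group amounts subject to
# nutrient lower bounds and limited deviation from a baseline purchased diet.
# All numbers are illustrative, not the study data.
foods = ["vegetables", "wholegrain cereal", "sugary drinks", "meat"]
cost = np.array([0.60, 0.30, 0.25, 1.20])            # $ per 100 g
baseline = np.array([1.0, 2.0, 6.0, 2.0])            # 100 g units purchased per day

# Rows: energy (kJ), protein (g), fibre (g) per 100 g of each food group.
nutrients = np.array([[100.0, 1500.0, 1700.0, 800.0],
                      [  2.0,   12.0,    0.0,  20.0],
                      [  3.0,   10.0,    0.0,   0.0]])
requirements = np.array([8000.0, 60.0, 30.0])        # daily lower bounds

# linprog minimises c @ x subject to A_ub @ x <= b_ub, so nutrient lower
# bounds are written as -nutrients @ x <= -requirements.
res = linprog(c=cost,
              A_ub=-nutrients, b_ub=-requirements,
              bounds=[(0.1 * b, 4.0 * b) for b in baseline],   # deviation limits
              method="highs")

print("feasible:", res.success)
print(dict(zip(foods, np.round(res.x, 2))))
print("cost per day:", round(res.fun, 2))
```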

  18. Optimisation modelling to assess cost of dietary improvement in remote Aboriginal Australia.

    Directory of Open Access Journals (Sweden)

    Julie Brimblecombe

    Full Text Available The cost and dietary choices required to fulfil nutrient recommendations defined nationally need investigation, particularly for disadvantaged populations. We used optimisation modelling to examine the dietary change required to achieve nutrient requirements at minimum cost for an Aboriginal population in remote Australia, using where possible minimally-processed whole foods. A twelve month cross-section of population-level purchased food, food price and nutrient content data was used as the baseline. Relative amounts from 34 food group categories were varied to achieve specific energy and nutrient density goals at minimum cost while meeting model constraints intended to minimise deviation from the purchased diet. Simultaneous achievement of all nutrient goals was not feasible. The two most successful models (A & B) met all nutrient targets except sodium (146.2% and 148.9% of the respective target) and saturated fat (12.0% and 11.7% of energy). Model A was achieved with 3.2% lower cost than the baseline diet (which cost approximately AUD$13.01/person/day) and Model B at 7.8% lower cost but with a reduction in energy of 4.4%. Both models required very large reductions in sugar sweetened beverages (-90%) and refined cereals (-90%) and an approximate four-fold increase in vegetables, fruit, dairy foods, eggs, fish and seafood, and wholegrain cereals. This modelling approach suggested population level dietary recommendations at minimal cost based on the baseline purchased diet. Large shifts in diet in remote Aboriginal Australian populations are needed to achieve national nutrient targets. The modeling approach used was not able to meet all nutrient targets at less than current food expenditure.

  19. Value Chain Optimisation of Biogas Production

    DEFF Research Database (Denmark)

    Jensen, Ida Græsted

    economically feasible. In this PhD thesis, the focus is to create models for investigating the profitability of biogas projects by: 1) including the whole value chain in a mathematical model and considering mass and energy changes on the upstream part of the chain; and 2) including profit allocation in a value......, the costs of the biogas plant have been included in the model using economies of scale. For the second point, a mathematical model considering profit allocation was developed applying three allocation mechanisms. This mathematical model can be applied as a second step after the value chain optimisation. After...... in the energy systems model to find the optimal end use of each type of gas and fuel. The main contributions of this thesis are the methods developed on plant level. Both the mathematical model for the value chain and the profit allocation model can be generalised and used in other industries where mass...

  20. Sudden transition from equilibrium stability to chaotic dynamics in a cautious tâtonnement model

    International Nuclear Information System (INIS)

    Foroni, Ilaria; Avellone, Alessandro; Panchuk, Anastasiia

    2015-01-01

    Tâtonnement processes are usually interpreted as auctions, where a fictitious agent sets the prices until an equilibrium is reached and the trades are made. The main purpose of such processes is to explain how an economy comes to its equilibrium. It is well known that discrete time price adjustment processes may fail to converge and may exhibit periodic or even chaotic behavior. To avoid large price changes, a version of the discrete time tâtonnement process for reaching an equilibrium in a pure exchange economy based on a cautious updating of the prices was proposed two decades ago. This modification leads to a one-dimensional bimodal piecewise smooth map, for which we show analytically that degenerate bifurcations and border collision bifurcations play a fundamental role in the asymptotic behavior of the model.
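
    A small sketch of a cautious discrete-time tâtonnement of the kind discussed here: the price moves in the direction of excess demand, but the relative change per period is capped, which is what generates the piecewise smooth map. The excess-demand function and the cap are illustrative choices, not the paper's exchange economy.

```python
import numpy as np

# Cautious tatonnement sketch: the price adjusts in the direction of excess
# demand, but the relative change per period is capped at +/- delta. The
# excess-demand function is an arbitrary example with equilibrium at p* = 2.
def excess_demand(p):
    return 2.0 / p - 1.0

def cautious_update(p, mu=1.5, delta=0.2):
    raw = p * (1.0 + mu * excess_demand(p))       # unconstrained tatonnement step
    lower, upper = (1.0 - delta) * p, (1.0 + delta) * p
    return min(max(raw, lower), upper)            # cap the relative price change

p = 0.5
trajectory = []
for _ in range(50):
    trajectory.append(p)
    p = cautious_update(p)
print(np.round(trajectory[-5:], 4))               # settles near p* = 2 for this example
```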

  1. Thermodynamic Modeling and Optimization of the Copper Flash Converting Process Using the Equilibrium Constant Method

    Science.gov (United States)

    Li, Ming-zhou; Zhou, Jie-min; Tong, Chang-ren; Zhang, Wen-hai; Chen, Zhuo; Wang, Jin-liang

    2018-05-01

    Based on the principle of multiphase equilibrium, a mathematical model of the copper flash converting process was established by the equilibrium constant method, and a computational system was developed with the use of the MetCal software platform. The mathematical model was validated by comparing simulated outputs, industrial data, and published data. To obtain high-quality blister copper, a low copper content in slag, and an increased impurity removal rate, the model was then applied to investigate the effects of the operational parameters [oxygen/feed ratio (R_OF), flux rate (R_F), and converting temperature (T)] on the product weights, compositions, and the distribution behaviors of impurity elements. The optimized results showed that R_OF, R_F, and T should be controlled at approximately 156 Nm3/t, within 3.0 pct, and at approximately 1523 K (1250 °C), respectively.

  2. Thermodynamic chemical energy transfer mechanisms of non-equilibrium, quasi-equilibrium, and equilibrium chemical reactions

    International Nuclear Information System (INIS)

    Roh, Heui-Seol

    2015-01-01

    Chemical energy transfer mechanisms at finite temperature are explored by a chemical energy transfer theory which is capable of investigating various chemical mechanisms of non-equilibrium, quasi-equilibrium, and equilibrium. Gibbs energy fluxes are obtained as a function of chemical potential, time, and displacement. Diffusion, convection, internal convection, and internal equilibrium chemical energy fluxes are demonstrated. The theory reveals that there are chemical energy flux gaps and broken discrete symmetries at the activation chemical potential, time, and displacement. The statistical, thermodynamic theory is the unification of diffusion and internal convection chemical reactions which reduces to the non-equilibrium generalization beyond the quasi-equilibrium theories of migration and diffusion processes. The relationship between kinetic theories of chemical and electrochemical reactions is also explored. The theory is applied to explore non-equilibrium chemical reactions as an illustration. Three variable separation constants indicate particle number constants and play key roles in describing the distinct chemical reaction mechanisms. The kinetics of chemical energy transfer accounts for the four control mechanisms of chemical reactions such as activation, concentration, transition, and film chemical reactions. - Highlights: • Chemical energy transfer theory is proposed for non-, quasi-, and equilibrium. • Gibbs energy fluxes are expressed by chemical potential, time, and displacement. • Relationship between chemical and electrochemical reactions is discussed. • Theory is applied to explore nonequilibrium energy transfer in chemical reactions. • Kinetics of non-equilibrium chemical reactions shows the four control mechanisms

  3. Computation of Phase Equilibrium and Phase Envelopes

    DEFF Research Database (Denmark)

    Ritschel, Tobias Kasper Skovborg; Jørgensen, John Bagterp

    In this technical report, we describe the computation of phase equilibrium and phase envelopes based on expressions for the fugacity coefficients. We derive those expressions from the residual Gibbs energy. We consider 1) ideal gases and liquids modeled with correlations from the DIPPR database... and 2) nonideal gases and liquids modeled with cubic equations of state. Next, we derive the equilibrium conditions for an isothermal-isobaric (constant temperature, constant pressure) vapor-liquid equilibrium process (PT flash), and we present a method for the computation of phase envelopes. We formulate the involved equations in terms of the fugacity coefficients. We present expressions for the first-order derivatives. Such derivatives are necessary in computationally efficient gradient-based methods for solving the vapor-liquid equilibrium equations and for computing phase envelopes. Finally, we...
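
    The two-phase split at the heart of a PT flash can be sketched with a Rachford-Rice step; in the report the K-values would follow from the fugacity-coefficient expressions (ideal correlations or a cubic equation of state), whereas here they are simply assumed constants.

```python
import numpy as np
from scipy.optimize import brentq

# Rachford-Rice step of a PT flash: given feed composition z and K-values,
# solve for the vapour fraction beta and the phase compositions. The K-values
# below are illustrative constants.
z = np.array([0.5, 0.3, 0.2])
K = np.array([3.0, 1.2, 0.3])

def rachford_rice(beta):
    return np.sum(z * (K - 1.0) / (1.0 + beta * (K - 1.0)))

# beta is bracketed between the asymptotes 1/(1 - Kmax) and 1/(1 - Kmin).
lo = 1.0 / (1.0 - K.max()) + 1e-9
hi = 1.0 / (1.0 - K.min()) - 1e-9
beta = brentq(rachford_rice, lo, hi)

x = z / (1.0 + beta * (K - 1.0))        # liquid composition
y = K * x                               # vapour composition
print(f"vapour fraction = {beta:.4f}")
print("x =", np.round(x, 4), " y =", np.round(y, 4))
```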

  4. Ordering phenomena and non-equilibrium properties of lattice gas models

    International Nuclear Information System (INIS)

    Fiig, T.

    1994-03-01

    This report falls within the general field of ordering processes and non-equilibrium properties of lattice gas models. The theory of diffuse scattering of lattice gas models originating from a random distribution of clusters is considered. We obtain relations between the diffuse part of the structure factor S_dif(q), the correlation function C(r), and the size distribution of clusters D(n). For a number of distributions we calculate S_dif(q) exactly in one dimension, and discuss the possibility for a Lorentzian and a Lorentzian square lineshape to arise. We discuss the two- and three-dimensional oxygen ordering processes in the high-T_c superconductor YBa2Cu3O6+x based on a simple anisotropic lattice gas model. We calculate the structural phase diagram by Monte Carlo simulation and compare the results with experimental data. The structure factor of the oxygen ordering properties has been calculated in both two and three dimensions by Monte Carlo simulation. We report on results obtained from large scale computations on the Connection Machine, which are in excellent agreement with recent neutron diffraction data. In addition we consider the effect of the diffusive motion of metal-ion dopants on the oxygen ordering properties of YBa2Cu3O6+x. The stationary properties of metastability in long-range interaction models are studied by application of a constrained transfer matrix (CTM) formalism. The model considered, which exhibits several metastable states, is an extension of the Blume-Capel model to include weak long-range interactions. We show that the decay rate of the metastable states is closely related to the imaginary part of the equilibrium free-energy density obtained from the CTM formalism. We discuss a class of lattice gas models for dissipative transport in the framework of a Langevin description, which is capable of producing power law spectra for the density fluctuations. We compare with numerical results obtained from simulations of a

  5. Optimisation of the image resolution of a positron emission tomograph

    International Nuclear Information System (INIS)

    Ziemons, K.

    1993-10-01

    The resolution and the corresponding signal-to-noise ratios of reconstructed images were the main focus of this work on the optimisation of PET systems. Monte Carlo modelling calculations were applied to derive possible improvements of the technical design or performance of the PET system. (DG) [de]

  6. Equilibrium modeling of mono and binary sorption of Cu(II) and Zn(II) onto chitosan gel beads

    Directory of Open Access Journals (Sweden)

    Nastaj Józef

    2016-12-01

    Full Text Available The objectives of the work are in-depth experimental studies of Cu(II) and Zn(II) ion removal on chitosan gel beads from both one- and two-component water solutions at the temperature of 303 K. The optimal process conditions such as pH value, dose of sorbent and contact time were determined. Based on the optimal process conditions, equilibrium and kinetic studies were carried out. The maximum sorption capacities equaled 191.25 mg/g and 142.88 mg/g for Cu(II) and Zn(II) ions respectively, when the sorbent dose was 10 g/L and the pH of the solution was 5.0 for both heavy metal ions. One-component sorption equilibrium data were successfully represented by six of the most useful three-parameter equilibrium models: Langmuir-Freundlich, Redlich-Peterson, Sips, Koble-Corrigan, Hill and Toth. Extended forms of the Langmuir-Freundlich, Koble-Corrigan and Sips models were also well fitted to the two-component equilibrium data obtained for different ratios of concentrations of Cu(II) and Zn(II) ions (1:1, 1:2, 2:1). Experimental sorption data were described by two kinetic models of the pseudo-first and pseudo-second order. Furthermore, an attempt to explain the mechanisms of the divalent metal ion sorption process on chitosan gel beads was undertaken.
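
    A sketch of fitting one of the three-parameter isotherms named above (the Langmuir-Freundlich/Sips form) to one-component equilibrium data by non-linear least squares; the data points are invented for illustration, not the measured Cu(II)/Zn(II) values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a three-parameter Langmuir-Freundlich (Sips) isotherm,
#   q = q_max * (K*Ce)**n / (1 + (K*Ce)**n),
# to one-component equilibrium data. The data are illustrative placeholders.
def langmuir_freundlich(Ce, q_max, K, n):
    t = (K * Ce) ** n
    return q_max * t / (1.0 + t)

Ce = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0, 200.0])        # mg/L at equilibrium
q = np.array([35.0, 70.0, 105.0, 140.0, 170.0, 182.0, 188.0])    # mg/g sorbed

popt, pcov = curve_fit(langmuir_freundlich, Ce, q,
                       p0=[190.0, 0.05, 1.0], maxfev=10000)
q_max, K, n = popt
print(f"q_max = {q_max:.1f} mg/g, K = {K:.4f} L/mg, n = {n:.2f}")
```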

  7. Tokamak equilibrium reconstruction code LIUQE and its real time implementation

    International Nuclear Information System (INIS)

    Moret, J.-M.; Duval, B.P.; Le, H.B.; Coda, S.; Felici, F.; Reimerdes, H.

    2015-01-01

    Highlights: • Algorithm vertical stabilisation using a linear parametrisation of the current density. • Experimentally derived model of the vacuum vessel to account for vessel currents. • Real-time contouring algorithm for flux surface averaged 1.5 D transport equations. • Full real time implementation coded in SIMULINK runs in less than 200 μs. • Applications: shape control, safety factor profile control, coupling with RAPTOR. - Abstract: Equilibrium reconstruction consists in identifying, from experimental measurements, a distribution of the plasma current density that satisfies the pressure balance constraint. The LIUQE code adopts a computationally efficient method to solve this problem, based on an iterative solution of the Poisson equation coupled with a linear parametrisation of the plasma current density. This algorithm is unstable against vertical gross motion of the plasma column for elongated shapes and its application to highly shaped plasmas on TCV requires a particular treatment of this instability. TCV's continuous vacuum vessel has a low resistance designed to enhance passive stabilisation of the vertical position. The eddy currents in the vacuum vessel have a sizeable influence on the equilibrium reconstruction and must be taken into account. A real time version of LIUQE has been implemented on TCV's distributed digital control system with a cycle time shorter than 200 μs for a full spatial grid of 28 by 65, using all 133 experimental measurements and including the flux surface average of quantities necessary for the real time solution of 1.5 D transport equations. This performance was achieved through a thoughtful choice of numerical methods and code optimisation techniques at every step of the algorithm, and was coded in MATLAB and SIMULINK for the off-line and real time version respectively

  8. Optimisation of the alcoholic fermentation of aqueous jerivá pulp extract

    Directory of Open Access Journals (Sweden)

    Guilherme Arielo Rodrigues Maia

    2014-05-01

    Full Text Available The objective of this research is to determine the optimum conditions for the alcoholic fermentation process of aqueous jerivá pulp extract using the response surface methodology and the simplex optimisation technique. The incomplete factorial design 3³ was applied with the yeast extract, NH4H2PO4 and yeast as the independent variables and the alcohol production yield as the response. The regression analysis indicated that the model is predictive, and the simplex optimisation generated a formulation containing 0.35 g L-1 yeast extract, 6.33 g L-1 yeast and 0.30 g L-1 NH4H2PO4 for an optimum yield of 85.40% ethanol. To validate the predictive equation, the experiment was carried out in triplicate under optimum conditions, and an average yield of 87.15% was obtained. According to a t-test, no significant difference was observed (at the 5% level) between the average value obtained and the value indicated by the simplex optimisation technique.

  9. Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool

    DEFF Research Database (Denmark)

    Helle, K.B.; Müller, T.O.; Astrup, Poul

    2014-01-01

    Fast delivery of comprehensive information on the radiological situation is essential for decision-making in nuclear emergencies. Most national radiological agencies in Europe employ gamma dose rate sensor networks to monitor radioactive pollution of the atmosphere. Sensor locations were often... of the EU FP 7 project DETECT. It evaluates the gamma dose rates that a proposed set of sensors might measure in an emergency and uses this information to optimise the sensor locations. The gamma dose rates are taken from a comprehensive library of simulations of atmospheric radioactive plumes from 64... source locations. These simulations cover the whole European Union, so the DOT allows evaluation and optimisation of sensor networks for all EU countries, as well as evaluation of fencing sensors around possible sources. Users can choose from seven cost functions to evaluate the capability of a given...

  10. A facilitated diffusion model constrained by the probability isotherm: a pedagogical exercise in intuitive non-equilibrium thermodynamics.

    Science.gov (United States)

    Chapman, Brian

    2017-06-01

    This paper seeks to develop a more thermodynamically sound pedagogy for students of biological transport than is currently available from either of the competing schools of linear non-equilibrium thermodynamics (LNET) or Michaelis-Menten kinetics (MMK). To this end, a minimal model of facilitated diffusion was constructed comprising four reversible steps: cis-substrate binding, cis → trans bound-enzyme shuttling, trans-substrate dissociation and trans → cis free-enzyme shuttling. All model parameters were subject to the second law constraint of the probability isotherm, which determined the unidirectional and net rates for each step and for the overall reaction through the law of mass action. Rapid equilibration scenarios require sensitive 'tuning' of the thermodynamic binding parameters to the equilibrium substrate concentration. All non-equilibrium scenarios show sigmoidal force-flux relations, with only a minority of cases having their quasi-linear portions close to equilibrium. Few cases fulfil the expectations of MMK relating reaction rates to enzyme saturation. This new approach illuminates and extends the concept of rate-limiting steps by focusing on the free energy dissipation associated with each reaction step and thereby deducing its respective relative chemical impedance. The crucial importance of an enzyme's being thermodynamically 'tuned' to its particular task, dependent on the cis- and trans-substrate concentrations with which it deals, is consistent with the occurrence of numerous isoforms for enzymes that transport a given substrate in physiologically different circumstances. This approach to kinetic modelling, being aligned with neither MMK nor LNET, is best described as intuitive non-equilibrium thermodynamics, and is recommended as a useful adjunct to the design and interpretation of experiments in biotransport.
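
    The four-step carrier cycle can be written directly as mass-action ODEs and integrated to steady state, as in the sketch below; the rate constants are arbitrary illustrative values (chosen so that the cycle carries no net flux when the cis and trans substrate concentrations are equal), and the paper's probability-isotherm formalism is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mass-action sketch of the four-step facilitated-diffusion cycle: cis binding,
# cis->trans shuttling of the bound carrier, trans dissociation, and trans->cis
# shuttling of the free carrier. Substrate levels are clamped; all rate
# constants are illustrative.
S_cis, S_trans = 1.0, 0.1                     # clamped substrate concentrations (mM)
k1, km1 = 10.0, 1.0                           # cis binding / unbinding
k2, km2 = 5.0, 5.0                            # bound-carrier shuttling
k3, km3 = 1.0, 10.0                           # trans dissociation / rebinding
k4, km4 = 5.0, 5.0                            # free-carrier shuttling

def rhs(t, y):
    E_c, ES_c, ES_t, E_t = y
    v1 = k1 * E_c * S_cis - km1 * ES_c
    v2 = k2 * ES_c - km2 * ES_t
    v3 = k3 * ES_t - km3 * E_t * S_trans
    v4 = k4 * E_t - km4 * E_c
    return [-v1 + v4, v1 - v2, v2 - v3, v3 - v4]

y0 = [1.0, 0.0, 0.0, 0.0]                     # total carrier = 1 (arbitrary units)
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-8)
E_c, ES_c, ES_t, E_t = sol.y[:, -1]
flux = k3 * ES_t - km3 * E_t * S_trans        # steady-state cis -> trans flux
print("steady-state flux:", round(float(flux), 4))
```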

  11. Modelling of Equilibrium Between Mantle and Core: Refractory, Volatile, and Highly Siderophile Elements

    Science.gov (United States)

    Righter, K.; Danielson, L.; Pando, K.; Shofner, G.; Lee, C. -T.

    2013-01-01

    Siderophile elements have been used to constrain conditions of core formation and differentiation for the Earth, Mars and other differentiated bodies [1]. Recent models for the Earth have concluded that the mantle and core did not fully equilibrate and the siderophile element contents of the mantle can only be explained under conditions where the oxygen fugacity changes from low to high during accretion and the mantle and core do not fully equilibrate [2,3]. However, these conclusions go against several physical and chemical constraints. First, calculations suggest that even with the composition of accreting material changing from reduced to oxidized over time, the fO2 defined by metal-silicate equilibrium does not change substantially, only by approximately 1 logfO2 unit [4]. An increase of more than 2 logfO2 units in mantle oxidation is required in the models of [2,3]. Secondly, calculations also show that metallic impacting material will become deformed and sheared during accretion to a large body, such that it becomes emulsified to a fine scale that allows equilibrium at nearly all conditions except for possibly the length scale for giant impacts [5] (contrary to conclusions of [6]). Using new data for D(Mo) metal/silicate at high pressures, together with updated partitioning expressions for many other elements, we will show that metal-silicate equilibrium across a long span of Earth's accretion history may explain the concentrations of many siderophile elements in Earth's mantle. The modeling includes refractory elements Ni, Co, Mo, and W, as well as highly siderophile elements Au, Pd and Pt, and volatile elements Cd, In, Bi, Sb, Ge and As.

  12. Non-Equilibrium Properties from Equilibrium Free Energy Calculations

    Science.gov (United States)

    Pohorille, Andrew; Wilson, Michael A.

    2012-01-01

    Calculating free energy in computer simulations is of central importance in statistical mechanics of condensed media and its applications to chemistry and biology, not only because it is the most comprehensive and informative quantity that characterizes the equilibrium state, but also because it often provides an efficient route to access dynamic and kinetic properties of a system. Most applications of equilibrium free energy calculations to non-equilibrium processes rely on a description in which a molecule or an ion diffuses in the potential of mean force. In the general case this description is a simplification, but it might be satisfactorily accurate in many instances of practical interest. This hypothesis has been tested in the example of the electrodiffusion equation. Conductance of model ion channels has been calculated directly by counting the number of ion crossing events observed during long molecular dynamics simulations and has been compared with the conductance obtained from solving the generalized Nernst-Planck equation. It has been shown that under relatively modest conditions the agreement between these two approaches is excellent, thus demonstrating that the assumptions underlying the diffusion equation are fulfilled. Under these conditions the electrodiffusion equation provides an efficient approach to calculating the full voltage-current dependence routinely measured in electrophysiological experiments.

  13. THE ABUNDANCE OF MOLECULAR HYDROGEN AND ITS CORRELATION WITH MIDPLANE PRESSURE IN GALAXIES: NON-EQUILIBRIUM, TURBULENT, CHEMICAL MODELS

    International Nuclear Information System (INIS)

    Mac Low, Mordecai-Mark; Glover, Simon C. O.

    2012-01-01

    Observations of spiral galaxies show a strong linear correlation between the ratio of molecular to atomic hydrogen surface density R_mol and midplane pressure. To explain this, we simulate three-dimensional, magnetized turbulence, including simplified treatments of non-equilibrium chemistry and the propagation of dissociating radiation, to follow the formation of H2 from cold atomic gas. The formation timescale for H2 is sufficiently long that equilibrium is not reached within the 20-30 Myr lifetimes of molecular clouds. The equilibrium balance between radiative dissociation and H2 formation on dust grains fails to predict the time-dependent molecular fractions we find. A simple, time-dependent model of H2 formation can reproduce the gross behavior, although turbulent density perturbations increase molecular fractions by a factor of a few above it. In contradiction to equilibrium models, radiative dissociation of molecules plays little role in our model for diffuse radiation fields with strengths less than 10 times that of the solar neighborhood, because of the effective self-shielding of H2. The observed correlation of R_mol with pressure corresponds to a correlation with local gas density if the effective temperature in the cold neutral medium of galactic disks is roughly constant. We indeed find such a correlation of R_mol with density. If we examine the value of R_mol in our local models after a free-fall time at their average density, as expected for models of molecular cloud formation by large-scale gravitational instability, our models reproduce the observed correlation over more than an order-of-magnitude range in density.
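
    A rough sketch of the kind of simple time-dependent H2-formation balance referred to above, with grain-surface formation and an effective (shielded) dissociation rate; the rate coefficient, density and shielding treatment are textbook-style assumptions, not the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simple balance for the molecular fraction f = 2 n_H2 / n_H:
#   df/dt = 2 R n (1 - f) - zeta_diss * f,
# with grain-surface formation (rate coefficient R) and an effective
# photodissociation rate already reduced by self-shielding. All numbers are
# assumed, order-of-magnitude values.
R = 3.0e-17            # cm^3 s^-1, H2 formation rate coefficient on dust
n = 100.0              # cm^-3, total hydrogen nuclei density
zeta_diss = 1.0e-15    # s^-1, heavily shielded dissociation rate (assumed)
Myr = 3.156e13         # seconds per megayear

def dfdt(t, f):
    return [2.0 * R * n * (1.0 - f[0]) - zeta_diss * f[0]]

sol = solve_ivp(dfdt, (0.0, 30.0 * Myr), [0.0], t_eval=np.linspace(0, 30 * Myr, 7))
for t, f in zip(sol.t / Myr, sol.y[0]):
    print(f"t = {t:4.0f} Myr   f_H2 = {f:.3f}")
```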

  14. The Abundance of Molecular Hydrogen and Its Correlation with Midplane Pressure in Galaxies: Non-equilibrium, Turbulent, Chemical Models

    Science.gov (United States)

    Mac Low, Mordecai-Mark; Glover, Simon C. O.

    2012-02-01

    Observations of spiral galaxies show a strong linear correlation between the ratio of molecular to atomic hydrogen surface density R_mol and midplane pressure. To explain this, we simulate three-dimensional, magnetized turbulence, including simplified treatments of non-equilibrium chemistry and the propagation of dissociating radiation, to follow the formation of H2 from cold atomic gas. The formation timescale for H2 is sufficiently long that equilibrium is not reached within the 20-30 Myr lifetimes of molecular clouds. The equilibrium balance between radiative dissociation and H2 formation on dust grains fails to predict the time-dependent molecular fractions we find. A simple, time-dependent model of H2 formation can reproduce the gross behavior, although turbulent density perturbations increase molecular fractions by a factor of a few above it. In contradiction to equilibrium models, radiative dissociation of molecules plays little role in our model for diffuse radiation fields with strengths less than 10 times that of the solar neighborhood, because of the effective self-shielding of H2. The observed correlation of R_mol with pressure corresponds to a correlation with local gas density if the effective temperature in the cold neutral medium of galactic disks is roughly constant. We indeed find such a correlation of R_mol with density. If we examine the value of R_mol in our local models after a free-fall time at their average density, as expected for models of molecular cloud formation by large-scale gravitational instability, our models reproduce the observed correlation over more than an order-of-magnitude range in density.

  15. Social security as Markov equilibrium in OLG models: A note

    DEFF Research Database (Denmark)

    Gonzalez Eiras, Martin

    2011-01-01

    I refine and extend the Markov perfect equilibrium of the social security policy game in Forni (2005) for the special case of logarithmic utility. Under the restriction that the policy function be continuous, instead of differentiable, the equilibrium is globally well defined and its dynamics...

  16. A tightly coupled non-equilibrium model for inductively coupled radio-frequency plasmas

    International Nuclear Information System (INIS)

    Munafò, A.; Alfuhaid, S. A.; Panesi, M.; Cambier, J.-L.

    2015-01-01

    The objective of the present work is the development of a tightly coupled magneto-hydrodynamic model for inductively coupled radio-frequency plasmas. Non Local Thermodynamic Equilibrium (NLTE) effects are described based on a hybrid State-to-State approach. A multi-temperature formulation is used to account for thermal non-equilibrium between translation of heavy particles and vibration of molecules. Excited electronic states of atoms are instead treated as separate pseudo-species, allowing for non-Boltzmann distributions of their populations. Free electrons are assumed Maxwellian at their own temperature. The governing equations for the electro-magnetic field and the gas properties (e.g., chemical composition and temperatures) are written as a coupled system of time-dependent conservation laws. Steady-state solutions are obtained by means of an implicit Finite Volume method. The results obtained in both LTE and NLTE conditions over a broad spectrum of operating conditions demonstrate the robustness of the proposed coupled numerical method. The analysis of chemical composition and temperature distributions along the torch radius shows that: (i) the use of the LTE assumption may lead to an inaccurate prediction of the thermo-chemical state of the gas, and (ii) non-equilibrium phenomena play a significant role close to the walls, due to the combined effects of Ohmic heating and macroscopic gradients.

  17. Spatial-structural interaction and strain energy structural optimisation

    NARCIS (Netherlands)

    Hofmeyer, H.; Davila Delgado, J.M.; Borrmann, A.; Geyer, P.; Rafiq, Y.; Wilde, de P.

    2012-01-01

    A research engine iteratively transforms spatial designs into structural designs and vice versa. Furthermore, spatial and structural designs are optimised. It is suggested to optimise a structural design by evaluating the strain energy of its elements and by then removing, adding, or changing the

  18. Integrated environmental assessment of future energy scenarios based on economic equilibrium models

    International Nuclear Information System (INIS)

    Igos, E.; Rugani, B.; Rege, S.; Benetto, E.; Drouet, L.; Zachary, D.; Haas, T.

    2014-01-01

    The future evolution of energy supply technologies strongly depends on (and affects) the economic and environmental systems, due to the high dependency of this sector on the availability and cost of fossil fuels, especially on the small regional scale. This paper aims at presenting the modeling system and preliminary results of a research project conducted on the scale of Luxembourg to assess the environmental impact of future energy scenarios for the country, integrating outputs from partial and computable general equilibrium models within hybrid Life Cycle Assessment (LCA) frameworks. The general equilibrium model for Luxembourg, LUXGEM, is used to evaluate the economic impacts of policy decisions and other economic shocks over the time horizon 2006-2030. A techno-economic (partial equilibrium) model for Luxembourg, ETEM, is used instead to compute operation levels of various technologies to meet the demand for energy services at the least cost along the same timeline. The future energy demand and supply are made consistent by coupling ETEM with LUXGEM so as to have the same macro-economic variables and energy shares driving both models. The coupling results are then implemented within a set of Environmentally-Extended Input-Output (EE-IO) models in historical time series to test the feasibility of the integrated framework and then to assess the environmental impacts of the country. Accordingly, a dis-aggregated energy sector was built with the different ETEM technologies in the EE-IO to allow hybridization with Life Cycle Inventory (LCI) and enrich the process detail. The results show that the environmental impact slightly decreased overall from 2006 to 2009. Most of the impacts come from some imported commodities (natural gas, used to produce electricity, and metalliferous ores and metal scrap). The main energy production technology is the combined-cycle gas turbine plant 'Twinerg', representing almost 80% of the domestic electricity production in Luxembourg

  19. Energy Savings from Optimised In-Field Route Planning for Agricultural Machinery

    Directory of Open Access Journals (Sweden)

    Efthymios Rodias

    2017-10-01

    Full Text Available Various types of sensor technologies, such as machine vision and the global positioning system (GPS), have been implemented in the navigation of agricultural vehicles. Automated navigation systems have demonstrated the potential for executing optimised route plans for field area coverage. This paper presents an assessment of the reduction in energy requirements derived from the implementation of optimised field area coverage planning. The assessment concerns the analysis of the energy requirements and the comparison between the non-optimised and optimised plans for field area coverage in the whole sequence of operations required in two different cropping systems: Miscanthus and Switchgrass production. An algorithmic approach for the simulation of the executed field operations by following both non-optimised and optimised field-work patterns was developed. As a result, the corresponding time requirements were estimated as the basis of the subsequent energy cost analysis. Based on the results, the optimised routes reduce the fuel energy consumption by up to 8%, the embodied energy consumption by up to 7%, and the total energy consumption by 3% to 8%.

  20. Kinetics and equilibrium modelling of lead uptake by algae Gelidium and algal waste from agar extraction industry.

    Science.gov (United States)

    Vilar, Vítor J P; Botelho, Cidália M S; Boaventura, Rui A R

    2007-05-08

    Pb(II) biosorption onto algae Gelidium, algal waste from agar extraction industry and a composite material was studied. Discrete and continuous site distribution models were used to describe the biosorption equilibrium at different pH (5.3, 4 and 3), considering competition among Pb(II) ions and protons. The affinity distribution function of Pb(II) on the active sites was calculated by the Sips distribution. The Langmuir equilibrium constant was compared with the apparent affinity calculated by the discrete model, showing higher affinity for lead ions at higher pH values. Kinetic experiments were conducted at initial Pb(II) concentrations of 29-104 mg l^-1 and data fitted to pseudo-first Lagergren and second-order models. The adsorptive behaviour of biosorbent particles was modelled using a batch mass transfer kinetic model, which successfully predicts Pb(II) concentration profiles at different initial lead concentration and pH, and provides significant insights on the biosorbents performance. Average values of homogeneous diffusivity, D_h, are 3.6 x 10^-8, 6.1 x 10^-8 and 2.4 x 10^-8 cm^2 s^-1, respectively, for Gelidium, algal waste and composite material. The concentration of lead inside biosorbent particles follows a parabolic profile that becomes linear near equilibrium.
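
    A sketch of fitting the two kinetic models named above (pseudo-first-order Lagergren and pseudo-second-order) to uptake-versus-time data with non-linear least squares; the data points are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the two kinetic models to uptake-vs-time data:
#   pseudo-first-order (Lagergren): q(t) = qe * (1 - exp(-k1 t))
#   pseudo-second-order:            q(t) = qe^2 k2 t / (1 + qe k2 t)
# The data are illustrative placeholders, not the paper's measurements.
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 120.0, 240.0])      # min
q = np.array([12.0, 20.0, 30.0, 38.0, 42.0, 46.0, 48.0])       # mg/g

def pfo(t, qe, k1):
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    return qe ** 2 * k2 * t / (1.0 + qe * k2 * t)

p1, _ = curve_fit(pfo, t, q, p0=[50.0, 0.05])
p2, _ = curve_fit(pso, t, q, p0=[50.0, 0.001])
print(f"PFO: qe = {p1[0]:.1f} mg/g, k1 = {p1[1]:.3f} 1/min")
print(f"PSO: qe = {p2[0]:.1f} mg/g, k2 = {p2[1]:.5f} g/(mg*min)")
```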

  1. Kinetics and equilibrium modelling of lead uptake by algae Gelidium and algal waste from agar extraction industry

    International Nuclear Information System (INIS)

    Vilar, Vitor J.P.; Botelho, Cidalia M.S.; Boaventura, Rui A.R.

    2007-01-01

    Pb(II) biosorption onto algae Gelidium, algal waste from agar extraction industry and a composite material was studied. Discrete and continuous site distribution models were used to describe the biosorption equilibrium at different pH (5.3, 4 and 3), considering competition among Pb(II) ions and protons. The affinity distribution function of Pb(II) on the active sites was calculated by the Sips distribution. The Langmuir equilibrium constant was compared with the apparent affinity calculated by the discrete model, showing higher affinity for lead ions at higher pH values. Kinetic experiments were conducted at initial Pb(II) concentrations of 29-104 mg l^-1 and data fitted to pseudo-first Lagergren and second-order models. The adsorptive behaviour of biosorbent particles was modelled using a batch mass transfer kinetic model, which successfully predicts Pb(II) concentration profiles at different initial lead concentration and pH, and provides significant insights on the biosorbents performance. Average values of homogeneous diffusivity, D_h, are 3.6 x 10^-8, 6.1 x 10^-8 and 2.4 x 10^-8 cm^2 s^-1, respectively, for Gelidium, algal waste and composite material. The concentration of lead inside biosorbent particles follows a parabolic profile that becomes linear near equilibrium.

  2. Kinetics and equilibrium modelling of lead uptake by algae Gelidium and algal waste from agar extraction industry

    Energy Technology Data Exchange (ETDEWEB)

    Vilar, Vitor J.P. [Laboratory of Separation and Reaction Engineering (LSRE), Departamento de Engenharia Quimica, Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto (Portugal); Botelho, Cidalia M.S. [Laboratory of Separation and Reaction Engineering (LSRE), Departamento de Engenharia Quimica, Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto (Portugal); Boaventura, Rui A.R. [Laboratory of Separation and Reaction Engineering (LSRE), Departamento de Engenharia Quimica, Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto (Portugal)]. E-mail: bventura@fe.up.pt

    2007-05-08

    Pb(II) biosorption onto algae Gelidium, algal waste from agar extraction industry and a composite material was studied. Discrete and continuous site distribution models were used to describe the biosorption equilibrium at different pH (5.3, 4 and 3), considering competition among Pb(II) ions and protons. The affinity distribution function of Pb(II) on the active sites was calculated by the Sips distribution. The Langmuir equilibrium constant was compared with the apparent affinity calculated by the discrete model, showing higher affinity for lead ions at higher pH values. Kinetic experiments were conducted at initial Pb(II) concentrations of 29-104 mg l^-1 and data fitted to pseudo-first Lagergren and second-order models. The adsorptive behaviour of biosorbent particles was modelled using a batch mass transfer kinetic model, which successfully predicts Pb(II) concentration profiles at different initial lead concentration and pH, and provides significant insights on the biosorbents performance. Average values of homogeneous diffusivity, D_h, are 3.6 x 10^-8, 6.1 x 10^-8 and 2.4 x 10^-8 cm^2 s^-1, respectively, for Gelidium, algal waste and composite material. The concentration of lead inside biosorbent particles follows a parabolic profile that becomes linear near equilibrium.

  3. Optimisation of radiation protection

    International Nuclear Information System (INIS)

    1988-01-01

    Optimisation of radiation protection is one of the key elements in the current radiation protection philosophy. The present system of dose limitation was issued in 1977 by the International Commission on Radiological Protection (ICRP) and includes, in addition to the requirements of justification of practices and limitation of individual doses, the requirement that all exposures be kept as low as is reasonably achievable, taking social and economic factors into account. This last principle is usually referred to as optimisation of radiation protection, or the ALARA principle. The NEA Committee on Radiation Protection and Public Health (CRPPH) organised an ad hoc meeting, in liaison with the NEA committees on the safety of nuclear installations and radioactive waste management. Separate abstracts were prepared for individual papers presented at the meeting

  4. Statistical optimisation techniques in fatigue signal editing problem

    International Nuclear Information System (INIS)

    Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.

    2015-01-01

    Success in fatigue signal editing is determined by the level of length reduction without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a single constrained optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.
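
    A small genetic-algorithm sketch of the segment-selection idea: choose a subset of labelled segments that minimises retained length, with penalties when the retained cumulative damage and RMS deviate from the full record by more than an allowed level (kurtosis is omitted for brevity). All segment statistics are synthetic placeholders, not RDE output.

```python
import numpy as np

# GA sketch for fatigue-signal segment selection with penalty-based constraints.
rng = np.random.default_rng(0)
n_seg = 40
length = rng.integers(100, 1000, n_seg)          # samples per segment
damage = rng.pareto(2.0, n_seg) + 0.01            # damage contribution (heavy-tailed)
energy = rng.random(n_seg) * length               # proxy for sum of squared amplitudes

total_damage, total_energy, total_len = damage.sum(), energy.sum(), length.sum()
full_rms = np.sqrt(total_energy / total_len)

def cost(mask, tol=0.05):
    """Retained length plus large penalties for violating the deviation limits."""
    kept_len = (length * mask).sum()
    dmg_dev = abs((damage * mask).sum() - total_damage) / total_damage
    rms_dev = abs(np.sqrt((energy * mask).sum() / max(kept_len, 1)) - full_rms) / full_rms
    penalty = 1e6 * (max(0.0, dmg_dev - tol) + max(0.0, rms_dev - tol))
    return kept_len + penalty

pop = rng.integers(0, 2, (60, n_seg))
for _ in range(200):
    scores = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:30]]        # truncation selection
    idx_a, idx_b = rng.integers(0, 30, 30), rng.integers(0, 30, 30)
    children = parents[idx_a].copy()
    for i in range(30):                           # one-point crossover + bit-flip mutation
        cut = rng.integers(1, n_seg)
        children[i, cut:] = parents[idx_b[i], cut:]
        flip = rng.random(n_seg) < 0.02
        children[i, flip] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmin([cost(ind) for ind in pop])]
print("kept fraction of signal length:", round((length * best).sum() / total_len, 3))
```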

  5. Module detection in complex networks using integer optimisation

    Directory of Open Access Journals (Sweden)

    Tsoka Sophia

    2010-11-01

    Full Text Available Abstract Background The detection of modules or community structure is widely used to reveal the underlying properties of complex networks in biology, as well as physical and social sciences. Since the adoption of modularity as a measure of network topological properties, several methodologies for the discovery of community structure based on modularity maximisation have been developed. However, satisfactory partitions of large graphs with modest computational resources are particularly challenging due to the NP-hard nature of the related optimisation problem. Furthermore, it has been suggested that optimising the modularity metric can reach a resolution limit whereby the algorithm fails to detect smaller communities than a specific size in large networks. Results We present a novel solution approach to identify community structure in large complex networks and address resolution limitations in module detection. The proposed algorithm employs modularity to express network community structure and it is based on mixed integer optimisation models. The solution procedure is extended through an iterative procedure to diminish effects that tend to agglomerate smaller modules (resolution limitations. Conclusions A comprehensive comparative analysis of methodologies for module detection based on modularity maximisation shows that our approach outperforms previously reported methods. Furthermore, in contrast to previous reports, we propose a strategy to handle resolution limitations in modularity maximisation. Overall, we illustrate ways to improve existing methodologies for community structure identification so as to increase its efficiency and applicability.
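
    The mixed-integer formulation itself requires a MIP solver, so the sketch below only illustrates the modularity objective being maximised, together with a greedy reference partition; the example graph and the use of networkx are assumptions for illustration, not the authors' code.

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities, modularity

    # Small illustrative graph; the paper solves a mixed-integer programme, so only
    # the modularity metric and a greedy baseline partition are shown here.
    G = nx.karate_club_graph()
    communities = greedy_modularity_communities(G)
    print("communities found:", len(communities))
    print("modularity Q = %.3f" % modularity(G, communities))
    ```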

  6. Statistical optimisation techniques in fatigue signal editing problem

    Energy Technology Data Exchange (ETDEWEB)

    Nopiah, Z. M.; Osman, M. H. [Fundamental Engineering Studies Unit Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 UKM (Malaysia); Baharin, N.; Abdullah, S. [Department of Mechanical and Materials Engineering Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 UKM (Malaysia)

    2015-02-03

    Success in fatigue signal editing is determined by the level of length reduction without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses joint application of the Running Damage Extraction (RDE) technique and single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation.. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelling segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.

  7. Analysis and optimisation of heterogeneous real-time embedded systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2005-01-01

    . The success of such new design methods depends on the availability of analysis and optimisation techniques. Analysis and optimisation techniques for heterogeneous real-time embedded systems are presented in the paper. The authors address in more detail a particular class of such systems called multi...... of application messages to frames. Optimisation heuristics for frame packing aimed at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach....

  8. Analysis and optimisation of heterogeneous real-time embedded systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    . The success of such new design methods depends on the availability of analysis and optimisation techniques. Analysis and optimisation techniques for heterogeneous real-time embedded systems are presented in the paper. The authors address in more detail a particular class of such systems called multi...... of application messages to frames. Optimisation heuristics for frame packing aimed at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach....

  9. Optimising Signalised Intersection Using Wireless Vehicle Detectors

    DEFF Research Database (Denmark)

    Adjin, Daniel Michael Okwabi; Torkudzor, Moses; Asare, Jack

    Traffic congestion on roads wastes travel time. In this paper, we developed a vehicular traffic model to optimise a signalised intersection in Accra, using wireless vehicle detectors. The traffic volumes gathered were extrapolated to cover 2011 and 2016 and were analysed to obtain the peak hour traffic...... volume causing congestion. The intersection was modelled and simulated in Synchro7 as an actuated signalised model using results from the analysed data. The model for morning peak periods gave optimal cycle lengths of 100s and 150s with corresponding intersection delay of 48.9s and 90.6s in 2011 and 2016...... respectively, while that for the evening was 55s giving delay of 14.2s and 16.3s respectively. It is shown that the model will improve traffic flow at the intersection....
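
    Synchro implements its own delay and cycle-length models; as a rough, hedged illustration of how an optimal cycle length and approach delay are commonly estimated, the sketch below uses the classical Webster relations with hypothetical lost time and flow ratios.

    ```python
    def webster_optimal_cycle(lost_time_s, critical_flow_ratios):
        """Webster's optimum cycle length: C0 = (1.5*L + 5) / (1 - Y)."""
        Y = sum(critical_flow_ratios)
        return (1.5 * lost_time_s + 5.0) / (1.0 - Y)

    def uniform_delay(cycle_s, green_s, degree_of_saturation):
        """First (uniform) term of the classical signalised-approach delay, in s/veh."""
        g_over_c = green_s / cycle_s
        x = min(degree_of_saturation, 1.0)
        return 0.5 * cycle_s * (1.0 - g_over_c) ** 2 / (1.0 - x * g_over_c)

    # Hypothetical four-phase intersection; flow ratios and lost time are illustrative
    c0 = webster_optimal_cycle(lost_time_s=16.0, critical_flow_ratios=[0.22, 0.18, 0.15, 0.20])
    print("optimal cycle length ~ %.0f s" % c0)
    print("approach delay       ~ %.1f s/veh"
          % uniform_delay(c0, green_s=30.0, degree_of_saturation=0.85))
    ```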

  10. 30th International School of Mathematics "G Stampacchia" : Equilibrium Problems and Variational Models "Ettore Majorana"

    CERN Document Server

    Giannessi, Franco; Maugeri, Antonino; Equilibrium Problems and Variational Models

    2000-01-01

    The volume, devoted to variational analysis and its applications, collects selected and refereed contributions, which provide an outline of the field. The meeting of the title "Equilibrium Problems and Variational Models", which was held in Erice (Sicily) in the period June 23 - July 2 2000, was the occasion of the presentation of some of these papers; other results are a consequence of a fruitful and constructive atmosphere created during the meeting. New results, which enlarge the field of application of variational analysis, are presented in the book; they deal with the vectorial analysis, time dependent variational analysis, exact penalization, high order derivatives, geometric aspects, distance functions and log-quadratic proximal methodology. The new theoretical results allow one to improve in a remarkable way the study of significant problems arising from the applied sciences, as continuum model of transportation, unilateral problems, multicriteria spatial price models, network equilibrium...

  11. Isogeometric Analysis and Shape Optimisation

    DEFF Research Database (Denmark)

    Gravesen, Jens; Evgrafov, Anton; Gersborg, Allan Roulund

    of the whole domain. So in every optimisation cycle we need to extend a parametrisation of the boundary of a domain to the whole domain. It has to be fast in order not to slow the optimisation down but it also has to be robust and give a parametrisation of high quality. These are conflicting requirements so we...... will explain how the validity of a parametrisation can be checked and we will describe various ways to parametrise a domain. We will in particular study the Winslow functional which turns out to have some desirable properties. Other problems we touch upon is clustering of boundary control points (design...

  12. Damage-spreading and out-of-equilibrium dynamics in the low-temperature regime of the two-dimensional ± J Edwards–Anderson model

    International Nuclear Information System (INIS)

    Rubio Puzzo, M L; Romá, F; Bustingorry, S; Gleiser, P M

    2010-01-01

    We present results showing the correlation between the out-of-equilibrium dynamics and the equilibrium damage-spreading process in the two-dimensional ± J Edwards–Anderson model at low temperatures. A key ingredient in our analysis is the projection of finite temperature spin configurations onto the ground state topology of the system. In particular, through numerical simulations we correlate ground state information with the out-of-equilibrium dynamics. We also analyse how the propagation of a small perturbation in equilibrated systems is related to the ground state topology. This damage-spreading study unveils the presence of rigid clusters of spins. We claim that these clusters give rise to the slow out-of-equilibrium dynamics observed in the temperature range between the glass temperature T g = 0 of the two-dimensional ± J Edwards–Anderson model and the critical temperature T c of the pure ferromagnetic Ising model

  13. Optimisation-based worst-case analysis and anti-windup synthesis for uncertain nonlinear systems

    Science.gov (United States)

    Menon, Prathyush Purushothama

    This thesis describes the development and application of optimisation-based methods for worst-case analysis and anti-windup synthesis for uncertain nonlinear systems. The worst-case analysis methods developed in the thesis are applied to the problem of nonlinear flight control law clearance for highly augmented aircraft. Local, global and hybrid optimisation algorithms are employed to evaluate worst-case violations of a nonlinear response clearance criterion, for a highly realistic aircraft simulation model and flight control law. The reliability and computational overheads associated with different optimisation algorithms are compared, and the capability of optimisation-based approaches to clear flight control laws over continuous regions of the flight envelope is demonstrated. An optimisation-based method for computing worst-case pilot inputs is also developed, and compared with current industrial approaches for this problem. The importance of explicitly considering uncertainty in aircraft parameters when computing worst-case pilot demands is clearly demonstrated. Preliminary results on extending the proposed framework to the problems of limit-cycle analysis and robustness analysis in the presence of time-varying uncertainties are also included. A new method for the design of anti-windup compensators for nonlinear constrained systems controlled using nonlinear dynamics inversion control schemes is presented and successfully applied to some simple examples. An algorithm based on the use of global optimisation is proposed to design the anti-windup compensator. Some conclusions are drawn from the results of the research presented in the thesis, and directions for future work are identified.

  14. A Synthesis of Equilibrium and Historical Models of Landform Development.

    Science.gov (United States)

    Renwick, William H.

    1985-01-01

    The synthesis of two approaches that can be used in teaching geomorphology is described. The equilibrium approach explains landforms and landform change in terms of equilibrium between landforms and controlling processes. The historical approach draws on climatic geomorphology to describe the effects of Quaternary climatic and tectonic events on…

  15. Thermodynamic parameters for mixtures of quartz under shock wave loading in views of the equilibrium model

    International Nuclear Information System (INIS)

    Maevskii, K. K.; Kinelovskii, S. A.

    2015-01-01

    The numerical results of modeling shock wave loading of mixtures containing the SiO{sub 2} component are presented. The TEC (thermodynamic equilibrium component) model is employed to describe the behavior of solid and porous multicomponent mixtures and alloys under shock wave loading. State equations of the Mie–Grüneisen type are used to describe the behavior of the condensed phases, taking into account the temperature dependence of the Grüneisen coefficient; the gas in the pores is treated as one of the components of the mixture. The model is based on the assumption that all components of the mixture under shock-wave loading are in thermodynamic equilibrium. The calculation results are compared with experimental data obtained by various authors. The behavior of a mixture containing components with a phase transition under high dynamic loads is described

  16. Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation

    Science.gov (United States)

    Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari

    2016-07-01

    In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance. Due to the variations of maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for the given number of PV modules, the optimal size and operating condition of a PV/EL system are achieved. The approach can be applied for different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and the PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
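
    A minimal particle swarm optimisation loop of the kind referred to above; the two-variable objective is only a stand-in for the PV/electrolyser coupling model, and all parameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def objective(x):
        # Stand-in for the PV/electrolyser energy-transfer loss; the real model couples
        # the PV I-V curve with the electrolyser polarisation curve.
        return (x[..., 0] - 3.0) ** 2 + 10.0 * (x[..., 1] - 0.7) ** 2

    n, dims, iters = 30, 2, 100
    pos = rng.uniform([0.0, 0.0], [10.0, 2.0], (n, dims))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_val)]

    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random((n, dims)), rng.random((n, dims))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = objective(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print("optimum near", gbest, "objective", objective(gbest))
    ```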

  17. Use of artificial intelligence techniques for optimisation of co-combustion of coal with biomass

    Energy Technology Data Exchange (ETDEWEB)

    Tan, C.K.; Wilcox, S.J.; Ward, J. [University of Glamorgan, Pontypridd (United Kingdom). Division of Mechanical Engineering

    2006-03-15

    The optimisation of burner operation in conventional pulverised-coal-fired boilers for co-combustion applications represents a significant challenge. This paper describes a strategic framework in which Artificial Intelligence (AI) techniques can be applied to solve such an optimisation problem. The effectiveness of the proposed system is demonstrated by a case study that simulates the co-combustion of coal with sewage sludge in a 500-kW pilot-scale combustion rig equipped with a swirl stabilised low-NOx burner. A series of Computational Fluid Dynamics (CFD) simulations were performed to generate data for different operating conditions, which were then used to train several Artificial Neural Networks (ANNs) to predict the co-combustion performance. Once trained, the ANNs were able to make estimations of unseen situations in a fraction of the time taken by the CFD simulation. Consequently, the networks were capable of representing the underlying physics of the CFD models and could be executed efficiently for a large number of iterations as required by optimisation techniques based on Evolutionary Algorithms (EAs). Four operating parameters of the burner, namely the swirl angles and flow rates of the secondary and tertiary combustion air, were optimised with the objective of minimising the NOx and CO emissions as well as the unburned carbon at the furnace exit. The results suggest that ANNs combined with EAs provide a useful tool for optimising co-combustion processes.

  18. Analysis of a No Equilibrium Linear Resistive-Capacitive-Inductance Shunted Junction Model, Dynamics, Synchronization, and Application to Digital Cryptography in Its Fractional-Order Form

    Directory of Open Access Journals (Sweden)

    Sifeu Takougang Kingni

    2017-01-01

    Full Text Available A linear resistive-capacitive-inductance shunted junction (LRCLSJ model obtained by replacing the nonlinear piecewise resistance of a nonlinear resistive-capacitive-inductance shunted junction (NRCLSJ model by a linear resistance is analyzed in this paper. The LRCLSJ model has two or no equilibrium points depending on the dc bias current. For a suitable choice of the parameters, the LRCLSJ model without equilibrium point can exhibit regular and fast spiking, intrinsic and periodic bursting, and periodic and chaotic behaviors. We show that the LRCLSJ model displays similar dynamical behaviors as the NRCLSJ model. Moreover the coexistence between periodic and chaotic attractors is found in the LRCLSJ model for specific parameters. The lowest order of the commensurate form of the no equilibrium LRCLSJ model to exhibit chaotic behavior is found to be 2.934. Moreover, adaptive finite-time synchronization with parameter estimation is applied to achieve synchronization of unidirectional coupled identical fractional-order form of chaotic no equilibrium LRCLSJ models. Finally, a cryptographic encryption scheme with the help of the finite-time synchronization of fractional-order chaotic no equilibrium LRCLSJ models is illustrated through a numerical example, showing that a high level security device can be produced using this system.

  19. Equilibrium Solubility of CO2 in Alkanolamines

    DEFF Research Database (Denmark)

    Waseem Arshad, Muhammad; Fosbøl, Philip Loldrup; von Solms, Nicolas

    2014-01-01

    The equilibrium solubility of CO2 was measured in aqueous solutions of monoethanolamine (MEA) and N,N-diethylethanolamine (DEEA). Equilibrium cells are generally used for these measurements. In this study, the equilibrium data were measured by calorimetry. For this purpose a reaction calorimeter...... (model CPA 122 from ChemiSens AB, Sweden) was used. The advantage of this method is that both the heat of absorption and the equilibrium solubility of CO2 are measured at the same time. The measurements were performed for 30 mass % MEA and 5M DEEA solutions as a function of CO2 loading at three...... different temperatures, 40, 80 and 120 ºC. The measured 30 mass % MEA and 5M DEEA data were compared with literature data obtained from different equilibrium cells, which validated the use of calorimeters for equilibrium solubility measurements....

  20. A comparative study of marriage in honey bees optimisation (MBO ...

    African Journals Online (AJOL)

    2012-02-15

    Feb 15, 2012 ... In a typical mating, the queen mates with 7 to 20 drones. Each time the .... Honey bee mating optimisation model's pseudo-code ... for this analysis, which consists of 47 years of monthly time ... tive of Karkheh Reservoir is to control and regulate the flow of ..... Masters thesis, Maastricht University, Maastricht.

  1. Plasma equilibrium and stability in stellarators

    International Nuclear Information System (INIS)

    Pustovitov, V.D.; Shafranov, V.D.

    1987-01-01

    A review of theoretical methods for investigating plasma equilibrium and stability in stellarators is given. The principles underlying toroidal plasma equilibrium and its stabilization, and the main results of analytical theory and numerical calculations, are presented. Configurations with helical symmetry and conventional stellarators with a planar axis and helical fields are considered in detail. The derivation of the scalar two-dimensional equations describing equilibrium in these systems is given. These equations were used to obtain one-dimensional equations for the displacement and ellipticity of the magnetic surfaces. The model of weakly elliptic displaced surfaces was used to consider the evolution of plasma equilibrium in stellarators after an increase of the plasma pressure: the change of the rotational transform profile following a change in plasma pressure, and the generation of current during fast heating and its subsequent decay due to finite plasma conductivity, were described. The derivation of the equations of small oscillations in a form suitable for investigating local disturbances is presented. These equations were used to obtain the Mercier criteria and the ballooning-mode equations. General sufficient conditions for plasma stability in systems with magnetic confinement were derived

  2. An Agent-Based Model for Optimization of Road Width and Public Transport Frequency

    Directory of Open Access Journals (Sweden)

    Mark E. Koryagin

    2015-04-01

    Full Text Available An urban passenger transportation problem is studied. Municipal authorities and passengers are regarded as participants in the passenger transportation system. The municipal authorities have to optimise road width and public transport frequency. The road consists of a dedicated bus lane and lanes for passenger cars. The car travel time depends on the number of road lanes and passengers’ choice of travel mode. The passengers’ goal is to minimize total travel costs, including time value. The passengers try to find the optimal ratio between public transport and cars. The conflict between municipal authorities and the passengers is described as a game theoretic model. The existence of Nash equilibrium in the model is proved. The numerical example shows the influence of the value of time and intensity of passenger flow on the equilibrium road width and public transport frequency.
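
    A toy version of the interaction described here, assuming a logit mode split and simple congestion and waiting-time functions; the cost figures, functional forms and grid search over frequency are illustrative assumptions, not the paper's game-theoretic formulation.

    ```python
    import numpy as np

    DEMAND = 2000.0            # passengers per hour (hypothetical)
    VALUE_OF_TIME = 10.0       # monetary units per hour
    BUS_COST_PER_DEP = 50.0    # operating cost per bus departure

    def car_time(car_share):
        return 0.3 + 0.4 * (car_share * DEMAND / 1800.0) ** 2   # BPR-like congestion

    def bus_time(freq):
        return 0.5 + 0.5 / freq                                  # ride plus expected wait

    def passenger_split(freq, theta=4.0):
        """Logit share choosing the bus, iterated to a fixed point (user equilibrium)."""
        share = 0.5
        for _ in range(100):
            u_bus, u_car = -bus_time(freq), -car_time(1.0 - share)
            share = np.exp(theta * u_bus) / (np.exp(theta * u_bus) + np.exp(theta * u_car))
        return share

    def total_cost(freq):
        s = passenger_split(freq)
        time_cost = DEMAND * (s * bus_time(freq) + (1 - s) * car_time(1 - s)) * VALUE_OF_TIME
        return time_cost + BUS_COST_PER_DEP * freq

    freqs = np.arange(2, 31)
    best = freqs[np.argmin([total_cost(f) for f in freqs])]
    print("equilibrium bus frequency ~", best, "departures/h")
    ```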

  3. Comparison of the Marcus and Pekar partitions in the context of non-equilibrium, polarizable-continuum solvation models

    International Nuclear Information System (INIS)

    You, Zhi-Qiang; Herbert, John M.; Mewes, Jan-Michael; Dreuw, Andreas

    2015-01-01

    The Marcus and Pekar partitions are common, alternative models to describe the non-equilibrium dielectric polarization response that accompanies instantaneous perturbation of a solute embedded in a dielectric continuum. Examples of such a perturbation include vertical electronic excitation and vertical ionization of a solution-phase molecule. Here, we provide a general derivation of the accompanying polarization response, for a quantum-mechanical solute described within the framework of a polarizable continuum model (PCM) of electrostatic solvation. Although the non-equilibrium free energy is formally equivalent within the two partitions, albeit partitioned differently into “fast” versus “slow” polarization contributions, discretization of the PCM integral equations fails to preserve certain symmetries contained in these equations (except in the case of the conductor-like models or when the solute cavity is spherical), leading to alternative, non-equivalent matrix equations. Unlike the total equilibrium solvation energy, however, which can differ dramatically between different formulations, we demonstrate that the equivalence of the Marcus and Pekar partitions for the non-equilibrium solvation correction is preserved to high accuracy. Differences in vertical excitation and ionization energies are <0.2 eV (and often <0.01 eV), even for systems specifically selected to afford a large polarization response. Numerical results therefore support the interchangeability of the Marcus and Pekar partitions, but also caution against relying too much on the fast PCM charges for interpretive value, as these charges differ greatly between the two partitions, especially in polar solvents

  4. Comparison of the Marcus and Pekar partitions in the context of non-equilibrium, polarizable-continuum solvation models

    Energy Technology Data Exchange (ETDEWEB)

    You, Zhi-Qiang; Herbert, John M., E-mail: herbert@chemistry.ohio-state.edu [Department of Chemistry and Biochemistry, The Ohio State University, Columbus, Ohio 43210 (United States); Mewes, Jan-Michael; Dreuw, Andreas [Interdisciplinary Center for Scientific Computing, Ruprechts-Karls University, Im Neuenheimer Feld 368, 69120 Heidelberg (Germany)

    2015-11-28

    The Marcus and Pekar partitions are common, alternative models to describe the non-equilibrium dielectric polarization response that accompanies instantaneous perturbation of a solute embedded in a dielectric continuum. Examples of such a perturbation include vertical electronic excitation and vertical ionization of a solution-phase molecule. Here, we provide a general derivation of the accompanying polarization response, for a quantum-mechanical solute described within the framework of a polarizable continuum model (PCM) of electrostatic solvation. Although the non-equilibrium free energy is formally equivalent within the two partitions, albeit partitioned differently into “fast” versus “slow” polarization contributions, discretization of the PCM integral equations fails to preserve certain symmetries contained in these equations (except in the case of the conductor-like models or when the solute cavity is spherical), leading to alternative, non-equivalent matrix equations. Unlike the total equilibrium solvation energy, however, which can differ dramatically between different formulations, we demonstrate that the equivalence of the Marcus and Pekar partitions for the non-equilibrium solvation correction is preserved to high accuracy. Differences in vertical excitation and ionization energies are <0.2 eV (and often <0.01 eV), even for systems specifically selected to afford a large polarization response. Numerical results therefore support the interchangeability of the Marcus and Pekar partitions, but also caution against relying too much on the fast PCM charges for interpretive value, as these charges differ greatly between the two partitions, especially in polar solvents.

  5. Entropy analysis on non-equilibrium two-phase flow models

    International Nuclear Information System (INIS)

    Karwat, H.; Ruan, Y.Q.

    1995-01-01

    A method of entropy analysis according to the second law of thermodynamics is proposed for the assessment of a class of practical non-equilibrium two-phase flow models. Entropy conditions are derived directly from a local instantaneous formulation for an arbitrary control volume of a structural two-phase fluid, which are finally expressed in terms of the averaged thermodynamic independent variables and their time derivatives as well as the boundary conditions for the volume. On the basis of a widely used thermal-hydraulic system code it is demonstrated with practical examples that entropy production rates in control volumes can be numerically quantified by using the data from the output data files. Entropy analysis using the proposed method is useful in identifying some potential problems in two-phase flow models and predictions as well as in studying the effects of some free parameters in closure relationships

  6. Entropy analysis on non-equilibrium two-phase flow models

    Energy Technology Data Exchange (ETDEWEB)

    Karwat, H.; Ruan, Y.Q. [Technische Universitaet Muenchen, Garching (Germany)

    1995-09-01

    A method of entropy analysis according to the second law of thermodynamics is proposed for the assessment of a class of practical non-equilibrium two-phase flow models. Entropy conditions are derived directly from a local instantaneous formulation for an arbitrary control volume of a structural two-phase fluid, which are finally expressed in terms of the averaged thermodynamic independent variables and their time derivatives as well as the boundary conditions for the volume. On the basis of a widely used thermal-hydraulic system code it is demonstrated with practical examples that entropy production rates in control volumes can be numerically quantified by using the data from the output data files. Entropy analysis using the proposed method is useful in identifying some potential problems in two-phase flow models and predictions as well as in studying the effects of some free parameters in closure relationships.

  7. Statistical thermodynamics of equilibrium polymers at interfaces

    NARCIS (Netherlands)

    Gucht, van der J.; Besseling, N.A.M.

    2002-01-01

    The behavior of a solution of equilibrium polymers (or living polymers) at an interface is studied, using a Bethe-Guggenheim lattice model for molecules with orientation dependent interactions. The density profile of polymers and the chain length distribution are calculated. For equilibrium polymers

  8. Chaos in a dynamic model of urban transportation network flow based on user equilibrium states

    International Nuclear Information System (INIS)

    Xu Meng; Gao Ziyou

    2009-01-01

    In this study, we investigate the dynamical behavior of network traffic flow. We first build a two-stage mathematical model to analyze the complex behavior of network flow. In the first stage, a dynamical model based on the dynamical gravity model proposed by Dendrinos and Sonis [Dendrinos DS, Sonis M. Chaos and social-spatial dynamic. Berlin: Springer-Verlag; 1990] is used to estimate the number of trips. Considering the fact that the Origin-Destination (O-D) trip cost in the traffic network is hard to express in a functional form, in the second stage the user equilibrium network assignment model is used to estimate the trip cost, which is the minimum cost over the used paths when user equilibrium (UE) conditions are satisfied. It is important to use UE to estimate the O-D cost, since it establishes a connection among link flows, path flows, and O-D flows. The dynamical model describes the variations of O-D flows over discrete time periods, such as each day or each week. It is shown that even in a system with only two dimensions, the chaos phenomenon still exists. A 'Chaos Propagation' phenomenon is found in the given model.
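
    A one-dimensional stand-in for the day-to-day O-D flow dynamics, showing the sensitive dependence on initial conditions that characterises chaotic behaviour; the logistic-type map and its parameter are illustrative assumptions, not the paper's two-stage model.

    ```python
    # The share x_t of demand on one O-D relation is updated from period to period;
    # a logistic-type map stands in for the day-to-day adjustment process.
    def update(x, alpha=3.9):
        return alpha * x * (1.0 - x)

    x, y = 0.300000, 0.300001        # two nearly identical initial flow shares
    for _ in range(40):
        x, y = update(x), update(y)
    print("after 40 periods: %.4f vs %.4f" % (x, y))   # the trajectories have diverged
    ```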

  9. Removal of semivolatiles from soils by steam stripping. 1. A local equilibrium model

    International Nuclear Information System (INIS)

    Wilson, D.J.; Clarke, A.N.

    1992-01-01

    A mathematical model for the in-situ steam stripping of volatile and semivolatile organics from contaminated vadose zone soils at hazardous waste sites is developed. A single steam injection well is modeled. The model assumes that the pneumatic permeability of the soil is spatially constant and isotropic, that the adsorption isotherm of the contaminant is linear, and that the local equilibrium approximation is adequate. The model is used to explore the streamlines and transit times of the injected steam as well as the effects of injection well depth and contaminant distribution on the time required for remediation

  10. Establishing Local Reference Dose Values and Optimisation Strategies

    International Nuclear Information System (INIS)

    Connolly, P.; Moores, B.M.

    2000-01-01

    The revised EC Patient Directive 97/43 EURATOM introduces the concepts of clinical audit, diagnostic reference levels and optimisation of radiation protection in diagnostic radiology. The application of reference dose levels in practice involves the establishment of reference dose values as actual measurable operational quantities. These values should then form part of an ongoing optimisation and audit programme against which routine performance can be compared. The CEC Quality Criteria for Radiographic Images provide guideline reference dose values against which local performance can be compared. In many cases these values can be improved upon quite considerably. This paper presents the results of a local initiative in the North West of the UK aimed at establishing local reference dose values for a number of major hospital sites. The purpose of this initiative is to establish a foundation for both optimisation strategies and clinical audit as an ongoing and routine practice. The paper presents results from an ongoing trial involving patient dose measurements for several radiological examinations at the sites. The results of an attempt to establish local reference dose values from measured dose values and to employ them in optimisation strategies are presented. In particular, emphasis is placed on the routine quality control programmes necessary to underpin this strategy, including the effective management of data from such programmes and how they can be employed in optimisation practices. (author)
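
    Local reference dose values are commonly set at the third quartile of measured patient doses; that convention, and the dose figures below, are assumptions used only to sketch how a local reference value might be derived and compared against, not the trial's actual data.

    ```python
    import numpy as np

    # Hypothetical entrance surface doses (mGy) for one examination across several rooms
    doses = np.array([1.8, 2.1, 2.4, 1.6, 3.0, 2.2, 2.7, 1.9, 2.5, 2.0])

    local_drl = np.percentile(doses, 75)          # 75th-percentile convention (assumed)
    print("local reference dose value: %.2f mGy" % local_drl)
    print("rooms above the reference value:", np.count_nonzero(doses > local_drl))
    ```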

  11. Near-wall extension of a non-equilibrium, omega-based Reynolds stress model

    International Nuclear Information System (INIS)

    Nguyen, Tue; Behr, Marek; Reinartz, Birgit

    2011-01-01

    In this paper, the development of a new ω-based Reynolds stress model that is consistent with asymptotic analysis in the near wall region and with rapid distortion theory in homogeneous turbulence is reported. The model is based on the SSG/LRR-ω model developed by Eisfeld (2006) with three main modifications. Firstly, the near wall behaviors of the redistribution, dissipation and diffusion terms are modified according to the asymptotic analysis and a new blending function based on low Reynolds number is proposed. Secondly, an anisotropic dissipation tensor based on the Reynolds stress inhomogeneity (Jakirlic et al., 2007) is used instead of the original isotropic model. Lastly, the SSG redistribution term, which is activated far from the wall, is replaced by Speziale's non-equilibrium model (Speziale, 1998).

  12. For Time-Continuous Optimisation

    DEFF Research Database (Denmark)

    Heinrich, Mary Katherine; Ayres, Phil

    2016-01-01

    Strategies for optimisation in design normatively assume an artefact end-point, disallowing continuous architecture that engages living systems, dynamic behaviour, and complex systems. In our Flora Robotica investigations of symbiotic plant-robot bio-hybrids, we require computational tools...

  13. Statistical equilibrium calculations for silicon in early-type model stellar atmospheres

    International Nuclear Information System (INIS)

    Kamp, L.W.

    1976-02-01

    Line profiles of 36 multiplets of silicon (Si) II, III, and IV were computed for a grid of model atmospheres covering the range from 15,000 to 35,000 K in effective temperature and 2.5 to 4.5 in log (gravity). The computations involved simultaneous solution of the steady-state statistical equilibrium equations for the populations and of the equation of radiative transfer in the lines. The variables were linearized, and successive corrections were computed until a minimal accuracy of 1/1000 in the line intensities was reached. The common assumption of local thermodynamic equilibrium (LTE) was dropped. The model atmospheres used also were computed by non-LTE methods. Some effects that were incorporated into the calculations were the depression of the continuum by free electrons, hydrogen and ionized helium line blocking, and auto-ionization and dielectronic recombination, which later were found to be insignificant. Use of radiation damping and detailed electron (quadratic Stark) damping constants had small but significant effects on the strong resonance lines of Si III and IV. For weak and intermediate-strength lines, large differences with respect to LTE computations, the results of which are also presented, were found in line shapes and strengths. For the strong lines the differences are generally small, except for the models at the hot, low-gravity extreme of the range. These computations should be useful in the interpretation of the spectra of stars in the spectral range B0--B5, luminosity classes III, IV, and V

  14. Flow optimisation of a biomass mixer; Stroemungstechnische Optimierung eines Biomasse-Ruehrwerks

    Energy Technology Data Exchange (ETDEWEB)

    Casartelli, E.; Waser, R. [Hochschule fuer Technik und Architektur Luzern (HTA), Horw (Switzerland); Fankhauser, H. [Fankhauser Maschinenfabrik, Malters (Switzerland)

    2007-03-15

    This illustrated final report for the Swiss Federal Office of Energy (SFOE) reports on the optimisation of a mixing system used in biomass reactors. The aim of this work was to improve the fluid dynamic qualities of the mixer in order to increase its efficiency while, at the same time, maintaining robustness and low price. Investigative work performed with CFD (Computational Fluid Dynamics) is reported on. CFD is quoted by the authors as being very effective in solving such optimisation problems as it is suited to flows that are not easily accessible for analysis. Experiments were performed on a fermenter / mixer model in order to confirm the computational findings. The results obtained with two and three-dimensional simulations are presented and discussed, as are those resulting from the tests with the 1:10 scale model of a digester. Initial tests with the newly developed mixer-propellers in a real-life biogas installation are reported on and further tests to be made are listed.

  15. Risk Route Choice Analysis and the Equilibrium Model under Anticipated Regret Theory

    Directory of Open Access Journals (Sweden)

    pengcheng yuan

    2014-02-01

    Full Text Available The assumption about travellers' route choice behaviour has a major influence on the traffic flow equilibrium analysis. Previous studies of travellers' route choice were mainly based on expected utility maximization theory. However, with the gradually increasing knowledge about the uncertainty of the transportation system, researchers have realized that expected utility maximization theory has important limitations, because it requires travellers to be 'absolutely rational'; in fact, travellers are not truly 'absolutely rational'. The anticipated regret theory proposes an alternative framework to the traditional treatment of risk-taking in route choice behaviour which might be more scientific and reasonable. We have applied the anticipated regret theory to the analysis of the risky route choosing process, and constructed an anticipated regret utility function. Using a simple case with two parallel routes, the route choice results as influenced by the risk aversion degree, the regret degree and the environment risk degree have been analyzed. Moreover, the user equilibrium model based on the anticipated regret theory has been established. The equivalence and the uniqueness of the model are proved; an efficacious algorithm is also proposed to solve the model. Both the model and the algorithm are demonstrated in a real network. In an experiment, the model results and the real data have been compared. It was found that the model results can be similar to the real data if a proper regret degree parameter is selected. This illustrates that the model can better explain risky route choosing behaviour. Moreover, it was also found that the traveller's regret degree increases when the environment becomes more and more risky.
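
    A hedged sketch of an anticipated-regret comparison of two routes; the travel-time distributions, the linear regret penalty and the regret weight are illustrative assumptions, not the utility function constructed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Two parallel routes with uncertain travel times (minutes); numbers are hypothetical
    samples = 10_000
    t_reliable = rng.normal(30.0, 2.0, samples)   # stable route
    t_risky = rng.normal(27.0, 8.0, samples)      # faster on average, but volatile

    def anticipated_disutility(chosen, other, regret_weight=0.8):
        """Expected travel time plus a penalty on the anticipated regret of the choice."""
        regret = np.maximum(chosen - other, 0.0)  # extra time versus the forgone route
        return chosen.mean() + regret_weight * regret.mean()

    for name, a, b in [("reliable", t_reliable, t_risky), ("risky   ", t_risky, t_reliable)]:
        print("%s  mean time %.1f min  regret-adjusted %.1f min"
              % (name, a.mean(), anticipated_disutility(a, b)))
    ```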

  16. Modelling study, efficiency analysis and optimisation of large-scale Adiabatic Compressed Air Energy Storage systems with low-temperature thermal storage

    International Nuclear Information System (INIS)

    Luo, Xing; Wang, Jihong; Krupke, Christopher; Wang, Yue; Sheng, Yong; Li, Jian; Xu, Yujie; Wang, Dan; Miao, Shihong; Chen, Haisheng

    2016-01-01

    Highlights: • The paper presents an A-CAES system thermodynamic model with low-temperature thermal energy storage integration. • The initial parameter value ranges for A-CAES system simulation are identified from the study of a CAES plant in operation. • The strategies for system efficiency improvement are investigated via a parametric study with a sensitivity analysis. • Various system configurations are discussed for analysing the efficiency improvement potential. - Abstract: The key feature of Adiabatic Compressed Air Energy Storage (A-CAES) is the reuse of the heat generated in the air compression process at the stage of air expansion. This increases the complexity of the whole system, since the heat exchange and thermal storage units must have the capacities and performance to match the air compression/expansion units. This raises a strong demand for a whole-system modelling and simulation tool for A-CAES system optimisation. The paper presents a new whole-system mathematical model for A-CAES with a simulation implementation; the model is developed with a view to lowering the capital cost of the system. The paper then focuses on the study of system efficiency improvement strategies via parametric analysis and system structure optimisation. The paper investigates how the system efficiency is affected by the performance and parameters of the system components. From the study, the key parameters that have the dominant influence on improving the system efficiency are identified. The study is extended to the optimal system configuration and recommendations are made for achieving higher efficiency, which provides useful guidance for A-CAES system design.

  17. CFD optimisation of a stadium roof geometry: a qualitative study to improve the wind microenvironment

    Directory of Open Access Journals (Sweden)

    Sofotasiou Polytimi

    2017-01-01

    Full Text Available The complexity of the built environment requires the adoption of coupled techniques to predict the flow phenomena and provide optimum design solutions. In this study, coupled computational fluid dynamics (CFD and response surface methodology (RSM optimisation tools are employed to investigate the parameters that determine the wind comfort in a two-dimensional stadium model, by optimising the roof geometry. The roof height, width and length are evaluated against the flow homogeneity at the spectator terraces and the playing field area, the roof flow rate and the average interior pressure. Based on non-parametric regression analysis, both symmetric and asymmetric configurations are considered for optimisation. The optimum design solutions revealed that it is achievable to provide an improved wind environment in both playing field area and spectator terraces, giving a further insight on the interrelations of the parameters involved. Considering the limitations of conducting a two-dimensional study, the obtained results may beneficially be used as a basis for the optimisation of a complex three-dimensional stadium structure and thus become an important design guide for stadium structures.

  18. Revealing patterns of cultural transmission from frequency data: equilibrium and non-equilibrium assumptions

    Science.gov (United States)

    Crema, Enrico R.; Kandler, Anne; Shennan, Stephen

    2016-12-01

    A long tradition of cultural evolutionary studies has developed a rich repertoire of mathematical models of social learning. Early studies have laid the foundation of more recent endeavours to infer patterns of cultural transmission from observed frequencies of a variety of cultural data, from decorative motifs on potsherds to baby names and musical preferences. While this wide range of applications provides an opportunity for the development of generalisable analytical workflows, archaeological data present new questions and challenges that require further methodological and theoretical discussion. Here we examine the decorative motifs of Neolithic pottery from an archaeological assemblage in Western Germany, and argue that the widely used (and relatively undiscussed) assumption that observed frequencies are the result of a system in equilibrium conditions is unwarranted, and can lead to incorrect conclusions. We analyse our data with a simulation-based inferential framework that can overcome some of the intrinsic limitations in archaeological data, as well as handle both equilibrium conditions and instances where the mode of cultural transmission is time-variant. Results suggest that none of the models examined can produce the observed pattern under equilibrium conditions, and suggest instead temporal shifts in the patterns of cultural transmission.

  19. Optimisation of process parameters on thin shell part using response surface methodology (RSM)

    Science.gov (United States)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.

    2017-09-01

    This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the inputs in order to analyse the warpage value, which is the output in this study. The significant parameters used are melt temperature, mould temperature, packing pressure and cooling time. A plastic part made of polypropylene (PP) has been selected as the study part. Optimisation of the process parameters is carried out in Design Expert software with the aim of minimising the warpage value. Response Surface Methodology (RSM) has been applied in this study together with Analysis of Variance (ANOVA) in order to investigate the interactions between the parameters that are significant to the warpage value. Thus, an optimised warpage value can be obtained from the model designed with RSM owing to its minimal error. The study shows that the warpage value is improved by using RSM.
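
    A minimal response-surface sketch: a second-order polynomial is fitted to hypothetical design-of-experiments results and its minimum located on a grid of candidate settings; the factors, data and model form are assumptions for illustration, not the Design Expert/AMI workflow used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical DOE results: melt temperature (degC), packing pressure (MPa) -> warpage (mm)
    T = rng.uniform(200, 240, 20)
    P = rng.uniform(60, 100, 20)
    warpage = 0.002 * (T - 225) ** 2 + 0.001 * (P - 80) ** 2 + 0.3 + rng.normal(0, 0.02, 20)

    # Second-order response surface: w ~ b0 + b1*T + b2*P + b3*T^2 + b4*P^2 + b5*T*P
    X = np.column_stack([np.ones_like(T), T, P, T ** 2, P ** 2, T * P])
    beta, *_ = np.linalg.lstsq(X, warpage, rcond=None)

    # Locate the minimum of the fitted surface on a grid of candidate settings
    Tg, Pg = np.meshgrid(np.linspace(200, 240, 81), np.linspace(60, 100, 81))
    Wg = (beta[0] + beta[1] * Tg + beta[2] * Pg
          + beta[3] * Tg ** 2 + beta[4] * Pg ** 2 + beta[5] * Tg * Pg)
    i = np.unravel_index(np.argmin(Wg), Wg.shape)
    print("optimised settings: T=%.0f degC, P=%.0f MPa, predicted warpage %.3f mm"
          % (Tg[i], Pg[i], Wg[i]))
    ```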

  20. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    Science.gov (United States)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    General strategic bidding procedure has been formulated in the literature as a bi-level searching problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex and hence, the researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses great challenge to the classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14 as well as IEEE 30 bus systems and the performance is compared against differential evolution-based strategic bidding, genetic algorithm-based strategic bidding and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the obtained profit maximisation through GSO-based bidding strategies is higher than the other three methods.

  1. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    Science.gov (United States)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ε-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ε-constraint method is presented to allow an efficient solution of problems in which two or more performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
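
    A hedged illustration of the ε-constraint idea on a toy bi-objective problem: one objective is minimised while the other is bounded by a sweep of ε values, tracing an approximate Pareto front; the objectives and solver settings are assumptions, not the PSD-based formulation of the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Two competing trajectory objectives (stand-ins for, e.g., fuel and time)
    f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
    f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

    pareto = []
    for eps in np.linspace(0.05, 1.0, 8):                            # epsilon sweep
        cons = {"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}   # enforce f2(x) <= eps
        res = minimize(f1, x0=[0.5, 0.5], constraints=[cons])
        pareto.append((f1(res.x), f2(res.x)))

    for a, b in pareto:
        print("f1=%.3f  f2=%.3f" % (a, b))
    ```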

  2. The onset of double diffusive convection in a viscoelastic fluid-saturated porous layer with non-equilibrium model.

    Directory of Open Access Journals (Sweden)

    Zhixin Yang

    Full Text Available The onset of double diffusive convection in a viscoelastic fluid-saturated porous layer is studied when the fluid and solid phases are not in local thermal equilibrium. The modified Darcy model is used for the momentum equation and a two-field model is used for the energy equation, with separate fields representing the fluid and solid phases. The effect of thermal non-equilibrium on the onset of double diffusive convection is discussed. The critical Rayleigh number and the corresponding wave number for the exchange of stability and over-stability are obtained, and the onset criterion for stationary and oscillatory convection is derived analytically and discussed numerically.

  3. 3D printed fluidics with embedded analytic functionality for automated reaction optimisation.

    Science.gov (United States)

    Capel, Andrew J; Wright, Andrew; Harding, Matthew J; Weaver, George W; Li, Yuqi; Harris, Russell A; Edmondson, Steve; Goodridge, Ruth D; Christie, Steven D R

    2017-01-01

    Additive manufacturing or '3D printing' is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis.

  4. 3D printed fluidics with embedded analytic functionality for automated reaction optimisation

    Directory of Open Access Journals (Sweden)

    Andrew J. Capel

    2017-01-01

    Full Text Available Additive manufacturing or ‘3D printing’ is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis.

  5. 3D printed fluidics with embedded analytic functionality for automated reaction optimisation

    Science.gov (United States)

    Capel, Andrew J; Wright, Andrew; Harding, Matthew J; Weaver, George W; Li, Yuqi; Harris, Russell A; Edmondson, Steve; Goodridge, Ruth D

    2017-01-01

    Additive manufacturing or ‘3D printing’ is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis. PMID:28228852

  6. Equilibrium and non-equilibrium phenomena in arcs and torches

    NARCIS (Netherlands)

    Mullen, van der J.J.A.M.

    2000-01-01

    A general treatment of non-equilibrium plasma aspects is obtained by relating transport fluxes to equilibrium restoring processes in so-called disturbed Bilateral Relations. The (non-)equilibrium state of a small microwave-induced plasma serves as a case study.

  7. User perspectives in public transport timetable optimisation

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2014-01-01

    The present paper deals with timetable optimisation from the perspective of minimising the waiting time experienced by passengers when transferring either to or from a bus. Due to its inherent complexity, this bi-level minimisation problem is extremely difficult to solve mathematically, since tim...... on the large-scale public transport network in Denmark. The timetable optimisation approach yielded a yearly reduction in weighted waiting time equivalent to approximately 45 million Danish kroner (9 million USD)....
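
    A minimal sketch of the underlying quantity being minimised: the expected transfer waiting time as a function of a bus departure offset within the headway; the headway, arrival spread and walking time are illustrative assumptions, not the Danish network data or the bi-level solution method.

    ```python
    import numpy as np

    HEADWAY = 20                                # minutes between departures on both lines
    train_arrivals = np.array([2, 7, 13])       # arrival minutes within the headway (hypothetical)
    arrival_weights = np.array([0.5, 0.3, 0.2]) # share of transferring passengers per arrival
    WALK = 3                                    # minutes from platform to bus stop

    def mean_wait(bus_offset):
        ready = (train_arrivals + WALK) % HEADWAY        # minute passengers reach the stop
        wait = (bus_offset - ready) % HEADWAY            # wait until the next bus departure
        return float(np.dot(arrival_weights, wait))

    offsets = np.arange(HEADWAY)
    best = offsets[np.argmin([mean_wait(o) for o in offsets])]
    print("best bus offset: minute %d, mean transfer wait %.1f min" % (best, mean_wait(best)))
    ```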

  8. Optimising of Steel Fiber Reinforced Concrete Mix Design | Beddar ...

    African Journals Online (AJOL)

    Optimising of Steel Fiber Reinforced Concrete Mix Design. ... as a result of the loss of mixture workability that will be translated into a difficult concrete casting in site. ... An experimental study of an optimisation method of fibres in reinforced ...

  9. Equilibrium sampling by reweighting nonequilibrium simulation trajectories.

    Science.gov (United States)

    Yang, Cheng; Wan, Biao; Xu, Shun; Wang, Yanting; Zhou, Xin

    2016-03-01

    In equilibrium molecular simulations, it is usually difficult to efficiently visit the whole conformational space of complex systems, which are separated into metastable regions by high free energy barriers. Nonequilibrium simulations can enhance transitions among these metastable regions and then be applied to sample equilibrium distributions in complex systems, since the associated nonequilibrium effects can be removed by employing the Jarzynski equality (JE). Here we present such a systematic method, named reweighted nonequilibrium ensemble dynamics (RNED), to efficiently sample equilibrium conformations. The RNED is a combination of the JE and our previous reweighted ensemble dynamics (RED) method. The original JE reproduces equilibrium from many nonequilibrium trajectories but requires that the initial distribution of these trajectories be the equilibrium one. The RED reweights many equilibrium trajectories from an arbitrary initial distribution to obtain the equilibrium distribution, whereas the RNED combines the advantages of the two methods, reproducing equilibrium from many nonequilibrium simulation trajectories with an arbitrary initial conformational distribution. We illustrated the application of the RNED in a toy model and in a Lennard-Jones fluid to detect its liquid-solid phase coexistence. The results indicate that the RNED substantially extends the applicability of both the original JE and the RED to equilibrium sampling of complex systems.
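
    A toy illustration of Jarzynski-style reweighting of nonequilibrium trajectories: work-biased weights recover an equilibrium-style average and a free-energy estimate; the synthetic work and observable values are assumptions and do not reproduce the RNED method itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    beta = 1.0

    # Hypothetical nonequilibrium trajectories: each has an accumulated work W and a
    # final-state observable A; the Jarzynski weight exp(-beta*W) removes the driving bias.
    W = rng.normal(2.0, 1.0, 5000)              # work values (illustrative)
    A = rng.normal(0.0, 1.0, 5000) + 0.3 * W    # observable correlated with the driving

    weights = np.exp(-beta * (W - W.min()))     # shift W for numerical stability
    A_eq = np.sum(weights * A) / np.sum(weights)
    dF = -np.log(np.mean(np.exp(-beta * W))) / beta   # Jarzynski free-energy estimate
    print("reweighted equilibrium-style average <A> = %.3f" % A_eq)
    print("free-energy estimate dF                  = %.3f" % dF)
    ```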

  10. Model uncertainties of local-thermodynamic-equilibrium K-shell spectroscopy

    Science.gov (United States)

    Nagayama, T.; Bailey, J. E.; Mancini, R. C.; Iglesias, C. A.; Hansen, S. B.; Blancard, C.; Chung, H. K.; Colgan, J.; Cosse, Ph.; Faussurier, G.; Florido, R.; Fontes, C. J.; Gilleron, F.; Golovkin, I. E.; Kilcrease, D. P.; Loisel, G.; MacFarlane, J. J.; Pain, J.-C.; Rochau, G. A.; Sherrill, M. E.; Lee, R. W.

    2016-09-01

    Local-thermodynamic-equilibrium (LTE) K-shell spectroscopy is a common tool to diagnose the electron density, ne, and electron temperature, Te, of high-energy-density (HED) plasmas. Knowing the accuracy of such diagnostics is important for drawing quantitative conclusions from many HED-plasma research efforts. For example, Fe opacities were recently measured at multiple conditions at the Sandia National Laboratories Z machine (Bailey et al., 2015), showing significant disagreement with modeled opacities. Since the plasma conditions were measured using K-shell spectroscopy of tracer Mg (Nagayama et al., 2014), one concern is the accuracy of the inferred Fe conditions. In this article, we investigate the K-shell spectroscopy model uncertainties by analyzing the Mg spectra computed with 11 different models at the same conditions. We find that the inferred conditions differ by ±20-30% in ne and ±2-4% in Te depending on the choice of spectral model. Also, we find that half of the Te uncertainty comes from the ne uncertainty. To refine the accuracy of K-shell spectroscopy, it is important to scrutinize and experimentally validate line-shape theory. We investigate the impact of the inferred ne and Te model uncertainty on the Fe opacity measurements. Its impact is small and does not explain the reported discrepancies.

  11. Warm-fluid description of intense beam equilibrium and electrostatic stability properties

    International Nuclear Information System (INIS)

    Lund, S.M.; Davidson, R.C.

    1998-01-01

    A nonrelativistic warm-fluid model is employed in the electrostatic approximation to investigate the equilibrium and stability properties of an unbunched, continuously focused intense ion beam. A closed macroscopic model is obtained by truncating the hierarchy of moment equations by the assumption of negligible heat flow. Equations describing self-consistent fluid equilibria are derived and elucidated with examples corresponding to thermal equilibrium, the Kapchinskij–Vladimirskij (KV) equilibrium, and the waterbag equilibrium. Linearized fluid equations are derived that describe the evolution of small-amplitude perturbations about an arbitrary equilibrium. Electrostatic stability properties are analyzed in detail for a cold beam with step-function density profile, and then for axisymmetric flute perturbations with ∂/∂θ=0 and ∂/∂z=0 about a warm-fluid KV beam equilibrium. The radial eigenfunction describing axisymmetric flute perturbations about the KV equilibrium is found to be identical to the eigenfunction derived in a full kinetic treatment. However, in contrast to the kinetic treatment, the warm-fluid model predicts stable oscillations. None of the instabilities that are present in a kinetic description are obtained in the fluid model. A careful comparison of the mode oscillation frequencies associated with the fluid and kinetic models is made in order to delineate which stability features of a KV beam are model-dependent and which may have general applicability. copyright 1998 American Institute of Physics
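
    As an illustration of the truncation described above, a generic electrostatic warm-fluid closure with scalar pressure and zero heat flux takes the following form (written here for orientation, not quoted from the paper):

```latex
\begin{align}
&\frac{\partial n}{\partial t} + \nabla\cdot(n\mathbf{V}) = 0, \\
&m n\left(\frac{\partial \mathbf{V}}{\partial t} + \mathbf{V}\cdot\nabla\mathbf{V}\right)
  = q n\left(\mathbf{E}_{\mathrm{foc}} - \nabla\phi\right) - \nabla P, \\
&\frac{\partial P}{\partial t} + \mathbf{V}\cdot\nabla P + \gamma P\,\nabla\cdot\mathbf{V} = 0
  \qquad \text{(zero heat-flux closure)}, \\
&\nabla^{2}\phi = -4\pi q n ,
\end{align}
```

    where $\mathbf{E}_{\mathrm{foc}}$ stands for the applied focusing field and $\gamma$ is the effective adiabatic index fixed by the chosen closure.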

  12. Answer Sets in a Fuzzy Equilibrium Logic

    Science.gov (United States)

    Schockaert, Steven; Janssen, Jeroen; Vermeir, Dirk; de Cock, Martine

    Since its introduction, answer set programming has been generalized in many directions, to cater to the needs of real-world applications. As one of the most general “classical” approaches, answer sets of arbitrary propositional theories can be defined as models in the equilibrium logic of Pearce. Fuzzy answer set programming, on the other hand, extends answer set programming with the capability of modeling continuous systems. In this paper, we combine the expressiveness of both approaches, and define answer sets of arbitrary fuzzy propositional theories as models in a fuzzification of equilibrium logic. We show that the resulting notion of answer set is compatible with existing definitions, when the syntactic restrictions of the corresponding approaches are met. We furthermore locate the complexity of the main reasoning tasks at the second level of the polynomial hierarchy. Finally, as an illustration of its modeling power, we show how fuzzy equilibrium logic can be used to find strong Nash equilibria.

  13. Designing and optimising anaerobic digestion systems: A multi-objective non-linear goal programming approach

    International Nuclear Information System (INIS)

    Nixon, J.D.

    2016-01-01

    This paper presents a method for optimising the design parameters of an anaerobic digestion (AD) system by using first-order kinetics and multi-objective non-linear goal programming. A model is outlined that determines the ideal operating tank temperature and hydraulic retention time, based on objectives for minimising levelised cost of electricity, and maximising energy potential and feedstock mass reduction. The model is demonstrated for a continuously stirred tank reactor processing food waste in two case study locations. These locations are used to investigate the influence of different environmental and economic climates on optimal conditions. A sensitivity analysis is performed to further examine the variation in optimal results for different financial assumptions and objective weightings. The results identify the conditions for the preferred tank temperature to be in the psychrophilic, mesophilic or thermophilic range. For a tank temperature of 35 °C, ideal hydraulic retention times, in terms of achieving a minimum levelised electricity cost, were found to range from 29.9 to 33 days. Whilst there is a need for more detailed information on rate constants for use in first-order models, multi-objective optimisation modelling is considered to be a promising option for AD design.
    Highlights:
    • Nonlinear goal programming is used to optimise anaerobic digestion systems.
    • Multiple objectives are set including minimising the levelised cost of electricity.
    • A model is developed and applied to case studies for the UK and India.
    • Optimal decisions are made for tank temperature and retention time.
    • A sensitivity analysis is carried out to investigate different model objectives.
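
    The decision structure, trading methane yield against temperature- and size-related costs over (tank temperature, HRT), can be sketched as below; the first-order CSTR yield expression, Arrhenius-type rate law, cost proxies and weights are illustrative assumptions rather than the calibrated values of the paper.

```python
# Minimal sketch of the decision structure only: pick tank temperature T and hydraulic
# retention time HRT to trade off energy yield against heating/tank cost, using a
# first-order steady-state CSTR approximation. All constants are illustrative.
import numpy as np

B0 = 0.45          # ultimate methane potential, m3 CH4 per kg VS (assumed)

def k_rate(T):
    """Assumed Arrhenius-type temperature dependence of the first-order rate (1/day)."""
    return 0.15 * np.exp(0.035 * (T - 35.0))

def methane_yield(T, hrt):
    k = k_rate(T)
    return B0 * k * hrt / (1.0 + k * hrt)        # degradable fraction converted in an ideal CSTR

def cost_proxy(T, hrt):
    heating = 0.02 * max(T - 15.0, 0.0)          # heating duty grows with tank temperature
    capital = 0.01 * hrt                         # longer HRT -> larger tank
    return heating + capital

def weighted_goal(T, hrt, w_energy=0.6, w_cost=0.4):
    # goal-programming flavour: penalise shortfall from an energy target plus a cost proxy
    energy_gap = max(0.95 * B0 - methane_yield(T, hrt), 0.0) / B0
    return w_energy * energy_gap + w_cost * cost_proxy(T, hrt)

grid = [(T, hrt) for T in np.arange(15, 56, 1.0) for hrt in np.arange(10, 61, 1.0)]
T_best, hrt_best = min(grid, key=lambda p: weighted_goal(*p))
print(f"suggested operating point: T = {T_best:.0f} degC, HRT = {hrt_best:.0f} days")
```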

  14. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar [Bhabha Atomic Research Centre (BARC), Mumbai (India). Reactor Safety Div.

    2016-12-15

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat for safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.
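
    The element-potential approach of minimising free energy subject to elemental balances can be sketched with a much-reduced isothermal example; the paper's model instead imposes internal energy as the constraint to capture the adiabatic post-ignition state and carries far more species, so the following is only a structural illustration with approximate 298 K data.

```python
# Simplified constant-T, constant-p Gibbs minimisation for the H2/O2/H2O system,
# illustrating the element-balance-constrained structure (not the internal-energy
# constrained, multi-species formulation of the paper).
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 298.15
g0 = np.array([0.0, 0.0, -228_600.0])      # approx. standard Gibbs energies, J/mol: H2, O2, H2O(g)
# element matrix: rows H, O; columns H2, O2, H2O
E = np.array([[2.0, 0.0, 2.0],
              [0.0, 2.0, 1.0]])
b = E @ np.array([1.0, 0.5, 0.0])          # elemental totals from a 1 mol H2 + 0.5 mol O2 feed

def gibbs(n):
    n = np.maximum(n, 1e-12)
    x = n / n.sum()
    return np.sum(n * (g0 + R * T * np.log(x)))   # ideal-gas mixture at p = p0

cons = [{"type": "eq", "fun": lambda n, row=row, bk=bk: row @ n - bk} for row, bk in zip(E, b)]
res = minimize(gibbs, x0=np.array([0.1, 0.05, 0.9]), bounds=[(1e-10, None)] * 3,
               constraints=cons, method="SLSQP")
print("equilibrium moles [H2, O2, H2O]:", np.round(res.x, 4))
```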

  15. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    International Nuclear Information System (INIS)

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar

    2016-01-01

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat for safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.

  16. Efficient modeling of reactive transport phenomena by a multispecies random walk coupled to chemical equilibrium

    International Nuclear Information System (INIS)

    Pfingsten, W.

    1996-01-01

    Safety assessments for radioactive waste repositories require a detailed knowledge of physical, chemical, hydrological, and geological processes over long time spans. In the past, individual models for hydraulics, transport, or geochemical processes were developed more or less separately, each to great sophistication for its own process. Such processes are especially important in the near field of a waste repository. Attempts have been made to couple at least two individual processes to obtain a more adequate description of geochemical systems. These models are called coupled codes; they predominantly couple a multicomponent transport model with a chemical reaction model. Here reactive transport is modeled by the sequentially coupled code MCOTAC, which couples one-dimensional advective, dispersive, and diffusive transport with chemical equilibrium complexation and precipitation/dissolution reactions in a porous medium. Transport, described by a random walk of multispecies particles, and chemical equilibrium calculations are solved separately, coupled only by an exchange term. The modular-structured code was applied to incongruent dissolution of hydrated silicate gels, to the movement of multiple solid-front systems, and to an artificial, numerically difficult heterogeneous redox problem. These applications show promising features with respect to the applicability of the code to relevant problems and to possible extensions.
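
    The sequential (operator-split) coupling of a particle random walk with a local equilibrium step can be sketched with a toy solubility-cap chemistry; this is far simpler than MCOTAC's multicomponent speciation, and all parameters below are illustrative.

```python
# Minimal sketch of the coupling: advect/disperse solute particles by a random walk,
# then apply a local equilibrium (precipitation/dissolution) step on each grid cell.
import numpy as np

rng = np.random.default_rng(1)
L, ncell = 1.0, 50                            # column length (m) and number of cells
u, D, dt, nstep = 1e-5, 1e-8, 2000.0, 200     # velocity (m/s), dispersion (m2/s), step (s), steps
c_sat = 0.6                                   # saturation concentration (arbitrary units)

x = np.zeros(4000)                            # particle positions, all injected at the inlet
mass = np.full(x.size, 1.0 / x.size)          # solute mass carried by each particle
mineral = np.zeros(ncell)                     # immobile precipitated mass per cell

for _ in range(nstep):
    # 1) transport step: advection + Gaussian dispersion (random walk)
    x = np.clip(x + u * dt + rng.normal(0.0, np.sqrt(2 * D * dt), x.size), 0.0, L)
    # 2) equilibrium step: enforce the solubility cap cell by cell
    cell = np.minimum((x / L * ncell).astype(int), ncell - 1)
    for c in range(ncell):
        idx = np.where(cell == c)[0]
        if idx.size == 0:
            continue
        conc = mass[idx].sum() * ncell / L            # mobile "concentration" in the cell
        if conc > c_sat:                              # oversaturated: precipitate the excess
            scale = c_sat / conc
            mineral[c] += mass[idx].sum() * (1.0 - scale)
            mass[idx] *= scale
        elif mineral[c] > 0.0:                        # undersaturated: redissolve mineral
            add = min(mineral[c], (c_sat - conc) * L / ncell)
            mineral[c] -= add
            mass[idx] *= 1.0 + add / mass[idx].sum()

print("total precipitated mass:", mineral.sum())
```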

  17. Lattice Boltzmann method with the cell-population equilibrium

    International Nuclear Information System (INIS)

    Zhou Xiaoyang; Cheng Bing; Shi Baochang

    2008-01-01

    The central problem of the lattice Boltzmann method (LBM) is to construct a discrete equilibrium. In this paper, a multi-speed 1D cell-model of the Boltzmann equation is proposed, in which the cell-population equilibrium, a direct non-negative approximation to the continuous Maxwellian distribution, plays an important part. By applying the explicit first-order Chapman–Enskog distribution, the model reduces transport and collision, the two basic evolution steps in LBM, to the transport of the non-equilibrium distribution. Furthermore, a 1D dam-break problem is simulated, and the numerical results agree well with the analytic solutions.
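
    For orientation, a conventional D1Q3 shallow-water lattice Boltzmann scheme with the standard polynomial equilibrium (not the cell-population equilibrium proposed in the paper) can reproduce the 1D dam-break set-up along the following lines; the grid, relaxation time and initial depths are assumptions.

```python
# Minimal D1Q3 lattice-Boltzmann sketch for a 1D dam-break (shallow-water) problem.
import numpy as np

nx, dx, dt = 400, 0.25, 0.005
e = dx / dt                    # lattice speed
g, tau = 9.81, 0.8             # gravity and BGK relaxation time
h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)   # initial depths: dam at the midpoint
u = np.zeros(nx)

def feq(h, u):
    """Standard equilibrium populations (rest, +e, -e) for the 1D shallow-water LBM."""
    f0 = h - g * h**2 / (2 * e**2) - h * u**2 / e**2
    f1 = g * h**2 / (4 * e**2) + h * u / (2 * e) + h * u**2 / (2 * e**2)
    f2 = g * h**2 / (4 * e**2) - h * u / (2 * e) + h * u**2 / (2 * e**2)
    return np.array([f0, f1, f2])

f = feq(h, u)
for _ in range(400):
    f += -(f - feq(h, u)) / tau                   # BGK collision towards the local equilibrium
    left_refl, right_refl = f[2][0], f[1][-1]     # populations about to leave through the walls
    f[1] = np.roll(f[1], 1)                       # stream the +e population to the right
    f[2] = np.roll(f[2], -1)                      # stream the -e population to the left
    f[1][0], f[2][-1] = left_refl, right_refl     # bounce-back (reflective walls) at both ends
    h = f.sum(axis=0)                             # recover depth and velocity moments
    u = e * (f[1] - f[2]) / h

print("depth range after 400 steps: %.3f - %.3f m" % (h.min(), h.max()))
```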

  18. Wall ablation of heated compound-materials into non-equilibrium discharge plasmas

    Science.gov (United States)

    Wang, Weizong; Kong, Linghan; Geng, Jinyue; Wei, Fuzhi; Xia, Guangqing

    2017-02-01

    The discharge properties of the plasma bulk flow near the surface of heated compound materials strongly affect the kinetic-layer parameters modeled and manifested in the Knudsen layer. This paper extends the widely used two-layer kinetic ablation model to ablation-controlled non-equilibrium discharges, because the local thermodynamic equilibrium (LTE) approximation is often violated as a result of the interaction between the plasma and solid walls. Modifications to the governing set of equations that account for this effect are derived and presented by assuming that the temperature of the electrons deviates from that of the heavy particles. The ablation characteristics of one typical material, polytetrafluoroethylene (PTFE), are calculated with this improved model. The internal degrees of freedom, as well as the average particle mass and specific heat ratio of the polyatomic vapor, which depend strongly on the temperature, pressure and degree of plasma non-equilibrium and play a crucial role in the accurate determination of the ablation behavior by this model, are also taken into account. Our assessment shows the significance of including such modifications related to the non-equilibrium effect in the study of vaporization of heated compound materials in ablation-controlled arcs. Additionally, a two-temperature magneto-hydrodynamic (MHD) model accounting for the thermal non-equilibrium occurring near the wall surface is developed and applied to an ablation-dominated discharge for an electro-thermal chemical launch device. Special attention is paid to the interaction between the non-equilibrium plasma and the solid propellant surface. Both the mass exchange process caused by wall ablation and plasma species deposition, as well as the associated momentum and energy exchange processes, are taken into account. A detailed comparison of the results of the non-equilibrium model with those of an equilibrium model is presented. The non-equilibrium results

  19. Stability of the thermodynamic equilibrium - A test of the validity of dynamic models as applied to gyroviscous perpendicular magnetohydrodynamics

    Science.gov (United States)

    Faghihi, Mustafa; Scheffel, Jan; Spies, Guenther O.

    1988-05-01

    Stability of the thermodynamic equilibrium is put forward as a simple test of the validity of dynamic equations, and is applied to perpendicular gyroviscous magnetohydrodynamics (i.e., perpendicular magnetohydrodynamics with gyroviscosity added). This model turns out to be invalid because it predicts exponentially growing Alfven waves in a spatially homogeneous static equilibrium with scalar pressure.

  20. Stability of the thermodynamic equilibrium: A test of the validity of dynamic models as applied to gyroviscous perpendicular magnetohydrodynamics

    International Nuclear Information System (INIS)

    Faghihi, M.; Scheffel, J.; Spies, G.O.

    1988-01-01

    Stability of the thermodynamic equilibrium is put forward as a simple test of the validity of dynamic equations, and is applied to perpendicular gyroviscous magnetohydrodynamics (i.e., perpendicular magnetohydrodynamics with gyroviscosity added). This model turns out to be invalid because it predicts exponentially growing Alfven waves in a spatially homogeneous static equilibrium with scalar pressure