Inverse feasibility problems of the inverse maximum flow problems
Indian Academy of Sciences (India)
pp. 199–209, © Indian Academy of Sciences. Adrian Deaconu and Eleonor Ciurea, Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Brasov, Iuliu Maniu st. 50, Brasov, Romania.
A polynomial time algorithm for solving the maximum flow problem in directed networks
International Nuclear Information System (INIS)
Tlas, M.
2015-01-01
An efficient polynomial-time algorithm for solving maximum flow problems is proposed in this paper. The algorithm is based on the binary representation of capacities; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m²r) time, where r is the smallest integer greater than or equal to log B, and B is the largest arc capacity of the network. A numerical example illustrates the proposed algorithm. (author)
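The scaling idea in this abstract can be illustrated with a standard capacity-scaling maximum-flow routine (a generic sketch, not the paper's exact algorithm): in each phase only residual arcs with capacity at least Δ are considered, augmenting paths are found by BFS (i.e., shortest paths in the Δ-residual network), and Δ is halved between phases, giving O(log B) phases.

```python
from collections import deque

def max_flow_scaling(n, edges, s, t):
    """Capacity-scaling max flow: augment along BFS (shortest) paths in the
    residual network, considering only arcs with residual capacity >= delta.
    `edges` is a list of (u, v, cap) with nodes numbered 0..n-1."""
    graph = [[] for _ in range(n)]  # graph[u] = indices of arcs leaving u
    cap, to = [], []

    def add_edge(u, v, c):
        # paired forward/backward residual arcs at indices 2k and 2k+1
        graph[u].append(len(cap)); to.append(v); cap.append(c)
        graph[v].append(len(cap)); to.append(u); cap.append(0)

    for u, v, c in edges:
        add_edge(u, v, c)

    B = max((c for _, _, c in edges), default=0)
    delta = 1
    while delta * 2 <= B:          # largest power of two not exceeding B
        delta *= 2

    flow = 0
    while delta >= 1:
        while True:
            # BFS for an augmenting path using only arcs with cap >= delta
            parent = [-1] * n      # arc index used to reach each node
            parent[s] = -2
            q = deque([s])
            while q and parent[t] == -1:
                u = q.popleft()
                for ei in graph[u]:
                    v = to[ei]
                    if parent[v] == -1 and cap[ei] >= delta:
                        parent[v] = ei
                        q.append(v)
            if parent[t] == -1:
                break
            # find the bottleneck along the path, then push flow
            bottleneck = float('inf')
            v = t
            while v != s:
                ei = parent[v]
                bottleneck = min(bottleneck, cap[ei])
                v = to[ei ^ 1]     # ei ^ 1 is the paired reverse arc
            v = t
            while v != s:
                ei = parent[v]
                cap[ei] -= bottleneck
                cap[ei ^ 1] += bottleneck
                v = to[ei ^ 1]
            flow += bottleneck
        delta //= 2
    return flow
```

On the classic four-node example with source 0 and sink 3, `max_flow_scaling(4, [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)], 0, 3)` returns the maximum flow value 5.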
Directory of Open Access Journals (Sweden)
Hadi Heidari Gharehbolagh
2016-01-01
Full Text Available This study investigates a multi-owner maximum-flow network problem that is subject to risky events. Uncertain conditions affect proper estimation, and ignoring them may mislead decision makers through overestimation. A key question is how the self-governing owners in the network can cooperate with each other to maintain a reliable flow. The question is answered by providing a mathematical programming model based on applying the triangular reliability function in decentralized networks. The proposed method concentrates on multi-owner networks whose arcs have risky time, cost, and capacity parameters. Cooperative game methods such as the τ-value, the Shapley value, and the core center are presented to fairly distribute the extra profit of cooperation. A numerical example, including a sensitivity analysis and the results of comparisons, is presented. The proposed method brings more realism to decision-making for risky systems, leading to significant profits in terms of real cost estimation when compared with ignoring unforeseen effects.
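The profit-sharing step mentioned in the abstract can be sketched with a direct Shapley-value computation, which averages each owner's marginal contribution over all joining orders. The three-owner coalition worths below are hypothetical illustration values, not data from the study.

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: average each player's marginal contribution
    v(S + {p}) - v(S) over all orders in which players can join.
    `v` maps a frozenset coalition to its worth."""
    value = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            value[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: value[p] / len(perms) for p in players}

# Hypothetical 3-owner game: worth of each coalition (profit from cooperating)
v = {
    frozenset(): 0, frozenset('A'): 10, frozenset('B'): 20, frozenset('C'): 30,
    frozenset('AB'): 40, frozenset('AC'): 50, frozenset('BC'): 60,
    frozenset('ABC'): 90,
}
phi = shapley(['A', 'B', 'C'], v)
```

By construction the shares are efficient: they sum to the grand-coalition worth v({A, B, C}).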
A local search heuristic for the Multi-Commodity k-splittable Maximum Flow Problem
DEFF Research Database (Denmark)
Gamst, Mette
2014-01-01
, a local search heuristic for solving the problem is proposed. The heuristic is an iterative shortest path procedure on a reduced graph combined with a local search procedure to modify certain path flows and prioritize the different commodities. The heuristic is tested on benchmark instances from...
Energy Technology Data Exchange (ETDEWEB)
Salinic, Slavisa [University of Kragujevac, Faculty of Mechanical Engineering, Kraljevo (RS)
2010-10-15
In this paper, an analytical solution for the problem of finding profiles of gravity flow discharge chutes required to achieve maximum exit velocity under Coulomb friction is obtained by application of variational calculus. The model of a particle which moves down a rough curve in a uniform gravitational field is used to obtain a solution of the problem for various boundary conditions. The projection sign of the normal reaction force of the rough curve onto the normal to the curve and the restriction requiring that the tangential acceleration be non-negative are introduced as the additional constraints in the form of inequalities. These inequalities are transformed into equalities by introducing new state variables. Although this is fundamentally a constrained variational problem, by further introducing a new functional with an expanded set of unknown functions, it is transformed into an unconstrained problem where broken extremals appear. The obtained equations of the chute profiles contain a certain number of unknown constants which are determined from a corresponding system of nonlinear algebraic equations. The obtained results are compared with the known results from the literature. (orig.)
Comparing branch-and-price algorithms for the Multi-Commodity k-splittable Maximum Flow Problem
DEFF Research Database (Denmark)
Gamst, Mette; Petersen, Bjørn
2012-01-01
…Multi-Protocol Label Switching. The problem has previously been solved to optimality through branch-and-price. In this paper we propose two exact solution methods both based on an alternative decomposition. The two methods differ in their branching strategy. The first method, which branches on forbidden edge sequences...
The Maximum Resource Bin Packing Problem
DEFF Research Database (Denmark)
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
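First-Fit-Increasing and First-Fit-Decreasing differ only in the order in which items are offered to First-Fit; a minimal sketch under classical bin-packing semantics (unit-capacity bins) follows. Note that the increasing order tends to open more bins, which is the desirable direction in the maximum resource variant discussed above.

```python
def first_fit(items, capacity=1.0):
    """Place each item into the first open bin it fits in;
    open a new bin when no existing bin has room."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:  # tolerance for float sums
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

def first_fit_increasing(items, capacity=1.0):
    return first_fit(sorted(items), capacity)

def first_fit_decreasing(items, capacity=1.0):
    return first_fit(sorted(items, reverse=True), capacity)
```

For the items [0.6, 0.5, 0.4, 0.3, 0.2], First-Fit-Decreasing packs them into two bins, while First-Fit-Increasing uses three.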
Modelling information flow along the human connectome using maximum flow.
Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung
2018-01-01
The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level.
International Nuclear Information System (INIS)
Brasch, D.J.
1986-01-01
Chemical and mineral engineering students require texts which give guidance on problem solving to complement their main theoretical texts. This book provides broad coverage of the fluid flow problems which these students may encounter. The fundamental concepts and the application of the behaviour of liquids and gases in unit operations are dealt with. The book is intended to give numerical practice; theory is developed only when elaboration of the treatments available in theoretical texts is absolutely necessary.
MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY
Directory of Open Access Journals (Sweden)
Grigory B. Sizykh
2017-01-01
Full Text Available The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered flow region the pressure is a function of density alone. For an ideal and perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density alone if the entropy is constant in the entire considered flow region. An example is shown of a stationary subsonic irrotational flow in which the entropy has different values on different streamlines, and the pressure is not a function of density alone. Applying the maximum principle for subsonic flow to such a flow would be unjustified. This example shows the relevance of the question of where the points of maximum velocity are located when the entropy is not constant. To clarify the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed that does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow holds for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.
Handelman's hierarchy for the maximum stable set problem
Laurent, M.; Sun, Z.
2014-01-01
The maximum stable set problem is a well-known NP-hard problem in combinatorial optimization, which can be formulated as the maximization of a quadratic square-free polynomial over the (Boolean) hypercube. We investigate a hierarchy of linear programming relaxations for this problem, based on a
A Maximum Entropy Method for a Robust Portfolio Problem
Directory of Open Access Journals (Sweden)
Yingying Xu
2014-06-01
Full Text Available We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for a market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
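The worst-case objective can be illustrated with a deliberately simplified maximin sketch: each asset's return lies in an interval [a_i, b_i], so the adversary always picks the lower bound, and with only a per-asset weight cap the greedy allocation by lower bound is optimal. This omits the paper's transaction costs, dividends, and maximum entropy numerics, and the asset data are hypothetical.

```python
def worst_case_return(w, intervals):
    """Worst-case portfolio return when each asset return lies in [a_i, b_i]:
    the adversary picks the lower bound a_i for every long position."""
    return sum(wi * a for wi, (a, b) in zip(w, intervals))

def robust_weights(intervals, cap=0.5):
    """Maximize the worst-case return over the simplex with a per-asset cap.
    Filling weight greedily in decreasing order of a_i is optimal here."""
    order = sorted(range(len(intervals)),
                   key=lambda i: intervals[i][0], reverse=True)
    w = [0.0] * len(intervals)
    remaining = 1.0
    for i in order:
        w[i] = min(cap, remaining)
        remaining -= w[i]
        if remaining <= 1e-12:
            break
    return w

# Hypothetical return intervals [a_i, b_i] for three assets
intervals = [(0.02, 0.10), (0.05, 0.07), (0.01, 0.20)]
w = robust_weights(intervals)
```

With these intervals the robust allocation splits the budget between the two assets with the best lower bounds, even though the third asset has the highest upside.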
A maximum modulus theorem for the Oseen problem
Czech Academy of Sciences Publication Activity Database
Kračmar, S.; Medková, Dagmar; Nečasová, Šárka; Varnhorn, W.
2013-01-01
Roč. 192, č. 6 (2013), s. 1059-1076 ISSN 0373-3114 R&D Projects: GA ČR(CZ) GAP201/11/1304; GA MŠk LC06052 Institutional research plan: CEZ:AV0Z10190503 Keywords : Oseen problem * maximum modulus theorem * Oseen potentials Subject RIV: BA - General Mathematics Impact factor: 0.909, year: 2013 http://link.springer.com/article/10.1007%2Fs10231-012-0258-x
An Efficient Algorithm for the Maximum Distance Problem
Directory of Open Access Journals (Sweden)
Gabrielle Assunta Grün
2001-12-01
Full Text Available Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central in many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events. They begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely we consider the problem of finding the maximum "distance" between two vertices in a chain; this problem arises in real-world applications such as process control and crew scheduling. We describe an O(n)-time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
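The O(n)-preprocessing / O(1)-query scheme on a chain can be sketched with prefix sums over the strict edges. The edge-label representation below is an assumption for illustration; Gerevini and Schubert's metagraph machinery is omitted.

```python
def preprocess(labels):
    """labels[i] is '<' or '<=' for the edge between chain vertices i and i+1.
    Returns prefix counts of strict '<' edges; O(n) time and space."""
    prefix = [0]
    for lab in labels:
        prefix.append(prefix[-1] + (1 if lab == '<' else 0))
    return prefix

def max_distance(prefix, i, j):
    """Maximum 'distance' (number of strict '<' edges) between chain
    vertices i and j with i <= j; O(1) per query."""
    return prefix[j] - prefix[i]

# Chain with edges v0 < v1 <= v2 < v3 < v4
prefix = preprocess(['<', '<=', '<', '<'])
```

A query such as `max_distance(prefix, 0, 4)` then counts the strict edges between the endpoints in constant time.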
Topology optimization of flow problems
DEFF Research Database (Denmark)
Gersborg, Allan Roulund
2007-01-01
This thesis investigates how to apply topology optimization using the material distribution technique to steady-state viscous incompressible flow problems. The target design applications are fluid devices that are optimized with respect to minimizing the energy loss, characteristic properties...... transport in 2D Stokes flow. Using Stokes flow limits the range of applications; nonetheless, the thesis gives a proof-of-concept for the application of the method within fluid dynamic problems and it remains of interest for the design of microfluidic devices. Furthermore, the thesis contributes...... at the Technical University of Denmark. Large topology optimization problems with 2D and 3D Stokes flow modeling are solved with direct and iterative strategies employing the parallelized Sun Performance Library and the OpenMP parallelization technique, respectively....
Inverse feasibility problems of the inverse maximum flow problems
Indian Academy of Sciences (India)
2016-08-26
Author affiliations: Adrian Deaconu and Eleonor Ciurea, Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Braşov, Braşov, Iuliu Maniu st. 50, Romania.
Analogue of Pontryagin's maximum principle for multiple integrals minimization problems
Mikhail, Zelikin
2016-01-01
A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum is taken not over all matrices, but only over matrices of rank one. Examples are given.
Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics
DEFF Research Database (Denmark)
Schlaikjer, Malene; Jensen, Jørgen Arendt
2004-01-01
)-data under investigation. The flow physic properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis are compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator...... has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce...... for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates are 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and estimating low velocity levels was confirmed...
Flow Control in Wells Turbines for Harnessing Maximum Wave Power.
Lekube, Jon; Garrido, Aitor J.; Garrido, Izaskun; Otaola, Erlantz; Maseda, Javier
2018-02-10
Oceans, and particularly waves, offer a huge potential for energy harnessing all over the world. Nevertheless, the performance of current energy converters does not yet allow us to use the wave energy efficiently. However, new control techniques can improve the efficiency of energy converters. In this sense, the plant sensors play a key role within the control scheme, as necessary tools for parameter measuring and monitoring that are then used as control input variables to the feedback loop. Therefore, the aim of this work is to manage the rotational speed control loop in order to optimize the output power. With the help of outward looking sensors, a Maximum Power Point Tracking (MPPT) technique is employed to maximize the system efficiency. Then, the control decisions are based on the pressure drop measured by pressure sensors located along the turbine. A complete wave-to-wire model is developed so as to validate the performance of the proposed control method. For this purpose, a novel sensor-based flow controller is implemented based on the different measured signals. Thus, the performance of the proposed controller has been analyzed and compared with a case of uncontrolled plant. The simulations demonstrate that the flow control-based MPPT strategy is able to increase the output power, and they confirm both the viability and goodness. PMID:29439408
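A sensor-driven MPPT loop of the kind described is often realized as a perturb-and-observe scheme: nudge the operating point, keep the direction if measured power rose, reverse it if power fell. The sketch below is a generic version with a toy power curve, not the authors' wave-to-wire controller; the sensor callback and units are assumptions.

```python
def perturb_and_observe(measure_power, speed0, step, iters):
    """Generic perturb-and-observe MPPT: nudge the rotational speed by `step`;
    keep the direction if measured power rose, reverse it if power fell.
    `measure_power` stands in for a plant sensor reading (assumption)."""
    speed = speed0
    power = measure_power(speed)
    direction = 1.0
    for _ in range(iters):
        speed += direction * step
        new_power = measure_power(speed)
        if new_power < power:
            direction = -direction   # overshot the peak: turn around
        power = new_power
    return speed

# Toy power curve with its maximum at speed = 80 (hypothetical units)
curve = lambda s: -(s - 80.0) ** 2 + 500.0
s = perturb_and_observe(curve, speed0=50.0, step=1.0, iters=200)
```

The tracker climbs toward the peak and then dithers around it within one or two step sizes, which is the characteristic steady-state behaviour of perturb-and-observe.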
Maximum Solutions of Normalized Ricci Flow on 4-Manifolds
Fang, Fuquan; Zhang, Yuguang; Zhang, Zhenlei
2008-10-01
We consider the maximum solution g(t), t ∈ [0, +∞), to the normalized Ricci flow. Among other things, we prove that if (M, ω) is a smooth compact symplectic 4-manifold with b₂⁺(M) > 1, and g(t), t ∈ [0, ∞), is a solution to (1.3) on M whose Ricci curvature satisfies |Ric(g(t))| ≤ 3 and additionally χ(M) = 3τ(M) > 0, then there exist an m ∈ ℕ and a sequence of points {x_{j,k} ∈ M}, j = 1, ..., m, such that, passing to a subsequence, (M, g(t_k + t), x_{1,k}, ..., x_{m,k}) → (∐_{j=1}^m N_j, g_∞, x_{1,∞}, ..., x_{m,∞}), t ∈ [0, ∞), in the m-pointed Gromov-Hausdorff sense for any sequence t_k → ∞, where (N_j, g_∞), j = 1, ..., m, are complete complex hyperbolic orbifolds of complex dimension 2 with at most finitely many isolated orbifold points. Moreover, the convergence is C^∞ in the non-singular part of ∐_{j=1}^m N_j, and Vol_{g_0}(M) = Σ_{j=1}^m Vol_{g_∞}(N_j), where χ(M) (resp. τ(M)) is the Euler characteristic (resp. signature) of M.
The discrete maximum principle for Galerkin solutions of elliptic problems
Czech Academy of Sciences Publication Activity Database
Vejchodský, Tomáš
2012-01-01
Roč. 10, č. 1 (2012), s. 25-43 ISSN 1895-1074 R&D Projects: GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : discrete maximum principle * monotone methods * Galerkin solution Subject RIV: BA - General Mathematics Impact factor: 0.405, year: 2012 http://www.springerlink.com/content/x73624wm23x4wj26
On discrete maximum principles for nonlinear elliptic problems
Czech Academy of Sciences Publication Activity Database
Karátson, J.; Korotov, S.; Křížek, Michal
2007-01-01
Roč. 76, č. 1 (2007), s. 99-108 ISSN 0378-4754 R&D Projects: GA MŠk 1P05ME749; GA AV ČR IAA1019201 Institutional research plan: CEZ:AV0Z10190503 Keywords : nonlinear elliptic problem * mixed boundary conditions * finite element method Subject RIV: BA - General Mathematics Impact factor: 0.738, year: 2007
Mardlijah; Jamil, Ahmad; Hanafi, Lukman; Sanjaya, Suharmadi
2017-09-01
Algae have many benefits; one of them is use as a renewable and sustainable energy source in the future. Greater algae growth increases biodiesel production, and the growth of algae is influenced by glucose, nutrients, and the photosynthesis process. In this paper, the optimal control problem of algae growth is discussed. The objective function is to maximize the concentration of dry algae, while the controls are the flow of carbon dioxide and the nutrition. The solution is obtained by applying the Pontryagin Maximum Principle, and the results show that the concentration of algae increases by more than 15%.
Wavelength selection in injection-driven Hele-Shaw flows: A maximum amplitude criterion
Dias, Eduardo; Miranda, Jose
2013-11-01
As in most interfacial flow problems, the standard theoretical procedure to establish wavelength selection in the viscous fingering instability is to maximize the linear growth rate. However, there are important discrepancies between previous theoretical predictions and existing experimental data. In this work we perform a linear stability analysis of the radial Hele-Shaw flow system that takes into account the combined action of viscous normal stresses and wetting effects. Most importantly, we introduce an alternative selection criterion for which the selected wavelength is determined by the maximum of the interfacial perturbation amplitude. The effectiveness of such a criterion is substantiated by the significantly improved agreement between theory and experiments. We thank CNPq (Brazilian Sponsor) for financial support.
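The difference between the two selection criteria can be illustrated numerically with a toy time-dependent growth rate (the dispersion relation below is invented for illustration and is not the paper's): in linear theory the amplitude of mode k grows like exp(∫₀ᵗ λ(k, s) ds), so when the fastest-growing mode drifts in time, the mode that is instantaneously fastest at the final time need not be the mode with the largest accumulated amplitude.

```python
import math

def amplitude(lam, k, t_final, steps=1000):
    """Linear-theory amplitude a(k, t) = a0 * exp( int_0^t lam(k, s) ds ),
    with the growth-rate integral evaluated by the trapezoid rule."""
    dt = t_final / steps
    integral = 0.0
    for i in range(steps):
        s0, s1 = i * dt, (i + 1) * dt
        integral += 0.5 * (lam(k, s0) + lam(k, s1)) * dt
    return math.exp(integral)

# Toy dispersion relation (assumption): the fastest-growing wavenumber
# drifts upward in time, from k = 2 at t = 0 to k = 5 at t = 1.
lam = lambda k, t: -(k - (2.0 + 3.0 * t)) ** 2 + 4.0

ks = [i * 0.1 for i in range(1, 101)]
k_rate = max(ks, key=lambda k: lam(k, 1.0))            # max growth rate at t = 1
k_amp = max(ks, key=lambda k: amplitude(lam, k, 1.0))  # max amplitude criterion
```

Here the growth-rate criterion selects k = 5 while the amplitude criterion selects k = 3.5, showing how the two criteria can disagree.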
Topology optimization of Channel flow problems
DEFF Research Database (Denmark)
Gersborg-Hansen, Allan; Sigmund, Ole; Haber, R. B.
2005-01-01
function which measures either some local aspect of the velocity field or a global quantity, such as the rate of energy dissipation. We use the finite element method to model the flow, and we solve the optimization problem with a gradient-based math-programming algorithm that is driven by analytical......This paper describes a topology design method for simple two-dimensional flow problems. We consider steady, incompressible laminar viscous flows at low to moderate Reynolds numbers. This makes the flow problem non-linear and hence a non-trivial extension of the work of [Borrvall&Petersson 2002......]. Further, the inclusion of inertia effects significantly alters the physics, enabling solutions of new classes of optimization problems, such as velocity--driven switches, that are not addressed by the earlier method. Specifically, we determine optimal layouts of channel flows that extremize a cost...
Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks
Directory of Open Access Journals (Sweden)
Steven H. Waldrip
2017-02-01
Full Text Available We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables, when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints, therefore, has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations. The second MaxEnt method with soft probabilistic constraints is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation.
Advances in multiphase flow and related problems
International Nuclear Information System (INIS)
Papanicolaou, G.
1986-01-01
Proceedings of a workshop on multiphase flow held at Leesburg, Virginia, in June 1986, representing a cross-disciplinary approach to theoretical as well as computational problems in multiphase flow. Topics include composites, phase transitions, fluid-particle systems, and bubbly liquids.
Generalized Riemann problem for reactive flows
International Nuclear Information System (INIS)
Ben-Artzi, M.
1989-01-01
A generalized Riemann problem is introduced for the equations of reactive non-viscous compressible flow in one space dimension. Initial data are assumed to be linearly distributed on both sides of a jump discontinuity. The resolution of the singularity is studied and the first-order variation (in time) of flow variables is given in exact form. copyright 1989 Academic Press, Inc
Directory of Open Access Journals (Sweden)
M. E. Haji Abadi
2013-09-01
Full Text Available In this paper, the continuous optimal control theory is used to model and solve the maximum entropy problem for a continuous random variable. The maximum entropy principle provides a method to obtain the least-biased probability density function (Pdf) estimate. In this paper, to find a closed-form solution for the maximum entropy problem with any number of moment constraints, the entropy is considered as a functional measure and the moment constraints are considered as the state equations. Therefore, the Pdf estimation problem can be reformulated as an optimal control problem. Finally, the proposed method is applied to estimate the Pdf of the hourly electricity prices of the New England and Ontario electricity markets. The obtained results show the efficiency of the proposed method.
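A discrete analogue of the moment-constrained MaxEnt problem (Jaynes's loaded-die example, not the paper's optimal-control formulation) can be solved by bisection on a single Lagrange multiplier: the maximum entropy distribution on {1, ..., 6} with a prescribed mean has the exponential-family form p_k ∝ exp(-λk).

```python
import math

def maxent_dice(mean_target, lo=-10.0, hi=10.0):
    """Maximum-entropy distribution on {1,...,6} subject to a mean constraint.
    The solution is p_k proportional to exp(-lam * k); the multiplier lam is
    found by bisection, since the implied mean is decreasing in lam."""
    def mean_for(lam):
        weights = [math.exp(-lam * k) for k in range(1, 7)]
        z = sum(weights)
        return sum(k * w for k, w in zip(range(1, 7), weights)) / z

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > mean_target:
            lo = mid      # mean too large: need a larger multiplier
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    weights = [math.exp(-lam * k) for k in range(1, 7)]
    z = sum(weights)
    return [w / z for w in weights]

p = maxent_dice(4.5)   # Jaynes's example: a die whose average roll is 4.5
```

The resulting probabilities increase geometrically toward the face 6, which is the least-biased distribution consistent with the single moment constraint.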
Directory of Open Access Journals (Sweden)
Jingtao Shi
2013-01-01
Full Text Available This paper is concerned with the relationship between maximum principle and dynamic programming for stochastic recursive optimal control problems. Under certain differentiability conditions, relations among the adjoint processes, the generalized Hamiltonian function, and the value function are given. A linear quadratic recursive utility portfolio optimization problem in the financial engineering is discussed as an explicitly illustrated example of the main result.
Optimal control problems with delay, the maximum principle and necessary conditions
Frankena, J.F.
1975-01-01
In this paper we consider a rather general optimal control problem involving ordinary differential equations with delayed arguments and a set of equality and inequality restrictions on state- and control variables. For this problem a maximum principle is given in pointwise form, using variational
Low reproducibility of maximum urinary flow rate determined by portable flowmetry
Sonke, G. S.; Kiemeney, L. A.; Verbeek, A. L.; Kortmann, B. B.; Debruyne, F. M.; de la Rosette, J. J.
1999-01-01
To evaluate the reproducibility in maximum urinary flow rate (Qmax) in men with lower urinary tract symptoms (LUTSs) and to determine the number of flows needed to obtain a specified reliability in mean Qmax, 212 patients with LUTSs (mean age, 62 years) referred to the University Hospital Nijmegen,
Directory of Open Access Journals (Sweden)
Domoshnitsky Alexander
2009-01-01
Full Text Available We obtain the maximum principles for the first-order neutral functional differential equation where , and are linear continuous operators, and are positive operators, is the space of continuous functions, and is the space of essentially bounded functions defined on . New tests on positivity of the Cauchy function and its derivative are proposed. Results on existence and uniqueness of solutions for various boundary value problems are obtained on the basis of the maximum principles.
Dynamic Flow Management Problems in Air Transportation
Patterson, Sarah Stock
1997-01-01
In 1995, over six hundred thousand licensed pilots flew nearly thirty-five million flights into over eighteen thousand U.S. airports, logging more than 519 billion passenger miles. Since demand for air travel has increased by more than 50% in the last decade while capacity has stagnated, congestion is a problem of undeniable practical significance. In this thesis, we will develop optimization techniques that reduce the impact of congestion on the national airspace. We start by determining the optimal release times for flights into the airspace and the optimal speed adjustment while airborne, taking into account the capacitated airspace. This is called the Air Traffic Flow Management Problem (TFMP). We address the complexity, showing that it is NP-hard. We build an integer programming formulation that is quite strong, as some of the proposed inequalities are facet defining for the convex hull of solutions. For practical problems, the solutions of the LP relaxation of the TFMP are very often integral. In essence, we reduce the problem to efficiently solving large scale linear programming problems. Thus, the computation times are reasonably small for large scale, practical problems involving thousands of flights. Next, we address the problem of determining how to reroute aircraft in the airspace system when faced with dynamically changing weather conditions. This is called the Air Traffic Flow Management Rerouting Problem (TFMRP). We present an integrated mathematical programming approach for the TFMRP, which utilizes several methodologies, in order to minimize delay costs. In order to address the high dimensionality, we present an aggregate model, in which we formulate the TFMRP as a multicommodity, integer, dynamic network flow problem with certain side constraints. Using Lagrangian relaxation, we generate aggregate flows that are decomposed into a collection of flight paths using a randomized rounding heuristic. This collection of paths is used in a packing integer
Cawley, M.F.; McGlynn, D.; Mooney, P.A.
2006-01-01
A technique is described which yields an accurate measurement of the temperature of density maximum of fluids which exhibit such anomalous behaviour. The method relies on the detection of changes in convective flow in a rectangular cavity containing the test fluid. The normal single-cell convection which occurs in the presence of a horizontal temperature gradient changes to a double-cell configuration in the vicinity of the density maximum, and this transition manifests itself in changes in th...
Lattice Field Theory with the Sign Problem and the Maximum Entropy Method
Directory of Open Access Journals (Sweden)
Masahiro Imachi
2007-02-01
Although numerical simulation in lattice field theory is one of the most effective tools to study non-perturbative properties of field theories, it faces serious obstacles coming from the sign problem in some theories such as finite density QCD and lattice field theory with the θ term. We reconsider this problem from the point of view of the maximum entropy method.
Discrete maximum principle for FE solutions of the diffusion-reaction problem on prismatic meshes
Czech Academy of Sciences Publication Activity Database
Hannukainen, A.; Korotov, S.; Vejchodský, Tomáš
2009-01-01
Roč. 226, č. 2 (2009), s. 275-287 ISSN 0377-0427 R&D Projects: GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : diffusion-reaction problem * maximum principle * prismatic finite elements Subject RIV: BA - General Mathematics Impact factor: 1.292, year: 2009
Wood flow problems in the Swedish forestry
Energy Technology Data Exchange (ETDEWEB)
Carlsson, Dick [Forestry Research Inst. of Sweden, Uppsala (Sweden); Roennqvist, M. [Linkoeping Univ. (Sweden). Dept. of Mathematics
1998-12-31
In this paper we give an overview of the wood-flow in Sweden, including a description of its organization and planning. Based on that, we describe a number of applications or problem areas in the wood-flow chain that are currently considered by the Swedish forest companies to be important and to have potential for improving overall operations. We have focused on applications in short-term or operative planning. We do not give any final results, as much of the development is currently ongoing or still in a planning phase. Instead, we describe what kinds of models and decision support systems could be applied in order to improve co-operation within, and integration of, the wood-flow chain. 13 refs, 20 figs, 1 tab
Finite element methods for incompressible flow problems
John, Volker
2016-01-01
This book explores finite element methods for incompressible flow problems: Stokes equations, stationary Navier-Stokes equations, and time-dependent Navier-Stokes equations. It focuses on numerical analysis, but also discusses the practical use of these methods and includes numerical illustrations. It also provides a comprehensive overview of analytical results for turbulence models. The proofs are presented step by step, allowing readers to more easily understand the analytical techniques.
Characteristics-based modelling of flow problems
International Nuclear Information System (INIS)
Saarinen, M.
1994-02-01
The method of characteristics is an exact way to proceed to the solution of hyperbolic partial differential equations. The numerical solutions, however, are obtained in a fixed computational grid, where interpolations of values between the mesh points cause numerical errors. The Piecewise Linear Interpolation Method, PLIM, whose utilization is based on the method of characteristics, has been developed to overcome these deficiencies. The thesis concentrates on the computer simulation of two-phase flow. The main topics studied are: (1) the PLIM method has been applied to study the validity of the numerical scheme through solving various flow problems, to gain knowledge for the further development of the method; (2) the mathematical and physical validity and applicability of the two-phase flow equations based on the SFAV (Separation of the two-phase Flow According to Velocities) approach have been studied; and (3) the SFAV approach has been further developed for particular cases such as stratified horizontal two-phase flow. (63 refs., 4 figs.)
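The characteristics-plus-interpolation idea described above can be illustrated on the simplest hyperbolic equation, linear advection u_t + c u_x = 0: trace each grid point back along its characteristic and interpolate linearly between mesh points. This is only an illustrative sketch, far simpler than the actual PLIM scheme for two-phase flow.

```python
import math

def advect_step(u, c, dx, dt):
    """One characteristics step for u_t + c*u_x = 0 on a uniform periodic grid:
    the foot of the characteristic through x_i is x_i - c*dt; its value is
    recovered by linear interpolation between the two surrounding mesh points."""
    n = len(u)
    shift = c * dt / dx            # displacement in grid units
    k = int(math.floor(shift))
    frac = shift - k               # fractional part for the interpolation weights
    return [(1 - frac) * u[(i - k) % n] + frac * u[(i - k - 1) % n]
            for i in range(n)]

# when c*dt/dx is an integer the scheme is an exact shift (no interpolation error)
u_new = advect_step([1.0, 2.0, 3.0, 4.0], c=1.0, dx=1.0, dt=1.0)
```

For non-integer Courant numbers the linear interpolation introduces the smearing that motivates the piecewise linear refinements discussed in the record.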
International Nuclear Information System (INIS)
Gilai, D.
1976-01-01
The Maximum Principle deals with optimization problems of systems which are governed by ordinary differential equations and which include constraints on the state and control variables. The development of nuclear engineering confronted the designers of reactors, shielding and other nuclear devices with many demands for optimization and savings, and it was straightforward to use the Maximum Principle for solving optimization problems in nuclear engineering; in fact, it was widely used both in structural concept design and in dynamic control of nuclear systems. The main disadvantage of the Maximum Principle is that it is suitable only for systems which may be described by ordinary differential equations, i.e., one-dimensional systems. In the present work, starting from the variational approach, the original Maximum Principle is extended to multidimensional systems, and the principle which has been derived is of a more general form and is applicable to any system which can be defined by linear partial differential equations of any order. To check the applicability of the extended principle, two examples are solved: the first in nuclear shield design, where the goal is to construct a shield around a neutron-emitting source, using given materials, so that the total dose outside the shielding boundaries is minimized; the second in material distribution design in the core of a power reactor, so that the power peak is minimized. For the second problem, an iterative method was developed. (B.G.)
International Nuclear Information System (INIS)
Wang, Chao; Chen, Lingen; Xia, Shaojun; Sun, Fengrui
2016-01-01
A sulphuric acid decomposition process in a tubular plug-flow reactor with fixed inlet flow rate and completely controllable exterior wall temperature profile and reactants pressure profile is studied in this paper by using finite-time thermodynamics. The maximum production rate of the aimed product SO2 and the optimal exterior wall temperature profile and reactants pressure profile are obtained by using a nonlinear programming method. The optimal reactor with the maximum production rate is then compared with a reference reactor with a linear exterior wall temperature profile and with the optimal reactor with minimum entropy generation rate. The results show that the production rate of SO2 of the optimal reactor with the maximum production rate increases by more than 7%. The optimization of the temperature profile has little influence on the production rate, while the optimization of the reactants pressure profile can significantly increase the production rate. The results obtained may provide some guidelines for the design of real tubular reactors. - Highlights: • Sulphuric acid decomposition process in a tubular plug-flow reactor is studied. • Fixed inlet flow rate and controllable temperature and pressure profiles are set. • Maximum production rate of the aimed product SO2 is obtained. • Corresponding optimal temperature and pressure profiles are derived. • Production rate of SO2 of the optimal reactor increases by more than 7%.
Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems
International Nuclear Information System (INIS)
Stipanović, Dušan M.; Tomlin, Claire J.; Leitmann, George
2012-01-01
In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
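The paper's particular approximations are not reproduced here, but the log-sum-exp family is a standard example of a differentiable overestimate of the maximum that converges monotonically to it as a sharpness parameter grows; a minimal sketch:

```python
import math

def smooth_max(xs, p):
    """Log-sum-exp approximation of max(xs): differentiable, always >= max(xs),
    and nonincreasing in p, so it converges monotonically to the true maximum."""
    m = max(xs)  # shift for numerical stability
    return m + math.log(sum(math.exp(p * (x - m)) for x in xs)) / p

def smooth_min(xs, p):
    """Dual construction: a differentiable underestimate of min(xs)."""
    return -smooth_max([-x for x in xs], p)

xs = [1.0, 2.5, 2.0]
approx = [smooth_max(xs, p) for p in (1, 10, 100)]  # decreases toward 2.5
```

Multi-objective conditions of the kind described above can then be stated on these smooth surrogates and inherit the guarantee in the limit.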
Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems
Energy Technology Data Exchange (ETDEWEB)
Stipanovic, Dusan M., E-mail: dusan@illinois.edu [University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, Department of Industrial and Enterprise Systems Engineering (United States); Tomlin, Claire J., E-mail: tomlin@eecs.berkeley.edu [University of California at Berkeley, Department of Electrical Engineering and Computer Science (United States); Leitmann, George, E-mail: gleit@berkeley.edu [University of California at Berkeley, College of Engineering (United States)
2012-12-15
In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
DEFF Research Database (Denmark)
Cetin, Bilge Kartal; Prasad, Neeli R.; Prasad, Ramjee
2011-01-01
In wireless sensor networks, one of the key challenges is to achieve minimum energy consumption in order to maximize network lifetime. In fact, lifetime depends on many parameters: the topology of the sensor network, the data aggregation regime in the network, the channel access schemes, the routing...... protocols, and the energy model for transmission. In this paper, we tackle the routing challenge for maximum lifetime of the sensor network. We introduce a novel linear programming approach to the maximum lifetime routing problem. To the best of our knowledge, this is the first mathematical programming......
Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo
Cheong, R. Y.; Gabda, D.
2017-09-01
Analysis of flood trends is vital since flooding threatens human living in terms of finance, environment and security. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference method that estimates parameters using the posterior distribution based on Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced in the Monte Carlo method. This approach also accounts for more uncertainty in parameter estimation, which then yields a better prediction of maximum river flow in Sabah.
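The Metropolis-Hastings step at the core of such an analysis is short; the sketch below uses a standard normal log-density as a stand-in target (a real application would plug in the GEV log-posterior built from the flow data, which is not reproduced here):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=1):
    """Random-walk Metropolis sampler for a 1-D log-posterior (illustrative)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)        # symmetric proposal
        lp_prop = log_post(prop)
        # accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# stand-in target: standard normal; a GEV log-posterior would go here
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=5000)
```

Posterior summaries (means, credible intervals) are then read off the chain after discarding burn-in.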
Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model
Yang, Yuefang; Gan, Chunhui; Shen, Tingting
2017-05-01
In studying the configuration of tankers for a chemical logistics park, the minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park's loading and unloading area and the transportation demand for the dangerous goods are taken as the constraint conditions of the model; then the transport arc capacity, the transport arc flow and the transport arc edge weight are determined in the transportation network diagram; finally, the calculations are performed by software. The calculation results show that the tanker configuration problem can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical application value for tanker management in railway transportation of dangerous goods in chemical logistics parks.
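A minimal successive-shortest-path sketch of a minimum cost maximum flow computation (illustrative only; the paper's network, capacities and weights are not reproduced). It assumes nonnegative edge costs and no antiparallel edge pairs in the input:

```python
from collections import defaultdict

def min_cost_max_flow(n, edges, s, t):
    """Successive shortest augmenting paths with Bellman-Ford on the residual
    graph. edges: list of (u, v, capacity, cost). Returns (max_flow, min_cost)."""
    cap = defaultdict(int)
    cost = {}
    adj = defaultdict(set)
    for u, v, c, w in edges:
        cap[(u, v)] += c
        cost[(u, v)] = w
        cost[(v, u)] = -w          # residual (reverse) edge refunds the cost
        adj[u].add(v)
        adj[v].add(u)
    flow = total_cost = 0
    while True:
        dist, parent = {s: 0}, {}
        for _ in range(n - 1):     # Bellman-Ford handles the negative residual costs
            for u in list(dist):
                for v in adj[u]:
                    if cap[(u, v)] > 0 and dist[u] + cost[(u, v)] < dist.get(v, float("inf")):
                        dist[v] = dist[u] + cost[(u, v)]
                        parent[v] = u
        if t not in dist:
            return flow, total_cost
        path, v = [], t            # recover the cheapest augmenting path
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[e] for e in path)
        for u, v in path:          # push flow and update residual capacities
            cap[(u, v)] -= push
            cap[(v, u)] += push
        flow += push
        total_cost += push * dist[t]
```

On a toy network, `min_cost_max_flow(4, [(0, 1, 2, 1), (0, 2, 1, 2), (1, 3, 1, 2), (2, 3, 2, 1), (1, 2, 1, 1)], 0, 3)` yields the maximum flow and its minimum routing cost.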
Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems
International Nuclear Information System (INIS)
Helin, T; Burger, M
2015-01-01
A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. Current inverse problem literature often approaches the problem by considering suitable point estimators for the task. Typically the choice is made between the maximum a posteriori (MAP) or the conditional mean (CM) estimate. The benefits of either choice are not well understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme regarding how to connect the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in the Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation enables the study of many properties of the infinite-dimensional MAP estimate that were earlier impossible to study. In a recent work by the authors (Burger and Lucka 2014 Maximum a posteriori estimates in linear inverse problems with logconcave priors are proper Bayes estimators, preprint) the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical prior. (paper)
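The gap between the MAP and CM point estimators is visible already in one dimension with a non-Gaussian (Laplace) prior, a finite-dimensional toy version of the log-concave priors discussed above; all numbers below are illustrative, not from the paper:

```python
import math

def log_post(x, y=1.0, lam=2.0):
    """Unnormalized log-posterior: Gaussian likelihood around datum y,
    Laplace (sparsity-promoting) prior with weight lam."""
    return -0.5 * (x - y) ** 2 - lam * abs(x)

# brute-force grid evaluation of the 1-D posterior on [-3, 3]
xs = [i / 1000.0 for i in range(-3000, 3001)]
ws = [math.exp(log_post(x)) for x in xs]

map_est = xs[max(range(len(xs)), key=lambda i: ws[i])]  # posterior mode (MAP)
cm_est = sum(x * w for x, w in zip(xs, ws)) / sum(ws)   # posterior mean (CM)

# with lam >= y the Laplace prior soft-thresholds the MAP to exactly 0,
# while the CM remains strictly positive: the two estimators disagree
```

The MAP sits at the variational minimizer of 0.5(x−y)² + λ|x|, whereas the CM averages over the skewed posterior; this is the finite-dimensional shadow of the infinite-dimensional distinction analysed in the record.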
Two- and three-index formulations of the minimum cost multicommodity k-splittable flow problem
DEFF Research Database (Denmark)
Gamst, Mette; Jensen, Peter Neergaard; Pisinger, David
2010-01-01
The multicommodity flow problem (MCFP) considers the efficient routing of commodities from their origins to their destinations subject to capacity restrictions and edge costs. Baier et al. [G. Baier, E. Köhler, M. Skutella, On the k-splittable flow problem, in: 10th Annual European Symposium...... of commodities has to be satisfied at the lowest possible cost. The problem has applications in transportation problems where a number of commodities must be routed, using a limited number of distinct transportation units for each commodity. Based on a three-index formulation by Truffot et al. [J. Truffot, C...... on Algorithms, 2002, 101–113] introduced the maximum flow multicommodity k-splittable flow problem (MCkFP) where each commodity may use at most k paths between its origin and its destination. This paper studies the NP-hard minimum cost multicommodity k-splittable flow problem (MCMCkFP) in which a given flow...
Directory of Open Access Journals (Sweden)
Md. Sanaul H. Mondal
2017-03-01
Bangladesh shares a common border with India in the west, north and east and with Myanmar in the southeast. These borders cut across 57 rivers that discharge through Bangladesh into the Bay of Bengal in the south. The upstream courses of these rivers traverse India, China, Nepal and Bhutan. Transboundary flows are important sources of water resources in Bangladesh. Among the 57 transboundary rivers, the Teesta is the fourth major river in Bangladesh after the Ganges, the Brahmaputra and the Meghna, and Bangladesh occupies about 2071 km² of it. The Teesta River floodplain in Bangladesh accounts for 14% of the total cropped area and 9.15 million people of the country. The objective of this study was to investigate trends in both maximum and minimum water flow at the Kaunia and Dalia stations on the Teesta River and the coping strategies developed by the communities to adjust to uncertain flood situations. The flow characteristics of the Teesta were analysed by calculating monthly maximum and minimum water levels and discharges from 1985 to 2006. Discharge of the Teesta over the last 22 years has been decreasing. Extreme low-flow conditions were likely to occur more frequently after the implementation of the Gozoldoba Barrage by India. However, a very sharp decrease in peak flows was also observed, albeit with unexpectedly high discharges in 1988, 1989, 1991, 1997, 1999 and 2004, some occurring between April and October. Onrush of water causes frequent flash floods, whereas decreasing flow leaves the areas dependent on the Teesta vulnerable to droughts. Both these extreme situations had a negative impact on the lives and livelihoods of people dependent on the Teesta. Over the years, people have developed several risk mitigation strategies to adjust to both natural and anthropogenic flood situations. This article proposes the concept of 'MAXIN' (maximum and minimum flows) for river water justice for riparian land.
Grigioni, Mauro; Daniele, Carla; D'Avenio, Giuseppe; Barbaro, Vincenzo
2002-05-01
Turbulent flow generated by prosthetic devices at the bloodstream level may cause mechanical stress on blood particles. Measurement of the Reynolds stress tensor and/or some of its components is a mandatory step to evaluate the mechanical load on blood components exerted by fluid stresses, as well as possible consequent blood damage (hemolysis or platelet activation). Because of the three-dimensional nature of turbulence, in general, a three-component anemometer should be used to measure all components of the Reynolds stress tensor, but this is difficult, especially in vivo. The present study aimed to derive the maximum Reynolds shear stress (RSS) in three commercially available prosthetic heart valves (PHVs) of wide diffusion, starting with monodimensional data provided in vivo by echo Doppler. Accurate measurement of PHV flow field was made using laser Doppler anemometry; this provided the principal turbulence quantities (mean velocity, root-mean-square value of velocity fluctuations, average value of cross-product of velocity fluctuations in orthogonal directions) needed to quantify the maximum turbulence-related shear stress. The recorded data enabled determination of the relationship, the Reynolds stresses ratio (RSR) between maximum RSS and Reynolds normal stress in the main flow direction. The RSR was found to be dependent upon the local structure of the flow field. The reported RSR profiles, which permit a simple calculation of maximum RSS, may prove valuable during the post-implantation phase, when an assessment of valve function is made echocardiographically. Hence, the risk of damage to blood constituents associated with bileaflet valve implantation may be accurately quantified in vivo.
Adaptive boundary conditions for exterior flow problems
Boenisch, V; Wittwer, S
2003-01-01
We consider the problem of solving numerically the stationary incompressible Navier-Stokes equations in an exterior domain in two dimensions. This corresponds to studying the stationary fluid flow past a body. The necessity to truncate for numerical purposes the infinite exterior domain to a finite domain leads to the problem of finding appropriate boundary conditions on the surface of the truncated domain. We solve this problem by providing a vector field describing the leading asymptotic behavior of the solution. This vector field is given in the form of an explicit expression depending on a real parameter. We show that this parameter can be determined from the total drag exerted on the body. Using this fact we set up a self-consistent numerical scheme that determines the parameter, and hence the boundary conditions and the drag, as part of the solution process. We compare the values of the drag obtained with our adaptive scheme with the results from using traditional constant boundary conditions. Computati...
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
International Nuclear Information System (INIS)
Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa
2015-01-01
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
Energy Technology Data Exchange (ETDEWEB)
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.
Clinical evaluation of a simple uroflowmeter for categorization of maximum urinary flow rate
Directory of Open Access Journals (Sweden)
Simon Pridgeon
2007-01-01
Objective: To evaluate the accuracy and diagnostic usefulness of a disposable flowmeter consisting of a plastic funnel with a spout divided into three chambers. Materials and Methods: Men with lower urinary tract symptoms (LUTS) voided sequentially into a standard flowmeter and the funnel device, recording maximum flow rate (Qmax) and voided volume (Vvoid). The device was precalibrated such that filling of the bottom, middle and top chambers categorized maximum input flows as <10, 10-15 and >15 ml s⁻¹, respectively. Subjects who agreed to use the funnel device at home obtained readings of flow category and Vvoid twice daily for seven days. Results: A single office reading in 46 men using the device showed good agreement with the standard measurement of Qmax for Vvoid > 150 ml (Kappa = 0.68). All 14 men whose void reached the top chamber had standard Qmax > 15 ml s⁻¹ (PPV = 100%, NPV = 72%), whilst eight of 12 men whose void remained in the bottom chamber had standard Qmax < 10 ml s⁻¹ (PPV = 70%, NPV = 94%). During multiple home use by 14 men the device showed moderate repeatability (Kappa = 0.58) and correctly categorized Qmax in comparison to the standard measurement for 12 (87%) men. Conclusions: This study suggests that the device has sufficient accuracy and reliability for initial flow rate assessment in men with LUTS. The device can provide a single measurement or, alternatively, multiple home measurements to categorize men with Qmax < 15 ml s⁻¹.
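The three-chamber reading reduces to a simple threshold function; the cut-offs below are the <10, 10-15 and >15 ml/s categories implied by the abstract's PPV/NPV discussion:

```python
def chamber_category(q_max_ml_per_s):
    """Map a maximum flow rate to the funnel chamber it would fill.
    Thresholds are inferred from the abstract's flow categories."""
    if q_max_ml_per_s < 10:
        return "bottom"   # Qmax < 10 ml/s: possible obstruction
    if q_max_ml_per_s <= 15:
        return "middle"   # 10-15 ml/s: equivocal range
    return "top"          # Qmax > 15 ml/s: likely normal flow

assert chamber_category(8) == "bottom"
assert chamber_category(12) == "middle"
assert chamber_category(20) == "top"
```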
Directory of Open Access Journals (Sweden)
Cheng-Chin Liu
2016-01-01
Typhoon Morakot hit southern Taiwan in 2009, bringing 48 hr of heavy rainfall [close to the Probable Maximum Precipitation (PMP)] to the Tsengwen Reservoir catchment. This extreme rainfall event resulted from the combined (co-movement) effect of two climate systems (i.e., typhoon and southwesterly air flow). Based on the traditional PMP estimation method (i.e., the storm transposition method, STM), two PMP estimation approaches that consider the combined effect, i.e., the Amplification Index (AI) and Independent System (IS) approaches, are proposed in this work. The AI approach assumes that the southwesterly air flow precipitation in a typhoon event could reach its maximum value. The IS approach assumes that the typhoon and southwesterly air flow are independent weather systems. Based on these assumptions, calculation procedures for the two approaches were constructed for a case study on the Tsengwen Reservoir catchment. The results show that the PMP estimates for 6- to 60-hr durations using the two approaches are approximately 30% larger than the PMP estimates using the traditional STM without considering the combined effect. This work pioneers a PMP estimation method that considers the combined effect of a typhoon and southwesterly air flow. Further studies on this issue are essential and encouraged.
On the use of Pontryagin's maximum principle in the reactor profiling problem
International Nuclear Information System (INIS)
Silko, P.P.
1976-01-01
The optimal given-power-profile approximation problem in nuclear reactors is posed as a physical profiling problem in terms of the theory of optimal processes. It is necessary to distribute the concentration of the profiling substance in a nuclear reactor in such a way that the power profile obtained in the core is as near as possible to the given profile. It is suggested that the original system of differential equations describing the behaviour of neutrons in a reactor, together with some applied requirements, may be written in the form of ordinary differential equations of the first order. The integral quadratic criterion evaluating the deviation of the power profile obtained in the reactor from the given one is used as the objective function. The initial state is given, and the control aim is to transfer the control object from the initial state to a given set of final states known as the purpose set. The class of permissible controls consists of measurable functions in the given range. Pontryagin's maximum principle is used to solve the formulated problem. As an example, the power profile flattening problem is considered, for which a program in Fortran-4 for the 'Minsk-32' computer has been written. The optimal reactor parameters calculated by this program at various boundary values of the control are presented. It is noted that the type of the optimal reactor configuration depends on the boundary values of the control.
Approximation algorithms for the parallel flow shop problem
X. Zhang (Xiandong); S.L. van de Velde (Steef)
2012-01-01
We consider the NP-hard problem of scheduling n jobs in m two-stage parallel flow shops so as to minimize the makespan. This problem decomposes into two subproblems: assigning the jobs to parallel flow shops; and scheduling the jobs assigned to the same flow shop by use of Johnson's
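The single-flow-shop subproblem mentioned above is solved by Johnson's rule for the two-machine flow shop; a sketch with illustrative job data (not from the paper):

```python
def johnson_two_machine(jobs):
    """Johnson's rule for a single two-machine flow shop.
    jobs: list of (p1, p2) processing times. Returns a makespan-optimal order:
    jobs with p1 <= p2 first (ascending p1), the rest last (descending p2)."""
    front = sorted((j for j in range(len(jobs)) if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j in range(len(jobs)) if jobs[j][0] > jobs[j][1]),
                  key=lambda j: jobs[j][1], reverse=True)
    return front + back

def makespan(jobs, order):
    """Completion time of the last job on machine 2 for a given order."""
    t1 = t2 = 0
    for j in order:
        t1 += jobs[j][0]            # machine 1 finishes job j
        t2 = max(t2, t1) + jobs[j][1]  # machine 2 waits for machine 1 if idle
    return t2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6)]   # illustrative processing times
order = johnson_two_machine(jobs)
```

The parallel version in the record layers an assignment decision on top of this per-shop sequencing step.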
Approximation and hardness results for the maximum edge q-coloring problem
DEFF Research Database (Denmark)
Adamaszek, Anna Maria; Popa, Alexandru
2016-01-01
We consider the problem of coloring edges of a graph subject to the following constraints: for every vertex v, all the edges incident with v have to be colored with at most q colors. The goal is to find a coloring satisfying the above constraints and using the maximum number of colors. Notice...... ϵ>0 and any q≥2 assuming the unique games conjecture (UGC), or 1+−ϵ for any ϵ>0 and any q≥3 (≈1.19 for q=2) assuming P≠NP. These results hold even when the considered graphs are bipartite. On the algorithmic side, we restrict to the case q=2, since this is the most important in practice and we show...... a 5/3-approximation algorithm for graphs which have a perfect matching....
An electromagnetism-like method for the maximum set splitting problem
Directory of Open Access Journals (Sweden)
Kratica Jozef
2013-01-01
In this paper, an electromagnetism-like approach (EM) for solving the maximum set splitting problem (MSSP) is applied. A hybrid approach consisting of movement based on attraction-repulsion mechanisms, combined with the proposed scaling technique, directs EM to promising search regions. A fast implementation of the local search procedure additionally improves the efficiency of the overall EM system. The performance of the proposed EM approach is evaluated on two classes of instances from the literature: minimum hitting set and Steiner triple systems. The results show that, except in one case, EM reaches optimal solutions on minimum hitting set instances with up to 500 elements and 50000 subsets. It also reaches all optimal/best-known solutions for Steiner triple systems.
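For orientation: maximum set splitting asks for a two-coloring of the ground set that "splits" (makes bichromatic) as many subsets as possible. The sketch below is a plain 1-flip local search baseline on a toy instance, not the electromagnetism-like metaheuristic of the paper:

```python
def split_count(subsets, side):
    """Number of subsets containing elements on both sides of the partition."""
    return sum(1 for s in subsets
               if any(side[e] for e in s) and any(not side[e] for e in s))

def local_search_splitting(n, subsets):
    """Greedy 1-flip local search: move an element across the partition
    whenever that strictly increases the number of split subsets."""
    side = [i % 2 == 0 for i in range(n)]  # deterministic alternating start
    best = split_count(subsets, side)
    improved = True
    while improved:
        improved = False
        for e in range(n):
            side[e] = not side[e]
            c = split_count(subsets, side)
            if c > best:
                best, improved = c, True
            else:
                side[e] = not side[e]      # undo a non-improving flip
    return best, side

subsets = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {0, 2}]
best, side = local_search_splitting(4, subsets)
```

A metaheuristic such as EM replaces this myopic flip rule with population-based moves that can escape local optima.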
Cossio-Bolaños, Marco; Lee-Andruske, Cynthia; de Arruda, Miguel; Luarte-Rocha, Cristian; Almonacid-Fierro, Alejandro; Gómez-Campos, Rossana
2018-03-02
Maintaining and building healthy bones during the lifetime requires a complicated interaction between a number of physiological and lifestyle factors. The goal of this study was to analyze the association of hand grip strength and maximum peak expiratory flow with bone mineral density and content in adolescent students. The research team studied 1427 adolescent students of both sexes (750 males and 677 females) between the ages of 11.0 and 18.9 years in the Maule Region of Talca (Chile). Weight, standing height, sitting height, hand grip strength (HGS), and maximum peak expiratory flow (PEF) were measured. Furthermore, bone mineral density (BMD) and total body bone mineral content (BMC) were determined by using Dual-Energy X-Ray Absorptiometry (DXA). Hand grip strength and PEF were categorized in tertiles (lowest, middle, and highest). Linear regression was performed in steps to analyze the relationship between the variables. Differences between categories were determined through ANOVA. In males, hand grip strength explained 18-19% of the variation in BMD and 20-23% of the BMC. For females, the explained variation was 12-13% for BMD and 17-18% for BMC. For males, PEF explained 33% of the variation in BMD and 36% of the BMC. For females, both BMD and BMC showed an explained variation of 19%. The HGS and PEF were divided into three categories (lowest, middle, and highest). In both cases, significant differences in bone density health occurred between the three categories. In conclusion, HGS and PEF related positively to the bone density health of adolescent students of both sexes. The adolescents with poor values for hand grip strength and expiratory flow showed reduced values of BMD and BMC for the total body. Furthermore, PEF had a greater influence on bone density health than HGS in adolescents of both sexes.
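The "percentage of variation explained" figures above are coefficients of determination (R²) from linear regression; a sketch with synthetic, illustrative data (not the study's measurements):

```python
def r_squared(xs, ys):
    """Coefficient of determination for simple linear regression y ~ a + b*x,
    i.e. the fraction of the variance in y explained by x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                  # least-squares slope
    a = my - b * mx                # least-squares intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# hypothetical stand-in data: grip strength (kg) vs total-body BMC (g)
hgs = [20, 25, 30, 35, 40, 45]
bmc = [1500, 1620, 1690, 1810, 1900, 2010]
```

A value of, say, 0.20 would correspond to the "20% of the BMC" phrasing in the abstract; the stepwise models in the study add further covariates.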
Melak, Tilahun; Gakkhar, Sunita
2015-12-01
In spite of the implementation of several strategies, tuberculosis (TB) remains a serious global public health problem causing millions of infections and deaths every year. This is mainly due to the emergence of drug-resistant varieties of TB. The current treatment strategies for drug-resistant TB are of longer duration, more expensive and have side effects. This highlights the importance of identification and prioritization of targets for new drugs. This study has been carried out to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv based on their flow to resistance genes. The weighted proteome interaction network of the pathogen was constructed using a dataset from the STRING database. Only a subset of the dataset with interactions that have a combined score value ≥770 was considered. A maximum flow approach has been used to prioritize potential drug targets. The potential drug targets were obtained through comparative genome and network centrality analysis. The curated set of resistance genes was retrieved from the literature. A detailed literature review and additional assessment of the method were also carried out for validation. A list of 537 proteins which are essential to the pathogen and non-homologous with human proteins was obtained from the comparative genome analysis. Through network centrality measures, 131 of them were found within the close neighborhood of the centre of gravity of the proteome network. These proteins were further prioritized based on their maximum flow value to resistance genes and they are proposed as reliable drug targets of the pathogen. Proteins which interact with the host were also identified in order to understand the infection mechanism. Potential drug targets of Mycobacterium tuberculosis H37Rv were successfully prioritized based on their flow to resistance genes of existing drugs, which is believed to increase the druggability of the targets since inhibition of a protein that has a maximum flow to
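The flow-based prioritization above can be imitated with any standard maximum flow routine. Below is a minimal sketch (not the authors' implementation; the toy network, node names and capacities are all invented for illustration) that ranks two hypothetical candidate targets by their maximum flow to a resistance gene node R, using Edmonds-Karp augmentation:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths (BFS)."""
    # build residual capacities, adding zero-capacity reverse edges
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:          # no augmenting path left
            return flow
        path, v = [], t              # walk back to find the path and bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Toy interaction network: candidate targets A, B; resistance gene R.
# Capacities are made up; in the paper they would come from STRING scores.
g = {"A": {"x": 5, "y": 3}, "B": {"y": 2},
     "x": {"R": 4}, "y": {"R": 6}, "R": {}}
ranking = sorted(["A", "B"], key=lambda n: -max_flow(g, n, "R"))
print(ranking)  # A carries more flow to R than B
```

With several resistance genes, one would add a super-sink connected to all of them and rank candidates by flow into that sink.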
Molecular Sticker Model Simulation on Silicon for a Maximum Clique Problem
Directory of Open Access Journals (Sweden)
Jianguo Ning
2015-06-01
Molecular computers (also called DNA computers), as an alternative to traditional electronic computers, are smaller in size but more energy efficient, and have massive parallel processing capacity. However, DNA computers may not outperform electronic computers owing to their higher error rates and some limitations of the biological laboratory. The stickers model, as a typical DNA-based computer, is computationally complete and universal, and can be viewed as a bit-vertically operating machine. This makes it attractive for silicon implementation. Inspired by the information processing method of the stickers computer, we propose a novel parallel computing model called DEM (DNA Electronic Computing Model) on System-on-a-Programmable-Chip (SOPC) architecture. Except for the significant difference in the computing medium—transistor chips rather than bio-molecules—the DEM works similarly to DNA computers in immense parallel information processing. Additionally, a plasma display panel (PDP) is used to show the change of solutions, and helps us directly see the distribution of assignments. The feasibility of the DEM is tested by applying it to compute a maximum clique problem (MCP) with eight vertices. Owing to the limited computing resources on the SOPC architecture, the DEM can solve moderate-size problems in polynomial time.
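The bit-vertical parallelism of the stickers model can be mimicked in ordinary software with bitmasks: each subset of vertices plays the role of one candidate "solution strand". A brute-force sketch for an eight-vertex MCP (the graph is invented for illustration; the DEM hardware itself is not modeled):

```python
def max_clique(n, edges):
    """Exhaustive bitmask search over all 2^n vertex subsets; keep the
    largest subset whose members are pairwise adjacent."""
    adj = [0] * n
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u
    best = 0
    for s in range(1 << n):
        # s is a clique if, for every member v, all other members lie in adj[v]
        if all(s & ~adj[v] & ~(1 << v) == 0 for v in range(n) if s >> v & 1):
            if bin(s).count("1") > bin(best).count("1"):
                best = s
    return [v for v in range(n) if best >> v & 1]

# 8-vertex example: a 4-clique {0,1,2,3} plus a trailing path
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (3, 4), (4, 5), (5, 6), (6, 7)]
print(max_clique(8, edges))  # -> [0, 1, 2, 3]
```

For n = 8 this is only 256 subsets; the exponential subset space is exactly what the molecular and DEM approaches process in parallel.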
Directory of Open Access Journals (Sweden)
A. Kleidon
2013-01-01
The organization of drainage basins shows some reproducible phenomena, as exemplified by self-similar fractal river network structures and typical scaling laws, and these have been related to energetic optimization principles, such as minimization of stream power, minimum energy expenditure or maximum "access". Here we describe the organization and dynamics of drainage systems using thermodynamics, focusing on the generation, dissipation and transfer of free energy associated with river flow and sediment transport. We argue that the organization of drainage basins reflects the fundamental tendency of natural systems to deplete driving gradients as fast as possible through the maximization of free energy generation, thereby accelerating the dynamics of the system. This effectively results in the maximization of sediment export to deplete topographic gradients as fast as possible and potentially involves large-scale feedbacks to continental uplift. We illustrate this thermodynamic description with a set of three highly simplified models related to water and sediment flow and describe the mechanisms and feedbacks involved in the evolution and dynamics of the associated structures. We close by discussing how this thermodynamic perspective is consistent with previous approaches and the implications that such a thermodynamic description has for the understanding and prediction of sub-grid scale organization of drainage systems and preferential flow structures in general.
International Nuclear Information System (INIS)
Banach, Zbigniew; Larecki, Wieslaw
2013-01-01
The spectral formulation of the nine-moment radiation hydrodynamics resulting from using the Boltzmann entropy maximization procedure is considered. The analysis is restricted to the one-dimensional flows of a gas of massless fermions. The objective of the paper is to demonstrate that, for such flows, the spectral nine-moment maximum entropy hydrodynamics of fermionic radiation is not a purely formal theory. We first determine the domains of admissible values of the spectral moments and of the Lagrange multipliers corresponding to them. We then prove the existence of a solution to the constrained entropy optimization problem. Due to the strict concavity of the entropy functional defined on the space of distribution functions, there exists a one-to-one correspondence between the Lagrange multipliers and the moments. The maximum entropy closure of moment equations results in the symmetric conservative system of first-order partial differential equations for the Lagrange multipliers. However, this system can be transformed into the equivalent system of conservation equations for the moments. These two systems are consistent with the additional conservation equation interpreted as the balance of entropy. Exploiting the above facts, we arrive at the differential relations satisfied by the entropy function and the additional function required to close the system of moment equations. We refer to this additional function as the moment closure function. In general, the moment closure and entropy–entropy flux functions cannot be explicitly calculated in terms of the moments determining the state of a gas. Therefore, we develop a perturbation method of calculating these functions. Some additional analytical (and also numerical) results are obtained, assuming that the maximum entropy distribution function tends to the Maxwell–Boltzmann limit. (paper)
International Nuclear Information System (INIS)
Anton, V.
1979-05-01
A new formulation of multigroup cross section collapsing based on the conservation of the point or zone value of the Hamiltonian is presented. This approach is suited to optimization problems solved by means of Pontryagin's maximum principle. (author)
Solving the minimum flow problem with interval bounds and flows
Indian Academy of Sciences (India)
... with crisp data. In this paper, the idea of Ghiyasvand was extended for solving the minimum flow problem with interval-valued lower, upper bounds and flows. This problem can be solved using two minimum flow problems with crisp data. Then, this result is extended to networks with fuzzy lower, upper bounds and flows.
Reynolds analogy for the Rayleigh problem at various flow modes.
Abramov, A A; Butkovskii, A V
2016-07-01
The Reynolds analogy and the extended Reynolds analogy for the Rayleigh problem are considered. For a viscous incompressible fluid we derive the Reynolds analogy as a function of the Prandtl number and the Eckert number. We show that for any positive Eckert number, the Reynolds analogy as a function of the Prandtl number has a maximum. For a monatomic gas in the transitional flow regime, using the direct simulation Monte Carlo method, we investigate the extended Reynolds analogy, i.e., the relation between the shear stress and the energy flux transferred to the boundary surface, at different velocities and temperatures. We find that the extended Reynolds analogy for a rarefied monatomic gas flow with the temperature of the undisturbed gas equal to the surface temperature depends weakly on time and is close to 0.5. We show that at any fixed dimensionless time the extended Reynolds analogy depends on the plate velocity and temperature and the undisturbed gas temperature mainly via the Eckert number. For Eckert numbers of the order of unity or less we generalize the extended Reynolds analogy. The generalized Reynolds analogy depends mainly on dimensionless time for all considered Eckert numbers of the order of unity or less.
3D Topology optimization of Stokes flow problems
DEFF Research Database (Denmark)
Gersborg-Hansen, Allan; Dammann, Bernd
The present talk is concerned with the application of topology optimization to creeping flow problems in 3D. This research is driven by the fact that topology optimization has proven very successful as a tool in academic and industrial design problems. Success stories are reported from such diverse … of energy efficient devices for 2D Stokes flow. Creeping flow problems are described by the Stokes equations, which model very viscous fluids at macro scales or ordinary fluids at very small scales. The latter gives the motivation for topology optimization problems based on the Stokes equations being a model…
Numerical Solution to Transient Heat Flow Problems
Kobiske, Ronald A.; Hock, Jeffrey L.
1973-01-01
Discusses the reduction of the one- and three-dimensional diffusion equation to the difference equation and its stability, convergence, and heat-flow applications under different boundary conditions. Indicates the usefulness of this presentation for beginning students of physics and engineering as well as college teachers. (CC)
A finite element method for flow problems in blast loading
International Nuclear Information System (INIS)
Forestier, A.; Lepareux, M.
1984-06-01
This paper presents a numerical method which describes fast dynamic problems in flow transient situations, as in nuclear plants. A finite element formulation has been chosen; it is described by a preprocessor in the CASTEM system, the GIBI code. For these typical flow problems, an A.L.E. formulation of the physical equations is used. Some applications are presented: the well-known shock tube problem, the same problem in the 2D case, and a final application to hydrogen detonation.
FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.
Directory of Open Access Journals (Sweden)
Maxim Nikolaievich Shokhirev
The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/p50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.
DEFF Research Database (Denmark)
Sander, Pia; Mouritsen, L; Andersen, J Thorup
2002-01-01
OBJECTIVE: The aim of this study was to evaluate the value of routine measurements of urinary flow rate and residual urine volume as part of a "minimal care" assessment programme for women with urinary incontinence in detecting clinically significant bladder emptying problems. MATERIAL AND METHODS: … Twenty-six per cent had a maximum flow rate less than 15 ml/s, but only 4% at a voided volume ≥200 ml. Residual urine of more than 149 ml was found in 6%. Two women had chronic retention with overflow incontinence. Both had typical symptoms with continuous leakage, stranguria and chronic cystitis…
Liu, Yikan
2015-01-01
In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...
Contribution of Fuzzy Minimal Cost Flow Problem by Possibility Programming
S. Fanati Rashidi; A. A. Noora
2010-01-01
Using the concept of possibility proposed by Zadeh, Luhandjula ([4,8]) and Buckley ([1]) have proposed possibility programming. The formulation of Buckley results in nonlinear programming problems. Negi [6] re-formulated the approach of Buckley by the use of trapezoidal fuzzy numbers and reduced the problem to a fuzzy linear programming problem. Shih and Lee ([7]) used the Negi approach to solve a minimum cost flow problem, with fuzzy costs and upper and lower bounds. ...
Topology Optimization of Large Scale Stokes Flow Problems
DEFF Research Database (Denmark)
Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan
2008-01-01
This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1,125,000 elements in 2D and 128,000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.
Determination of free boundary problem of flow through porous media
International Nuclear Information System (INIS)
Tavares Junior, H.M.; Souza, A.J. de
1989-01-01
This paper deals with a free boundary problem of flow through porous media, which is solved by a simplicial method combined with mesh refinement. A variational method on a fixed domain is utilized. (author)
Solving Minimum Cost Multi-Commodity Network Flow Problem ...
African Journals Online (AJOL)
ADOWIE PERE
2018-03-23
… network-based modeling framework for integrated fixed and mobile … Minimum Cost Network Flow Problem (MCNFP) and some … Unmanned Aerial Vehicle Routing in Traffic Incident … Ph.D. Thesis, Dept. of Surveying & …
Using a genetic algorithm to solve fluid-flow problems
International Nuclear Information System (INIS)
Pryor, R.J.
1990-01-01
Genetic algorithms are based on the mechanics of natural selection and natural genetics. These algorithms are finding increasing application to a wide variety of engineering optimization and machine learning problems. In this paper, the authors demonstrate the use of a genetic algorithm to solve fluid flow problems. Specifically, the authors use the algorithm to solve the one-dimensional flow equations for a pipe.
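As a hedged illustration of the idea (not the authors' code), a small genetic algorithm can search for the pressure drop that produces a target laminar flow rate in a single pipe, with the fitness of a candidate measured against the Hagen-Poiseuille relation; all physical values below are invented:

```python
import math
import random

random.seed(1)  # deterministic run for reproducibility

# Illustrative pipe data (all values invented): radius [m], viscosity [Pa·s], length [m]
R, MU, L = 0.01, 1.0e-3, 2.0
Q_TARGET = 1.0e-5  # desired volumetric flow rate [m^3/s]

def flow_rate(dp):
    """Hagen-Poiseuille laminar flow rate through the pipe for pressure drop dp."""
    return math.pi * R**4 * dp / (8 * MU * L)

def fitness(dp):
    """Higher is better: negative squared mismatch with the target flow rate."""
    return -(flow_rate(dp) - Q_TARGET) ** 2

def genetic_search(pop_size=30, gens=60, lo=0.0, hi=100.0):
    """Tiny GA: elitist selection, blend crossover, Gaussian mutation."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = 0.5 * (a + b)             # crossover: blend two parents
            child += random.gauss(0.0, 1.0)   # mutation
            children.append(min(max(child, lo), hi))
        pop = elite + children
    return max(pop, key=fitness)

dp = genetic_search()
# analytic optimum for comparison: dp* = 8*MU*L*Q_TARGET / (pi*R^4) ≈ 5.09 Pa
```

The same selection/crossover/mutation loop carries over when the fitness instead comes from a numerical solution of the discretized flow equations.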
Numerical solution of pipe flow problems for generalized Newtonian fluids
International Nuclear Information System (INIS)
Samuelsson, K.
1993-01-01
In this work we study the stationary laminar flow of incompressible generalized Newtonian fluids in a pipe with constant arbitrary cross-section. The resulting nonlinear boundary value problems can be written in a variational formulation and solved using finite elements and the augmented Lagrangian method. The solution of the boundary value problem is obtained by finding a saddle point of the augmented Lagrangian. In the algorithm the nonlinear part of the equations is treated locally and the solution is obtained by iteration between this nonlinear problem and a global linear problem. For the solution of the linear problem we use the SSOR preconditioned conjugate gradient method. The approximating problem is solved on a sequence of adaptively refined grids. A scheme for adjusting the value of the crucial penalization parameter of the augmented Lagrangian is proposed. Applications to pipe flow and a problem from the theory of capacities are given. (author) (34 refs.)
Malone, Stephen M.; McGue, Matt; Iacono, William G.
2010-01-01
Background: The maximum number of alcoholic drinks consumed in a single 24-hr period is an alcoholism-related phenotype with both face and empirical validity. It has been associated with severity of withdrawal symptoms and sensitivity to alcohol, genes implicated in alcohol metabolism, and amplitude of a measure of brain activity associated with…
Isospectral Flows for the Inhomogeneous String Density Problem
Górski, Andrzej Z.; Szmigielski, Jacek
2018-02-01
We derive isospectral flows of the mass density in the string boundary value problem corresponding to general boundary conditions. In particular, we show that certain class of rational flows produces in a suitable limit all flows generated by polynomials in negative powers of the spectral parameter. We illustrate the theory with concrete examples of isospectral flows of discrete mass densities which we prove to be Hamiltonian and for which we provide explicit solutions of equations of motion in terms of Stieltjes continued fractions and Hankel determinants.
Maximum Entropy Method in Moessbauer Spectroscopy - a Problem of Magnetic Texture
International Nuclear Information System (INIS)
Satula, D.; Szymanski, K.; Dobrzynski, L.
2011-01-01
A reconstruction of the three-dimensional distribution of the hyperfine magnetic field, isomer shift and texture parameter z from Moessbauer spectra by the maximum entropy method is presented. The method was tested on a simulated spectrum consisting of two Gaussian hyperfine field distributions with different values of the texture parameter. It is shown that a proper prior has to be chosen in order to arrive at physically meaningful results. (authors)
Contribution of Fuzzy Minimal Cost Flow Problem by Possibility Programming
Directory of Open Access Journals (Sweden)
S. Fanati Rashidi
2010-06-01
Using the concept of possibility proposed by Zadeh, Luhandjula ([4,8]) and Buckley ([1]) have proposed possibility programming. The formulation of Buckley results in nonlinear programming problems. Negi [6] re-formulated the approach of Buckley by the use of trapezoidal fuzzy numbers and reduced the problem to a fuzzy linear programming problem. Shih and Lee ([7]) used the Negi approach to solve a minimum cost flow problem, with fuzzy costs and upper and lower bounds. In this paper we consider the general form of this problem, where all of the parameters and variables are fuzzy, and a model for solving it is proposed.
An approximate method of estimating the maximum saturation, the nucleation rate, and the total number nucleated per second during the laminar flow of a hot vapour–gas mixture along a tube with cold walls is described. The basis of the approach is that the temperature an...
Flow-shop scheduling problem under uncertainties: Review and trends
Eliana María González-Neira; Jairo R. Montoya-Torres; David Barrera
2017-01-01
Among the different tasks in production logistics, job scheduling is one of the most important at the operational decision-making level to enable organizations to achieve competitiveness. Scheduling consists in the allocation of limited resources to activities over time in order to achieve one or more optimization objectives. Flow-shop (FS) scheduling problems encompass the sequencing processes in environments in which the activities or operations are performed in a serial flow. This type of co...
Heuristics for no-wait flow shop scheduling problem
Directory of Open Access Journals (Sweden)
Kewal Krishan Nailwal
2016-09-01
No-wait flow shop scheduling refers to a continuous flow of jobs through different machines. Once started, a job must be processed continuously through the machines without waiting. This situation occurs when there is a lack of intermediate storage between the processing of jobs on two consecutive machines. The no-wait problem with the objective of minimizing makespan in flow shop scheduling is NP-hard; therefore heuristic algorithms are the key to solving the problem optimally or to approaching the optimal solution in a simple manner. The paper describes two heuristics: a constructive one, and an improvement heuristic algorithm obtained by modifying the constructive one, for sequencing n jobs through m machines in a flow shop under the no-wait constraint with the objective of minimizing makespan. The efficiency of the proposed heuristic algorithms is tested on 120 of Taillard's benchmark problems from the literature against the NEH heuristic under no-wait and the MNEH heuristic for the no-wait flow shop problem. The improvement heuristic outperforms all heuristics on the Taillard instances, improving the results of NEH by 27.85%, MNEH by 22.56%, and the proposed constructive heuristic algorithm by 24.68%. To explain the computational process of the proposed algorithm, numerical illustrations are also given in the paper. Statistical tests of significance are done in order to draw the conclusions.
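The no-wait constraint makes a sequence's makespan easy to evaluate: each job's start time is fixed by its predecessor alone. A sketch of this evaluation plus an NEH-style insertion heuristic (the processing times are illustrative, and this is a generic NEH adaptation, not the paper's specific algorithms):

```python
def no_wait_makespan(seq, p):
    """Makespan of a no-wait flow shop: job j on machine k runs in the
    interval [s_j + B_j(k), s_j + B_j(k) + p[j][k]] with no idle time for j."""
    m = len(p[0])
    start = 0
    for prev, job in zip(seq, seq[1:]):
        # minimum start offset keeping `job` behind `prev` on every machine
        shift = 0
        done_prev, before_job = 0, 0
        for k in range(m):
            done_prev += p[prev][k]                 # prev finishes machine k here
            shift = max(shift, done_prev - before_job)
            before_job += p[job][k]                 # job reaches machine k+1 here
        start += shift
    return start + sum(p[seq[-1]])

def neh_no_wait(p):
    """NEH-style construction: order jobs by decreasing total work, then
    insert each at the position giving the smallest partial makespan."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: no_wait_makespan(s, p))
    return seq

# 4 jobs x 3 machines (processing times invented)
p = [[5, 3, 4], [2, 6, 3], [4, 4, 4], [3, 2, 5]]
seq = neh_no_wait(p)
print(seq, no_wait_makespan(seq, p))
```

The evaluation runs in O(nm) per sequence, which is what makes insertion-based heuristics cheap for this problem.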
Directory of Open Access Journals (Sweden)
Ahmad Zeraatkar Moghaddam
2012-01-01
This paper presents a mathematical model for the problem of minimizing the maximum lateness on a single machine when deteriorated jobs are delivered to each customer in various-size batches. In reality, this issue may arise within a supply chain in which delivering goods to customers entails cost. Under such a situation, keeping completed jobs to deliver in batches may result in reduced delivery costs. In the batch scheduling literature, minimizing the maximum lateness is known to be NP-hard; therefore the present problem, which adds delivery costs to the aforementioned objective function, remains NP-hard. In order to solve the proposed model, a simulated annealing meta-heuristic is used, where the parameters are calibrated by the Taguchi approach and the results are compared to the global optimal values generated by the Lingo 10 software. Furthermore, in order to check the efficiency of the proposed method on larger problem instances, a lower bound is generated. The results are also analyzed based on the effective factors of the problem. A computational study validates the efficiency and the accuracy of the presented model.
Optimal Results and Numerical Simulations for Flow Shop Scheduling Problems
Directory of Open Access Journals (Sweden)
Tao Ren
2012-01-01
This paper considers the m-machine flow shop problem with two objectives: makespan with release dates and total quadratic completion time, respectively. For Fm|rj|Cmax, we prove the asymptotic optimality of any dense scheduling when the problem scale is large enough. For Fm||ΣCj², an improvement strategy with local search is presented to promote the performance of the classical SPT heuristic. At the end of the paper, simulations show the effectiveness of the improvement strategy.
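The SPT rule referenced above simply orders jobs by nondecreasing processing time. A single-machine simplification (not the paper's m-machine setting; processing times invented) showing that SPT reduces the total quadratic completion time relative to an arbitrary order:

```python
def total_quadratic_completion(seq, p):
    """Sum of squared completion times on a single machine for job order `seq`."""
    t = total = 0
    for j in seq:
        t += p[j]          # completion time of job j
        total += t * t
    return total

p = [7, 2, 5, 3]                                   # processing times (invented)
spt = sorted(range(len(p)), key=lambda j: p[j])    # shortest processing time first
arbitrary = [0, 1, 2, 3]
print(total_quadratic_completion(spt, p),
      total_quadratic_completion(arbitrary, p))    # SPT gives the smaller value
```

On a single machine, SPT makes every k-th completion time as small as possible, so it minimizes the sum of any increasing function of the completion times, ΣCj² included; in the flow shop setting it is only a heuristic, which is what the paper's local search improves.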
Analytical methods for heat transfer and fluid flow problems
Weigand, Bernhard
2015-01-01
This book describes useful analytical methods by applying them to real-world problems rather than solving the usual over-simplified classroom problems. The book demonstrates the applicability of analytical methods even for complex problems and guides the reader to a more intuitive understanding of approaches and solutions. Although the solution of Partial Differential Equations by numerical methods is the standard practice in industries, analytical methods are still important for the critical assessment of results derived from advanced computer simulations and the improvement of the underlying numerical techniques. Literature devoted to analytical methods, however, often focuses on theoretical and mathematical aspects and is therefore useless to most engineers. Analytical Methods for Heat Transfer and Fluid Flow Problems addresses engineers and engineering students. The second edition has been updated: the chapters on non-linear problems and on axial heat conduction problems were extended, and worked-out exam...
Energy Technology Data Exchange (ETDEWEB)
Oldenburg, C.M.; Pruess, K. [Lawrence Berkeley Laboratory, Berkeley, CA (United States)
1995-03-01
We have developed TOUGH2 modules for strongly coupled flow and transport that include full hydrodynamic dispersion. T2DM models two-dimensional flow and transport in systems with variable salinity, while T2DMR includes radionuclide transport with first-order decay of a parent-daughter chain of radionuclide components in variable salinity systems. T2DM has been applied to a variety of coupled flow problems including the pure solutal convection problem of Elder and the mixed free and forced convection salt-dome flow problem. In the Elder and salt-dome flow problems, density changes of up to 20% caused by brine concentration variations lead to strong coupling between the velocity and brine concentration fields. T2DM efficiently calculates flow and transport for these problems. We have applied T2DMR to the dispersive transport and decay of radionuclide tracers in flow fields with permeability heterogeneities and recirculating flows. Coupling in these problems occurs by velocity-dependent hydrodynamic dispersion. Our results show that the maximum daughter species concentration may occur fully within a recirculating or low-velocity region. In all of the problems, we observe very efficient handling of the strongly coupled flow and transport processes.
Field-aligned flows of H+ and He+ in the mid-latitude topside ionosphere at solar maximum
International Nuclear Information System (INIS)
Bailey, G.J.; Sellek, R.
1992-01-01
A time-dependent mathematical model of the Earth's ionosphere and plasmasphere has been used to investigate the field-aligned flows of H+ and He+ in the topside ionosphere at L = 3 during solar maximum. When the flux-tube content is low there are upward flows of H+ and He+ during daytime in both the winter and summer topside ionospheres. During winter night-time the directions of flow are, in general, downwards for He+, because of the night-time decrease in He+ scale height, and upwards for H+, because of the replenishment needs of the flux tube. In the winter topside ionosphere, during the later stages of flux-tube replenishment, H+ generally flows downwards during both day and night as a result of the greater plasma pressure in the summer hemisphere, whilst He+ flows upwards during the day and downwards at night. In the summer topside ionosphere H+ flows upwards to replace the H+ lost from the plasmasphere to the winter topside ionosphere, whilst the winter helium bulge leads to flows of He+ that are in the direction winter hemisphere to summer hemisphere. When the flux-tube content is low, counterstreaming of H+ and He+, with H+ flowing upwards and He+ downwards, occurs for most of the day above about 5000 km altitude in the summer hemisphere. There are occurrences of this type of counterstreaming in both the summer and winter hemispheres during the night. When the flux-tube content is high, counterstreaming of H+ and He+ occurs less frequently and over smaller regions of the flux tube. There are regions in both hemispheres where H+ flows downwards whilst He+ flows upwards. (Author)
International Nuclear Information System (INIS)
Bizon, Nicu
2014-01-01
Highlights: • The Maximum Efficiency Point (MEP) is tracked based on air flow rate. • The proposed Extremum Seeking (ES) control assures high performance. • About 10 kW/s search speed and 99.99% stationary accuracy can be obtained. • The energy efficiency increases by 3–12%, according to the power losses. • The control strategy is robust, based on the proposed self-optimizing ES scheme. - Abstract: An advanced control of the air compressor for the Proton Exchange Membrane Fuel Cell (PEMFC) system is proposed in this paper based on an Extremum Seeking (ES) control scheme. The FC net power depends mainly on the air and hydrogen flow rate and pressure, and on heat and water management. This paper proposes to compute the optimal value for the air flow rate based on the advanced ES control scheme in order to maximize the FC net power. In this way, the Maximum Efficiency Point (MEP) will be tracked in real time, with about 10 kW/s search speed and a stationary accuracy of 0.99. Thus, energy efficiency will be close to the maximum value that can be obtained for a given PEMFC stack and compressor group under dynamic load. It is shown that MEP tracking allows an increase of the FC net power by 3–12%, depending on the percentage of the FC power supplied to the compressor and the level of the load power. Simulations show that the performances mentioned above are achieved.
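A generic dither-based extremum seeking loop (a sketch of the ES idea only, not the authors' controller; the net-power map and all gains below are hypothetical) climbs to the maximum of a measured map by correlating the output with a small sinusoidal perturbation:

```python
import math

def net_power(u):
    """Hypothetical concave net-power map of the air flow set-point u,
    with its maximum at u* = 4.0 (illustrative units)."""
    return 10.0 - (u - 4.0) ** 2

def extremum_seeking(u0, steps=10000, dt=0.01, a=0.1, omega=50.0, k=0.5):
    """Dither-based ES: inject a*sin(wt), correlate the measured output with
    the dither, and integrate the resulting gradient estimate into u."""
    u = u0
    for n in range(steps):
        dither = math.sin(omega * n * dt)
        y = net_power(u + a * dither)   # measured output at the perturbed set-point
        u += dt * k * y * dither        # averages to (k*a/2) * dJ/du
    return u

u = extremum_seeking(1.0)
# u settles near the optimum u* = 4.0, within the dither-induced ripple
```

The averaged dynamics behave like gradient ascent with gain k·a/2, so the set-point converges to a neighborhood of the maximum without any model of the map; real ES schemes add high-pass/low-pass filtering around the correlation to reduce the ripple.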
Topology optimization of 3D Stokes flow problems
DEFF Research Database (Denmark)
Gersborg-Hansen, Allan; Sigmund, Ole; Bendsøe, Martin P.
… fluid mechanics. In future practice a muTAS could be used by doctors, engineers etc. as a hand-held device with short reaction time that provides on-site analysis of a flowing substance such as blood, polluted water or similar. Borrvall and Petersson [2] paved the road for using topology … particular at micro scales, since they are easily manufacturable and maintenance free. Here we consider topology optimization of 3D Stokes flow problems, which is a reasonable fluid model to use at small scales. The presentation elaborates on effects caused by 3D fluid modelling on the design. Numerical …
International Nuclear Information System (INIS)
Nigmatulin, B.I.; Soplenkov, K.I.
1978-01-01
On the basis of the concepts of two-phase dispersive flow with various structures (bubble, vapour-drop, etc.), in the framework of a two-speed, two-temperature, one-dimensional stationary model of the current with provision for phase transitions, the conditions under which a critical (maximum) flow rate of the two-phase mixture is achieved during its outflow from a channel with pre-set geometry have been determined. It is shown that, for the chosen set of two-phase flow equations with known parameters of deceleration and structure, one of the critical conditions is satisfied: either the solution of the set of equations corresponding to a critical flow rate is a special one, i.e. it passes through a special point located between the minimum and outlet channel sections where the carrying-phase velocity approaches the value of the decelerated sound speed in the mixture, or the determinant of the initial set of equations equals zero at the outlet channel section, i.e. the gradients of the main flow parameters tend to ±infinity in this section, and the carrying-phase velocity also approaches the value of the decelerated sound velocity in the mixture
Progress with multigrid schemes for hypersonic flow problems
International Nuclear Information System (INIS)
Radespiel, R.; Swanson, R.C.
1995-01-01
Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm employs upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10⁶ and Mach numbers up to 25. 32 refs., 31 figs., 1 tab
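The coarse-grid acceleration idea behind such multigrid schemes can be illustrated in the simplest setting, a 1D Poisson model problem with a two-level cycle (smoothing, residual restriction, exact coarse solve, prolongation of the correction). This is a generic sketch of the two-level principle only, not the upwind multistage scheme or the semicoarsening strategies of the cited paper.

```python
import numpy as np

def jacobi(u, f, h, nsweeps, w=2.0/3.0):
    """Weighted Jacobi sweeps for the 1D model problem -u'' = f, u(0)=u(1)=0."""
    for _ in range(nsweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h, n):
    """One two-level cycle: pre-smooth, restrict the residual (full weighting),
    solve the coarse problem exactly, prolong the correction, post-smooth."""
    u = jacobi(u, f, h, 3)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)   # residual
    nc = n // 2
    rc = np.zeros(nc + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])        # restrict
    A = (np.diag(2.0 * np.ones(nc - 1))
         - np.diag(np.ones(nc - 2), 1)
         - np.diag(np.ones(nc - 2), -1)) / (2 * h) ** 2
    ec = np.zeros(nc + 1)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                        # coarse solve
    e = np.interp(np.arange(n + 1), np.arange(0, n + 1, 2), ec)    # prolong
    return jacobi(u + e, f, h, 3)

# Model problem: -u'' = pi^2 sin(pi x), whose exact solution is u = sin(pi x)
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(8):
    u = two_grid(u, f, h, n)
```

A handful of cycles reduces the algebraic error below the discretization error, which is the behavior the two-level Fourier analysis in the abstract quantifies.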
Finite element approximation to a model problem of transonic flow
International Nuclear Information System (INIS)
Tangmanee, S.
1986-12-01
A model problem of transonic flow, the Tricomi equation, posed in a domain Ω ⊂ ℝ² bounded by a rectangular-curved boundary, is formulated as a system of symmetric positive differential equations. The finite element method is then applied. When the triangulation of Ω̄ consists of quadrilaterals and the approximation space is spanned by Lagrange polynomials, error estimates are obtained. 14 refs, 1 fig
Flow-shop scheduling problem under uncertainties: Review and trends
Directory of Open Access Journals (Sweden)
Eliana María González-Neira
2017-03-01
Full Text Available Among the different tasks in production logistics, job scheduling is one of the most important at the operational decision-making level to enable organizations to achieve competitiveness. Scheduling consists of allocating limited resources to activities over time in order to achieve one or more optimization objectives. Flow-shop (FS) scheduling problems encompass the sequencing processes in environments in which the activities or operations are performed in a serial flow. This type of configuration includes assembly lines and the chemical, electronic, food, and metallurgical industries, among others. Scheduling has mostly been investigated for the deterministic case, in which all parameters are known in advance and do not vary over time. Nevertheless, in real-world situations, events are frequently subject to uncertainties that can affect the decision-making process. Thus, it is important to study scheduling and sequencing activities under uncertainty, since uncertainty can cause infeasibilities and disturbances. The purpose of this paper is to provide a general overview of the FS scheduling problem under uncertainty and its role in production logistics, and to draw up opportunities for further research. To this end, 100 papers about FS and flexible flow-shop scheduling problems published from 2001 to October 2016 were analyzed and classified. Trends in the reviewed literature are presented and finally some research opportunities in the field are proposed.
Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The removal time of a job is the duration required to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases: the first is a constructive phase in which an initial feasible solution is provided, while the second is an improvement phase. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures.
Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem
Man, J.; Li, W.; Zeng, L.; Wu, L.
2015-12-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs a Polynomial Chaos expansion to approximate the original system; in this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on unsaturated flow numerical cases. It is shown that RAPCKF outperforms EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high dimensional problems.
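For contrast with PCKF, the stochastic EnKF analysis step that the abstract refers to can be sketched with sample covariances and perturbed observations. The toy two-component state, observation operator, and noise levels below are illustrative assumptions, not the unsaturated flow model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, H, obs_std):
    """Stochastic EnKF analysis step; ensemble has shape (n_ens, n_state)."""
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)     # state anomalies
    Y = ensemble @ H.T                       # predicted observations
    Yp = Y - Y.mean(axis=0)
    C_yy = Yp.T @ Yp / (n_ens - 1) + np.eye(len(obs)) * obs_std ** 2
    C_xy = X.T @ Yp / (n_ens - 1)
    K = C_xy @ np.linalg.inv(C_yy)           # Kalman gain from sample covariances
    perturbed = obs + rng.normal(0.0, obs_std, size=(n_ens, len(obs)))
    return ensemble + (perturbed - Y) @ K.T

# Toy example: estimate a 2-d state from a noisy observation of its first component
H = np.array([[1.0, 0.0]])
truth = np.array([1.0, -0.5])
prior = rng.normal(0.0, 1.0, size=(200, 2))
obs = H @ truth + rng.normal(0.0, 0.1, size=1)
post = enkf_update(prior, obs, H, 0.1)
```

The observed component collapses toward the measurement while the unobserved one is only adjusted through its sample correlation with the observed one, which is the mechanism whose sampling error PCE-based filters try to reduce.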
Directory of Open Access Journals (Sweden)
George Cristian Gruia
2013-05-01
Full Text Available In the aviation industry, propeller motor engines have a lifecycle of several thousand hours of flight, and maintenance is an important part of that lifecycle. The present article considers a multi-resource, priority-based case scheduling problem, applied in a Romanian manufacturing company that repairs and maintains helicopter and airplane engines at a certain quality level imposed by the aviation standards. Given a reduced budget constraint, the management's goal is to maximize the utilization of their resources (financial, material, space, workers) while maintaining a previously known priority rule. An Off-Line Dual Maximum Resource Bin Packing model, based on a Mixed Integer Programming model, is thus presented. The obtained results show an increase of approx. 25% in the Just-in-Time shipping of the engines to the customers and an approx. 12.5% increase in the utilization of the working area.
International Nuclear Information System (INIS)
Papoular, R.J.; Zheludev, A.; Ressouche, E.; Schweizer, J.
1995-01-01
When density distributions in crystals are reconstructed from 3D diffraction data, a problem sometimes occurs when the spatial resolution in one given direction is very poor compared to that in the perpendicular directions. In this case, a 2D projected density is usually reconstructed. For this task, the conventional Fourier inversion method only makes use of those structure factors measured in the projection plane; all the other structure factors contribute zero to the reconstruction of a projected density. On the contrary, the maximum-entropy method uses all the 3D data to yield 3D-enhanced 2D projected density maps. It is even possible to reconstruct a projection in the extreme case when no structure factor in the plane of projection is known. In the case of poor resolution along one given direction, a Fourier inversion reconstruction gives very low quality 3D densities 'smeared' in the third dimension. The application of the maximum-entropy procedure reduces the smearing significantly, and reasonably well resolved projections along most directions can then be obtained from the MaxEnt 3D density. To illustrate these two ideas, particular examples based on real polarized neutron diffraction data sets are presented. (orig.)
Two phase flow problems in power station boilers
International Nuclear Information System (INIS)
Firman, E.C.
1974-01-01
The paper outlines some of the waterside thermal and hydrodynamic phenomena relating to design and operation of large boilers in central power stations. The associated programme of work is described with an outline of some results already obtained. By way of introduction, the principal features of conventional and nuclear drum boilers and once-through nuclear heat exchangers are described in so far as they pertain to this area of work. This is followed by discussion of the relevant physical phenomena and problems which arise. For example, the problem of steam entrainment from the drum into the tubes connecting it to the furnace wall tubes is related to its effects on circulation and possible mechanisms of tube failure. Other problems concern the transient associated with start-up or low load operation of plant. The requirement for improved mathematical representation of steady and dynamic performance is mentioned together with the corresponding need for data on heat transfer, pressure loss, hydrodynamic stability, consequences of deposits, etc. The paper concludes with reference to the work being carried out within the C.E.G.B. in relation to the above problems. The facilities employed and the specific studies being made on them are described: these range from field trials on operational boilers to small scale laboratory investigations of underlying two phase flow mechanisms and include high pressure water rigs and a freon rig for simulation studies
Barbaro, V; Grigioni, M; Daniele, C; D'Avenio, G; Boccanera, G
1997-11-01
The investigation of the flow field generated by cardiac valve prostheses is a necessary task to gain knowledge on the possible relationship between turbulence-derived stresses and the hemolytic and thrombogenic complications in patients after valve replacement. Studies of turbulent flow downstream of cardiac prostheses in the literature mainly concern large-sized prostheses, with flow regimes ranging from very low up to 6 L/min. The Food and Drug Administration draft guidance requires the study of the minimum prosthetic size at a high cardiac output to reach the maximum Reynolds number conditions. Within the framework of a national research project on the characterization of cardiovascular endoprostheses, an in-depth study of turbulence generated downstream of bileaflet cardiac valves is currently under way at the Laboratory of Biomedical Engineering of the Istituto Superiore di Sanita. Four models of 19 mm bileaflet valve prostheses were used: St Jude Medical HP, Edwards Tekna, Sorin Bicarbon, and CarboMedics. The prostheses were selected by the nominal Tissue Annulus Diameter as reported by the manufacturers, without any assessment of the valve sizing method, and were mounted in the aortic position. The aortic geometry was scaled for 19 mm prostheses using angiographic data. The turbulence-derived shear stresses were investigated very close to the valve (0.35 D0), using a two-dimensional laser Doppler anemometry system and applying Principal Stress Analysis. Results concern typical turbulence quantities during a 50 ms window at peak flow in the systolic phase. Conclusions are drawn regarding the turbulence associated with valve design features, as well as the possible damage to blood constituents.
International Nuclear Information System (INIS)
Anton, V.
1979-12-01
The collapsing formulae for the optimization problems solved by means of the Pontryagin maximum principle in nuclear reactor dynamics are presented. A comparison with the corresponding formulae of the static case is given too. (author)
2012-09-13
[Extraction fragment of a U.S. Air Force Institute of Technology dissertation, AFIT/DS/ENS/12-09, "The Average Network Flow Problem". Recoverable content: a bibliography entry, "[75] S. Melkote and M.S. Daskin. An integrated model of facility location and transportation network design. Transportation Research Part A"; a notice that the document is a work of the U.S. Government and is not subject to copyright protection in the United States; and a remark that value-focused thinking (VFT) is used sparingly across the entirety of the supply chain literature, motivating a VFT tutorial for supply chain applications.]
The Granular Blasius Problem: High inertial number granular flows
Tsang, Jonathan; Dalziel, Stuart; Vriend, Nathalie
2017-11-01
The classical Blasius problem considers the formation of a boundary layer through the change at x = 0 from a free-slip to a no-slip boundary beneath an otherwise steady uniform flow. Discrete particle model (DPM) simulations of granular gravity currents show that a similar phenomenon exists for a steady flow over a uniformly sloped surface that is smooth upstream (allowing slip) but rough downstream (imposing a no-slip condition). The boundary layer is a region of high shear rate and therefore high inertial number I; its dynamics are governed by the asymptotic behaviour of the granular rheology as I → ∞. The μ(I) rheology asserts that dμ/dI = O(1/I²) as I → ∞, but current experimental evidence is insufficient to confirm this. We show that 'generalised μ(I) rheologies', with different behaviours as I → ∞, all permit the formation of a boundary layer. We give approximate solutions for the velocity profile under each rheology. The change in boundary condition considered here mimics more complex topography in which shear stress increases in the streamwise direction (e.g. a curved slope). Such a system would be of interest in avalanche modelling. EPSRC studentship (Tsang) and Royal Society Dorothy Hodgkin Fellowship (Vriend).
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty; consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as this calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with previously elaborated heuristic algorithms based on the evolutionary and the middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
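The greedy evaluation and the lower bound mentioned above both rest on computing the makespan of a fixed permutation under a processing-time scenario; for interval data, evaluating the permutation at the interval lower and upper endpoints gives an optimistic and a pessimistic makespan. A minimal sketch with made-up intervals (not the paper's benchmark data):

```python
def makespan(perm, p):
    """Completion time of the last job on the last machine for a permutation
    flow shop; p[j][m] is the processing time of job j on machine m."""
    m = len(p[0])
    C = [0.0] * m
    for j in perm:
        C[0] += p[j][0]
        for k in range(1, m):
            C[k] = max(C[k], C[k - 1]) + p[j][k]
    return C[-1]

# Interval processing times (lo, hi) per job and machine -- illustrative values
intervals = [[(2, 4), (3, 5)], [(1, 2), (4, 6)], [(3, 3), (2, 4)]]
lo = [[a for a, _ in job] for job in intervals]
hi = [[b for _, b in job] for job in intervals]
perm = [0, 1, 2]
lower_bound = makespan(perm, lo)   # optimistic scenario (all lower endpoints)
worst_case = makespan(perm, hi)    # pessimistic scenario (all upper endpoints)
```

Regret-based evaluation then compares a permutation's makespan under a scenario against the best achievable makespan under that same scenario.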
Research on a network maximum-flow algorithm based on the cascade level graph
Institute of Scientific and Technical Information of China (English)
潘荷新; 伊崇信; 李满
2011-01-01
This paper gives an algorithm that constructs a cascade level graph of a network in order to find its maximum flow indirectly. For a given network N = (G, s, t, C) with n vertices and e arcs, the algorithm finds the maximum flow value of N, and a flow attaining that value, in O(n²) time.
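Level graphs are the standard device for fast maximum-flow computation: in Dinic's algorithm, a BFS from the source assigns levels, and a blocking flow is then pushed along level-increasing paths. The sketch below illustrates that general level-graph technique, not the specific cascade-level-graph algorithm of the cited paper.

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Dinic's algorithm: repeatedly build a level graph with BFS from s,
    then push flow along level-increasing paths with DFS until blocked."""
    graph = [[] for _ in range(n)]               # entries: [to, capacity, rev-index]
    for u, v, c in edges:
        graph[u].append([v, c, len(graph[v])])
        graph[v].append([u, 0, len(graph[u]) - 1])   # residual reverse edge

    def bfs():
        level = [-1] * n
        level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in graph[u]:
                if cap > 0 and level[v] < 0:
                    level[v] = level[u] + 1
                    q.append(v)
        return level if level[t] >= 0 else None

    def dfs(u, pushed, level, it):
        if u == t:
            return pushed
        while it[u] < len(graph[u]):
            v, cap, rev = graph[u][it[u]]
            if cap > 0 and level[v] == level[u] + 1:
                d = dfs(v, min(pushed, cap), level, it)
                if d > 0:
                    graph[u][it[u]][1] -= d
                    graph[v][rev][1] += d
                    return d
            it[u] += 1
        return 0

    flow = 0
    while (level := bfs()) is not None:          # one phase per level graph
        it = [0] * n
        while (d := dfs(s, float('inf'), level, it)) > 0:
            flow += d
    return flow

# Classic 4-node example with two capacity-1000 routes from node 0 to node 3
edges = [(0, 1, 1000), (0, 2, 1000), (1, 2, 1), (1, 3, 1000), (2, 3, 1000)]
f = max_flow(4, edges, 0, 3)
```

Each phase strictly increases the source-to-sink distance in the residual network, which bounds the number of phases and avoids the pathological augmenting-path behavior on the middle capacity-1 edge.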
A note on Fenchel cuts for the single-node flow problem
DEFF Research Database (Denmark)
Klose, Andreas
The single-node flow problem, which is also known as the single-sink fixed-charge transportation problem, consists in finding a minimum cost flow from a number of nodes to a single sink. The flow costs comprise an amount proportional to the quantity shipped as well as a fixed charge. In this note, some structural properties of Fenchel cutting planes for this problem are described. Such cuts might then be applied for solving, e.g., fixed-charge transportation problems and more general fixed-charge network flow problems.
Scalable Newton-Krylov solver for very large power flow problems
Idema, R.; Lahaye, D.J.P.; Vuik, C.; Van der Sluis, L.
2010-01-01
The power flow problem is generally solved by the Newton-Raphson method with a sparse direct solver for the linear system of equations in each iteration. While this works fine for small power flow problems, we will show that for very large problems the direct solver is very slow and we present
Directory of Open Access Journals (Sweden)
Mazhar A. Memon
2016-04-01
Full Text Available ABSTRACT Objective: To evaluate the correlation between the visual prostate score (VPSS) and maximum flow rate (Qmax) in men with lower urinary tract symptoms. Material and Methods: This is a cross-sectional study conducted at a university hospital. Sixty-seven adult male patients >50 years of age were enrolled in the study after signing an informed consent. Qmax and voided volume were recorded from the uroflowmetry graph and VPSS was assessed at the same time. The education level was assessed in various defined groups. The Pearson correlation coefficient was computed for VPSS and Qmax. Results: Mean age was 66.1±10.1 years (median 68). The mean voided volume on uroflowmetry was 268±160 mL (median 208) and the mean Qmax was 9.6±4.96 mL/s (median 9.0). The mean VPSS score was 11.4±2.72 (median 11.0). In the univariate linear regression analysis there was a strong negative Pearson correlation between VPSS and Qmax (r = −0.848, p<0.001). In the multiple linear regression analysis there was a significant correlation between VPSS and Qmax after adjusting for the effects of age, voided volume (V.V) and level of education. Multiple linear regression analysis of the independent variables showed no significant correlation between VPSS and the independent factors age (p=0.27), level of education (p=0.941) and V.V (p=0.082). Conclusion: There is a significant negative correlation between VPSS and Qmax. The VPSS can be used in lieu of the IPSS score. Men even with limited educational background can complete the VPSS without assistance.
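The Pearson correlation coefficient used in this study can be computed directly from paired samples. The VPSS/Qmax pairs below are purely illustrative values chosen to mimic a strong negative association, not the study's data.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical VPSS / Qmax (mL/s) pairs -- illustrative only, not study data
vpss = [8, 9, 11, 12, 14, 15]
qmax = [16.0, 14.5, 10.0, 9.0, 6.5, 5.0]
r = pearson_r(vpss, qmax)   # strongly negative for this sample
```

Higher symptom scores pairing with lower flow rates drives r toward −1, which is the pattern the study reports.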
Directory of Open Access Journals (Sweden)
Ali A. Assani
2016-01-01
Full Text Available We compared the spatiotemporal variability of temperatures and precipitation with that of the magnitude and timing of maximum daily spring flows in the geographically adjacent L’Assomption River (agricultural) and Matawin River (forested) watersheds during the period from 1932 to 2013. With regard to spatial variability, fall, winter, and spring temperatures as well as total precipitation are higher in the agricultural watershed than in the forested one. The magnitude of maximum daily spring flows is also higher in the first watershed than in the second, owing to substantial runoff, given that the amount of snow that gives rise to these flows is not significantly different between the two watersheds. These flows occur earlier in the season in the agricultural watershed because of the relatively high temperatures. With regard to temporal variability, minimum temperatures increased over time in both watersheds. Maximum temperatures in the fall increased only in the agricultural watershed. The amount of spring rain increased over time in both watersheds, whereas total precipitation increased significantly in the agricultural watershed only. However, the amount of snow decreased in the forested watershed. The magnitude of maximum daily spring flows increased over time in the forested watershed.
Directory of Open Access Journals (Sweden)
Xin Dai
2017-10-01
Full Text Available Maximum power transfer tracking (MPTT) aims to track the maximum power point during operation of wireless power transfer (WPT) systems. Traditionally, MPTT is achieved by impedance matching at the secondary side when the load resistance varies. However, due to the loosely coupled characteristic of WPT, variation of the coupling coefficient will affect the performance of impedance matching, and MPTT will fail accordingly. This paper presents an identification method for the coupling coefficient for MPTT in WPT systems. In particular, the two-value ambiguity arising during the identification is considered. The identification approach is easy to implement because it does not require an additional circuit. Furthermore, MPTT is easy to realize because only two easily measured DC parameters are needed. The detailed identification procedure corresponding to the two-value issue and the maximum power transfer tracking process are presented, and both simulation analysis and experimental results verified the identification method and MPTT.
Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz
2018-04-01
Probabilistic properties of the dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions: a lowland, a highland and a mountainous catchment. Circular measures of location and dispersion were used on the long-term samples of dates of maxima. A mixture of von Mises distributions was assumed as the theoretical distribution function of the date of the winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion and the parameters were estimated by means of the Maximum Likelihood method. The goodness of fit was assessed using both the correlation between quantiles and versions of Kuiper's and Watson's tests. Results show that the number of components varied between catchments and differed between seasonal and annual maxima. Differences between catchments in circular characteristics were explained by climatic factors such as precipitation and temperature. Further studies may include circular grouping of catchments based on similarity between distribution functions, and the linkage between dates of maximum precipitation and maximum flow.
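The circular measures of location and dispersion mentioned above reduce, in the simplest case, to the circular mean date and the mean resultant length R, obtained by mapping each day of the year onto the unit circle. A minimal sketch on a made-up sample of spring maximum dates:

```python
import math

def circular_stats(days, period=365.25):
    """Circular mean date and mean resultant length R for day-of-year data.
    R near 1 means the dates cluster tightly; R near 0 means they are spread."""
    angles = [2 * math.pi * d / period for d in days]
    C = sum(math.cos(a) for a in angles) / len(angles)
    S = sum(math.sin(a) for a in angles) / len(angles)
    R = math.hypot(C, S)
    mean_angle = math.atan2(S, C) % (2 * math.pi)
    mean_day = mean_angle * period / (2 * math.pi)
    return mean_day, R

# Hypothetical spring maxima clustered around day ~100 of the year
days = [92, 95, 101, 104, 110, 98]
mean_day, R = circular_stats(days)
```

Unlike an arithmetic mean of day numbers, this treatment remains correct when dates straddle the year boundary (e.g. late December and early January maxima), which is precisely why circular statistics are used for flow timing.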
Discrete Maximum Principle for a 1D Problem with Piecewise-Constant Coefficients Solved by hp-FEM
Czech Academy of Sciences Publication Activity Database
Vejchodský, Tomáš; Šolín, Pavel
2007-01-01
Roč. 15, č. 3 (2007), s. 233-243 ISSN 1570-2820 R&D Projects: GA ČR GP201/04/P021; GA ČR GA102/05/0629 Institutional research plan: CEZ:AV0Z10190503; CEZ:AV0Z20570509 Keywords : discrete maximum principle * hp-FEM * Poisson equation Subject RIV: BA - General Mathematics
Problems of mixed convection flow regime map in a vertical cylinder
International Nuclear Information System (INIS)
Kang, Gyeong Uk; Chung, Bum Jin
2012-01-01
One of the technical issues raised by the development of the VHTR is mixed convection, the heat transfer regime that occurs when the driving forces of both forced and natural convection are of comparable orders of magnitude. In vertical internal flows, the buoyancy force acts upward only, but forced flows can move either upward or downward. Thus, there are two types of mixed convection flows, depending on the direction of the forced flow. When the directions of the forced flow and the buoyancy are the same, the flow is a buoyancy-aided flow; when they are opposite, the flow is a buoyancy-opposed flow. In laminar flows, buoyancy-aided flow shows enhanced heat transfer compared to pure forced convection, and buoyancy-opposed flow shows impaired heat transfer due to the flow velocity affected by the buoyancy forces. In turbulent flows, however, buoyancy-opposed flow shows enhanced heat transfer due to increased turbulence production, while buoyancy-aided flow shows impaired heat transfer at low buoyancy forces; as the buoyancy increases, the heat transfer recovers, and at further increases of the buoyancy forces the heat transfer is enhanced. It is of primary interest to classify which convection regime is dominant. The method most used to distinguish between forced, mixed and natural convection has been to refer to the classical flow regime map suggested by Metais and Eckert. During the course of fundamental literature studies on this topic, it was found that there are some problems with the flow regime map in a vertical cylinder. This paper discusses problems identified through reviewing the papers on which the classical flow regime map is based. We have tried to reproduce the flow regime map independently using data obtained from the literature and compared it with the classical flow regime map; finally, the problems on this topic are discussed
International Nuclear Information System (INIS)
Hult, J; Mayer, S
2011-01-01
A general design of a laser light sheet module with adjustable focus is presented, where the maximum sheet width is preserved over a fixed region. In contrast, conventional focusing designs are associated with a variation in maximum sheet width with focal position. A four-lens design is proposed here, where the first three lenses are employed for focusing and the last for sheet expansion. A maximum sheet width of 1100 µm was maintained over a 50 mm long distance, for focal distances ranging from 75 to 500 mm, when a 532 nm laser beam with a beam quality factor M² = 29 was used for illumination
International Nuclear Information System (INIS)
Rao, D.V.; Darby, J.L.; Ross, S.B.; Clark, R.A.
1990-01-01
The High Flux Beam Reactor (HFBR) operated by Brookhaven National Laboratory (BNL) employs forced downflow for heat removal during normal operation. In the event of total loss of forced flow, the reactor will shut down and the flow reversal valves open. When the downward core flow becomes sufficiently small, the opposing thermal buoyancy induces flow reversal, leading to decay heat removal by natural convection. There is some uncertainty as to whether the natural circulation is adequate for decay heat removal after 60 MW operation. BNL staff carried out a series of calculations to establish the adequacy of flow reversal to remove decay heat; their calculations are based on a natural convective CHF model. The primary purpose of the present calculations is to review the accuracy and applicability of Fauske's CHF model for the HFBR, and the assumptions and methodology employed by BNL staff to determine the heat removal limit in the HFBR during a flow reversal and natural convection situation
Solving the Liner Shipping Fleet Repositioning Problem with Cargo Flows
DEFF Research Database (Denmark)
Tierney, Kevin; Askelsdottir, Björg; Jensen, Rune Møller
2015-01-01
We solve a central problem in the liner shipping industry called the liner shipping fleet repositioning problem (LSFRP). The LSFRP poses a large financial burden on liner shipping firms. During repositioning, vessels are moved between routes in a liner shipping network. Liner carriers wish...
The Liner Shipping Fleet Repositioning Problem with Cargo Flows
DEFF Research Database (Denmark)
Tierney, Kevin; Jensen, Rune Møller
2012-01-01
We solve an important problem for the liner shipping industry called the Liner Shipping Fleet Repositioning Problem (LSFRP). The LSFRP poses a large financial burden on liner shipping firms. During repositioning, vessels are moved between services in a liner shipping network. Shippers wish...
Energy Technology Data Exchange (ETDEWEB)
Lorber, A.A.; Carey, G.F.; Bova, S.W.; Harle, C.H. [Univ. of Texas, Austin, TX (United States)
1996-12-31
The connection between the solution of linear systems of equations by iterative methods and explicit time stepping techniques is used to accelerate to steady state the solution of ODE systems arising from discretized PDEs which may involve either physical or artificial transient terms. Specifically, a class of Runge-Kutta (RK) time integration schemes with extended stability domains has been used to develop recursion formulas which lead to accelerated iterative performance. The coefficients for the RK schemes are chosen based on the theory of Chebyshev iteration polynomials in conjunction with a local linear stability analysis. We refer to these schemes as Chebyshev Parameterized Runge-Kutta (CPRK) methods. CPRK methods of one to four stages are derived as functions of the parameters which describe an ellipse ε which the stability domain of the methods is known to contain. Of particular interest are two-stage, first-order CPRK and four-stage, first-order methods. It is found that the former method can be identified with any two-stage RK method through the correct choice of parameters. The latter method is found to have a wide range of stability domains, with a maximum extension of 32 along the real axis. Recursion performance results are presented below for a model linear convection-diffusion problem as well as non-linear fluid flow problems discretized by both finite-difference and finite-element methods.
On Howard's conjecture in heterogeneous shear flow problem
Indian Academy of Sciences (India)
Department of Mathematics, H.P. University, Shimla 171 005, India; Sidharth Govt. Degree College, Nadaun, Dist. Hamirpur 177 033 … in proving it in the case of the Garcia-type [3] flows wherein the basic velocity distribution has a point of …
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many parameter estimation methods; for example, the Kalman filter provides real-time calibration of parameters from measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filter methods are limited to linearity. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which can take the uncertainty of data into account during parameter estimation. With it, parameters can be estimated from both hard data (certain) and soft data (uncertain) at the same time. In this study, Python and QGIS were used with the groundwater model (MODFLOW), and both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering were implemented in Python for parameter estimation. The proposed method retains the conventional filtering framework while also accounting for the uncertainty of the data. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model architecture; virtual observation wells were used to observe the groundwater model periodically during the simulation. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimates.
Discrete bat algorithm for optimal problem of permutation flow shop scheduling.
Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang
2014-01-01
A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many sub-scheduling problems, and the NEH heuristic is then introduced to solve the sub-scheduling problems. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the present discrete bat algorithm for the optimal permutation flow shop scheduling problem.
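The NEH heuristic named above can be sketched independently of the bat algorithm: jobs are ordered by decreasing total processing time and inserted one by one at whichever position of the partial sequence minimizes the partial makespan. The 3-job, 2-machine instance below is illustrative, not a benchmark from the paper.

```python
def makespan(perm, p):
    """Makespan of a permutation flow shop; p[j][m] is job j's time on machine m."""
    m = len(p[0])
    C = [0] * m
    for j in perm:
        C[0] += p[j][0]
        for k in range(1, m):
            C[k] = max(C[k], C[k - 1]) + p[j][k]
    return C[-1]

def neh(p):
    """NEH: sort jobs by decreasing total work, then insert each job at the
    position of the partial sequence that minimizes the partial makespan."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in jobs:
        best = min((makespan(seq[:i] + [j] + seq[i:], p), i)
                   for i in range(len(seq) + 1))
        seq.insert(best[1], j)
    return seq, makespan(seq, p)

# Tiny illustrative instance: 3 jobs on 2 machines
p = [[3, 4], [2, 5], [6, 1]]
seq, cmax = neh(p)
```

For two machines this small instance can be checked against Johnson's rule, which yields the same sequence; on larger instances NEH is only a heuristic, which is why metaheuristics such as the DBA wrap around it.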
TOUGH Simulations of the Updegraff's Set of Fluid and Heat Flow Problems
Energy Technology Data Exchange (ETDEWEB)
Moridis, G.J.; Pruess (editor), K.
1992-11-01
The TOUGH code [Pruess, 1987] for two-phase flow of water, air, and heat in permeable media has been exercised on a suite of test problems originally selected and simulated by C. D. Updegraff [1989]. These include five 'verification' problems for which analytical or numerical solutions are available, and three 'validation' problems that model laboratory fluid and heat flow experiments. All problems could be run without any code modifications. Good and efficient numerical performance, as well as accurate results, were obtained throughout. Additional code verification and validation problems from the literature are briefly summarized, and suggestions are given for proper applications of TOUGH and related codes.
An integrated approach to combating flow assurance problems
Energy Technology Data Exchange (ETDEWEB)
Abney, Laurence; Browne, Alan [Halliburton, Houston, TX (United States)
2005-07-01
Any upset to the internal pipe surface of a pipeline can significantly impact both pipeline through-put and the energy required to maintain design flow rates, with a significant negative impact on operating expense (Opex). Effective flow maintenance helps ensure that Opex remains within budget, that processing-equipment life is extended, and that excessive energy use is minimized. A number of events can result in debris generation and deposition in a pipeline; corrosion, hydrate formation, paraffin deposition, asphaltene deposition, development of 'black powder', and scale formation are the most common sources of pipeline debris. Generally, a combination of pigging and chemical treatments is used to remove debris, although the two techniques are commonly applied in isolation. Incorporation of specialized fluids with enhanced solids-transport capabilities, specialized dispersants, or specialized surfactants can improve the success of routine pigging operations. An array of alternative and often complementary remediation technologies can be used to remove deposits, or even full restrictions, from pipelines; these include the application of acids, specialized chemical products, and intrusive intervention techniques. This paper presents a review of methods of integrating these existing technologies. (author)
Literature Review on the Hybrid Flow Shop Scheduling Problem with Unrelated Parallel Machines
Directory of Open Access Journals (Sweden)
Eliana Marcela Peña Tibaduiza
2017-01-01
Full Text Available Context: The hybrid flow shop problem with unrelated parallel machines has been less studied in academia than the hybrid flow shop with identical processors; consequently, there are few reports on industrial applications of this problem. Method: A literature review of the state of the art on the flow-shop scheduling problem was conducted by collecting and analyzing academic papers from several scientific databases. To this end, a search query was constructed using keywords defining the problem, checking for the inclusion of unrelated parallel machines in the definition; as a result, 50 papers were selected for this study. Results: A classification of the problem according to the characteristics of the production system is presented, along with the solution methods, constraints, and objective functions commonly used. Conclusions: An increasing trend is observed in studies of flow shops with multiple stages, but few are based on industrial case studies.
Application of meshless EFG method in fluid flow problems
Indian Academy of Sciences (India)
Meshless method; element-free Galerkin method; steady state analysis; transient ... fluid flow problems using the meshless element-free Galerkin method. The unknown function of velocity u(x) is approximated by moving least square ...
Cyclic flow shop scheduling problem with two-machine cells
Directory of Open Access Journals (Sweden)
Bożejko Wojciech
2017-06-01
Full Text Available In this paper a variant of cyclic production with setups and two-machine cells is considered. One of the stages of the problem solving consists of assigning each operation to the machine on which it will be carried out. The total number of such assignments is exponential. We propose a polynomial-time algorithm finding the optimal assignment of operations to machines.
Multiphase flow problems on thermofluid safety for fusion reactors
International Nuclear Information System (INIS)
Takase, Kazuyuki
2003-01-01
As the thermofluid safety study for the International Thermonuclear Experimental Reactor (ITER), thermal-hydraulic characteristics of Tokamak fusion reactors under transient events were investigated experimentally and analyzed numerically. As severe transient events an ingress-of-coolant event (ICE) and a loss-of-vacuum event (LOVA) were considered. An integrated ICE test facility was constructed to demonstrate that the ITER safety design approach and parameters are adequate. Water-vapor two-phase flow behavior and performance of the ITER pressure suppression system during the ICE were clarified by the integrated ICE experiments. The TRAC was modified to specify the two-phase flow behavior under the ICE. The ICE experimental results were verified using the modified TRAC code. On the other hand, activated dust mobilization and air ingress characteristics in the ITER vacuum vessel during the LOVA were analyzed using a newly developed analysis code. Some physical models on the motion of dust were considered. The rate of dust released from the vacuum vessel through breaches to the outside was characterized quantitatively. The predicted average pressures in the vacuum vessel during the LOVA were in good agreement with the experimental results. Moreover, direct-contact condensation characteristics between water and vapor inside the ITER suppression tank were observed visually and simulated by the direct two-phase flow analysis. Furthermore, chemical reaction characteristics between vapor and ITER plasma-facing component materials were predicted numerically in order to obtain qualitative estimation on generation of inflammable gases such as hydrogen and methane. The experimental and numerical results of the present studies were reflected in the ITER thermofluid safety design. (author)
Design solutions to interface flow problems. Figures - Tables - Appendices
International Nuclear Information System (INIS)
1986-01-01
All published proposals for the deep-level burial of radioactive waste recognise that the access shafts, tunnels and boreholes must be sealed, and that the sealing of these openings plays an integral role in the overall isolation of the waste. Previous studies have identified the interface between the host ground formation and the various sealing materials as a potential defect in the overall quality of the waste isolation. The significance of groundwater flow at and near the interface has been assessed for representative conditions in generic repository materials. A range of design options to minimise the significance of flow in the interface zone has been proposed, and the most practical of these options have been selected for quantitative analysis. It has been found that isolated high-impermeability collars are of limited value unless a highly effective method of minimising ground disturbance during excavation can be developed. It has also been found that control of radionuclide migration by sorptive processes provides an attractive option. The effect of various geometrical arrangements of sorptive materials has been investigated. Consideration has also been given to the particular conditions in the near field, to the behaviour of weak plastic clay host formations and to the mechanical interaction between the backfill material and the host formation.
TWOPOOL strategy and the combined compressible/incompressible flow problem
International Nuclear Information System (INIS)
Sienicki, J.J.; Abramson, P.B.
1979-01-01
Most recent numerical modeling of two-phase flow involves an implicit determination of a pressure field, upon which computational efficiency strongly depends. Cell-by-cell schemes (which treat the pressures in adjacent cells as known source terms) offer fast running times, permit the use of large time steps limited only by a Courant condition based on material velocities, and favor enhanced implicit coupling between the thermodynamic and hydrodynamic variables within individual cells. However, strong implicit coupling between pressures in adjacent cells (as obtained with elimination schemes) is necessary in pure single-phase liquid regions for the calculation of combined two-phase (compressible)/single-phase (incompressible) flows. The TWOPOOL strategy, which splits the determination of the pressure field between the single-phase liquid cells, where elimination is used, and the two-phase cells, where a cell-by-cell scheme is used, constitutes the fastest-running strategy that permits large time steps limited only by a Courant condition based on material velocities.
Numerical analysis of Sakiadis flow problem considering Maxwell nanofluid
Directory of Open Access Journals (Sweden)
Mustafa Meraj
2017-01-01
Full Text Available This article investigates the flow of a Maxwell nanofluid over a moving plate in a calm fluid. Novel aspects of Brownian motion and thermophoresis are taken into consideration. A revised model for passive control of nanoparticle volume fraction at the plate is used in this study. The formulated differential system is solved numerically by employing a shooting approach together with a fourth-fifth-order Runge-Kutta integration procedure and Newton's method. The solutions are greatly influenced by the variation of the embedded parameters, which include the local Deborah number, the Brownian motion parameter, the thermophoresis parameter, the Prandtl number, and the Schmidt number. We found that the variation in velocity distribution with an increase in local Deborah number is non-monotonic. Moreover, the reduced Nusselt number has a linear and direct relationship with the local Deborah number.
LDV-measurements in pipe-flow problems and experiences
International Nuclear Information System (INIS)
Els, H.; Rouve, G.
1985-01-01
Measurements with the LDV technique in circular cross-sections cause optical problems. When an index-matching fluid is not used, the refractive-index differences between air, wall, and fluid cause poor definition: horizontal and vertical beams do not intersect at the same point. This makes two-component measurements impossible and gives very poor signal quality even for forward-scatter one-component measurements. Besides index matching, supplementary lenses can solve this problem. Lenses that perfectly adjust the difference between horizontal and vertical beams are difficult to calculate and, even more, to manufacture in an averagely equipped workshop. IWW developed a number of single-curvature lenses which do not give perfect accordance of the beams, but which increase the signal quality distinctly and thus noticeably decrease the time needed to measure a whole grid in the circular cross-section. Besides that, they are easy to produce. These lenses are described and the needed correction formulae are given in this paper. Other correction techniques are discussed, and some measurement results with the equipment used are shown.
Element Free Lattice Boltzmann Method for Fluid-Flow Problems
International Nuclear Information System (INIS)
Jo, Jong Chull; Roh, Kyung Wan; Yune, Young Gill; Kim, Hho Jhung; Kwon, Young Kwon
2007-01-01
The Lattice Boltzmann Method (LBM) has been developed for application to thermal-fluid problems. Most of those studies considered a regular lattice or mesh, such as square and cubic grids. In order to apply the LBM to more practical cases, it is necessary to be able to solve complex or irregularly shaped problem domains. Some techniques have been based on the finite element method. Generally, the finite element method is very powerful for solving two- or three-dimensional complex or irregular domains using the iso-parametric element formulation, which is based on a mathematical mapping from a regularly shaped element in an imaginary domain to a more general, irregularly shaped element in the physical domain. In addition, the element-free technique is also quite useful for analyzing complex domains, because there is no need to divide the domain by a compatible finite element mesh. This paper presents new finite element and element-free formulations for the lattice Boltzmann equation using the general weighted residual technique. A series of validation examples is then presented.
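For reference, the standard regular-lattice scheme that the element-free formulation above generalizes is easy to state: a D2Q9 BGK collision followed by streaming. The sketch below uses a periodic grid and checks mass conservation; the grid size, relaxation time `tau`, and the perturbation are illustrative choices, not from the paper.

```python
import numpy as np

# Standard D2Q9 constants: weights and discrete velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])

def equilibrium(rho, ux, uy):
    cu = cx[:, None, None] * ux + cy[:, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.8):
    rho = f.sum(axis=0)
    ux = (cx[:, None, None] * f).sum(axis=0) / rho
    uy = (cy[:, None, None] * f).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau      # BGK collision
    for i in range(9):                                # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], cx[i], axis=0), cy[i], axis=1)
    return f

nx = ny = 16
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
f += 1e-3 * np.random.default_rng(0).random(f.shape)  # small perturbation
mass0 = f.sum()
for _ in range(50):
    f = step(f)
print(abs(f.sum() - mass0))  # total mass is conserved to round-off
```

The irregular-domain formulations in the paper replace the uniform streaming step on this regular lattice with finite element or element-free interpolation.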
Grid dependency of wall heat transfer for simulation of natural convection flow problems
Loomans, M.G.L.C.; Seppänen, O.; Säteri, J.
2007-01-01
In the indoor environment natural convection is a well known air flow phenomenon. In numerical simulations applying the CFD technique it is also known as a flow problem that is difficult to solve. Alternatives are available to overcome the limitations of the default approach (standard k-e model with
Topology optimization of unsteady flow problems using the lattice Boltzmann method
DEFF Research Database (Denmark)
Nørgaard, Sebastian Arlund; Sigmund, Ole; Lazarov, Boyan Stefanov
2016-01-01
This article demonstrates and discusses topology optimization for unsteady incompressible fluid flows. The fluid flows are simulated using the lattice Boltzmann method, and a partial bounceback model is implemented to model the transition between fluid and solid phases in the optimization problems...
On non-permutation solutions to some two machine flow shop scheduling problems
V. Strusevich (Vitaly); P.J. Zwaneveld (Peter)
1994-01-01
In this paper, we study two versions of the two machine flow shop scheduling problem, where schedule length is to be minimized. First, we consider the two machine flow shop with setup, processing, and removal times separated. It is shown that an optimal solution need not be a permutation
On the Eikonal equation in the pedestrian flow problem
Felcman, J.; Kubera, P.
2017-07-01
We consider the Pedestrian Flow Equations (PFEs) as the coupled system formed by the Eikonal equation and a first-order hyperbolic system with a source term. The hyperbolic system consists of the continuity equation and the momentum equation of fluid dynamics. Specifying the social and pressure forces in the momentum equation, we arrive at the assumption that each pedestrian tries to move in a desired direction (e.g. towards the exit in a panic situation) with a desired velocity, where the velocity and direction of movement depend on the density of pedestrians in the neighborhood. In [1] we used the model where the desired direction of movement is given by the solution of the Eikonal equation (more precisely, by the gradient of the solution). Here we avoid the solution of the Eikonal equation, which is the novelty of the paper. Based on the fact that the solution of the Eikonal equation has the meaning of the shortest time to reach the exit, we define such a function explicitly in the framework of Dijkstra's algorithm for the shortest path in a graph. This is done at the discrete level of the solution. As the graph we use the underlying triangulation, where the norm of each edge is density-dependent and has the dimension of time. Numerical examples of the solution of the PFEs with and without the solution of the Eikonal equation are presented.
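The discrete shortest-time-to-exit function described above can be computed by a multi-source Dijkstra run from the exit nodes. In the sketch below the graph and edge travel times are fixed illustrative values; in the paper's setting the edge weights would be recomputed from the local pedestrian density.

```python
import heapq

def time_to_exit(adj, exits):
    """Multi-source Dijkstra: shortest travel time from every node to an exit.
    adj[v] is a list of (neighbour, edge_time) pairs."""
    dist = {v: float('inf') for v in adj}
    for e in exits:
        dist[e] = 0.0
    pq = [(0.0, e) for e in exits]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue  # stale queue entry
        for u, t in adj[v]:
            if d + t < dist[u]:
                dist[u] = d + t
                heapq.heappush(pq, (d + t, u))
    return dist

# Small illustrative graph; node 3 is the exit.
adj = {0: [(1, 1.0), (2, 4.0)],
       1: [(0, 1.0), (2, 1.5), (3, 5.0)],
       2: [(0, 4.0), (1, 1.5), (3, 1.0)],
       3: [(1, 5.0), (2, 1.0)]}
times = time_to_exit(adj, [3])
print(times)
```

The desired direction of motion at a node then points along the edge that realizes the minimum, playing the role of the gradient of the Eikonal solution.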
Nakatani, S; Garcia, M J; Firstenberg, M S; Rodriguez, L; Grimm, R A; Greenberg, N L; McCarthy, P M; Vandervoort, P M; Thomas, J D
1999-09-01
The study assessed whether hemodynamic parameters of left atrial (LA) systolic function could be estimated noninvasively using Doppler echocardiography. Left atrial systolic function is an important aspect of cardiac function. Doppler echocardiography can measure changes in LA volume, but has not been shown to relate to hemodynamic parameters such as the maximal value of the first derivative of the pressure (LA dP/dt(max)). Eighteen patients in sinus rhythm were studied immediately before and after open heart surgery using simultaneous LA pressure measurements and intraoperative transesophageal echocardiography. Left atrial pressure was measured with a micromanometer catheter, and LA dP/dt(max) during atrial contraction was obtained. Transmitral and pulmonary venous flow were recorded by pulsed Doppler echocardiography. Peak velocity, mean acceleration and deceleration, and the time-velocity integral of each flow during atrial contraction were measured. The initial eight patients served as the study group, used to derive a multilinear regression equation estimating LA dP/dt(max) from Doppler parameters, and the latter 10 patients served as the test group to validate the equation. A previously validated numeric model was used to confirm these results. In the study group, LA dP/dt(max) showed a linear relation with LA pressure before atrial contraction (r = 0.80). Among transmitral flow parameters, mean acceleration showed the strongest correlation with LA dP/dt(max) (r = 0.78), predicting LA dP/dt(max) with an r2 > 0.30. By stepwise and multiple linear regression analysis, LA dP/dt(max) was best described as follows: LA dP/dt(max) = 0.1 M-AC +/- 1.8 P-V - 4.1 (r = 0.88). Measured LA dP/dt(max) correlated well with LA dP/dt(max) predicted by the above equation (r = 0.90 in the test group; r = 0.94 in the numeric model). A combination of transmitral and pulmonary venous flow parameters can thus provide a hemodynamic assessment of LA systolic function.
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
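The scoring subproblem underlying the MP criterion, finding the minimum number of state changes for one character on a fixed tree (small parsimony), is solvable exactly by Fitch's algorithm. The sketch below handles a single character on a rooted binary tree; the tree and leaf states are illustrative, not from the paper.

```python
def fitch(tree, leaf_state):
    """Small-parsimony cost of one character on a rooted binary tree.
    tree: nested (left, right) tuples with leaf names (strings) at the tips."""
    def rec(node):
        if isinstance(node, str):          # leaf: its observed state
            return {leaf_state[node]}, 0
        (sl, cl), (sr, cr) = rec(node[0]), rec(node[1])
        inter = sl & sr
        if inter:                          # children agree: no extra mutation
            return inter, cl + cr
        return sl | sr, cl + cr + 1        # disagreement: one mutation charged
    return rec(tree)[1]

tree = ((('a', 'b'), 'c'), ('d', 'e'))
leaf_state = {'a': 'A', 'b': 'A', 'c': 'G', 'd': 'G', 'e': 'G'}
score = fitch(tree, leaf_state)
print(score)
```

The NP-hardness discussed in the abstract arises from searching over tree topologies (and, for MP, over internal labelings across all characters), not from this per-tree scoring step.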
From "E-flows" to "Sed-flows": Managing the Problem of Sediment in High Altitude Hydropower Systems
Gabbud, C.; Lane, S. N.
2017-12-01
The connections between stream hydraulics, geomorphology and ecosystems in mountain rivers have been substantially perturbed by humans, for example through flow regulation related to hydropower activities. It is well known that the ecosystem impacts downstream of hydropower dams may be managed by a properly designed compensation release or environmental flows ("e-flows"), and such flows may also include sediment considerations (e.g. to break up bed armor). However, there has been much less attention given to the ecosystem impacts of water intakes (where water is extracted and transferred for storage and/or power production), even though in many mountain systems such intakes may be prevalent. Flow intakes tend to be smaller than dams and because they fill quickly in the presence of sediment delivery, they often need to be flushed, many times within a day in Alpine glaciated catchments with high sediment yields. The associated short duration "flood" flow is characterised by very high sediment concentrations, which may drastically modify downstream habitat, both during the floods but also due to subsequent accumulation of "legacy" sediment. The impacts on flora and fauna of these systems have not been well studied. In addition, there are no guidelines established that might allow the design of "e-flows" that also treat this sediment problem, something we call "sed-flows". Through an Alpine field example, we quantify the hydrological, geomorphological, and ecosystem impacts of Alpine water transfer systems. The high sediment concentrations of these flushing flows lead to very high rates of channel disturbance downstream, superimposed upon long-term and progressive bed sediment accumulation. Monthly macroinvertebrate surveys over almost a two-year period showed that reductions in the flushing rate reduced rates of disturbance substantially, and led to rapid macroinvertebrate recovery, even in the seasons (autumn and winter) when biological activity should be reduced
Directory of Open Access Journals (Sweden)
Antonio Costa
2014-07-01
Full Text Available Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure integrating features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of completion times of all jobs. A well-known benchmark of test cases, entailing problems with two, three, and six machines, is employed both for tuning the relevant parameters of the developed procedure and for assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.
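The objective minimized above, total flow time under sequence-dependent setups, can be evaluated for a fixed job sequence as follows. The indexing convention, the assumption that setups are anticipatory (they may run while the job is still on an upstream machine), and the data are mine for illustration; the paper's group structure is not modeled here.

```python
def total_flow_time(seq, proc, setup):
    """Sum of completion times of seq in a flow shop with sequence-dependent
    setups. proc[j][k]: time of job j on machine k. setup[k][i][j]: setup on
    machine k when job j follows job i; row i == len(proc) is the initial
    setup (no predecessor). Setups are assumed anticipatory."""
    m, n = len(proc[0]), len(proc)
    c = [0.0] * m
    prev, total = n, 0.0
    for j in seq:
        for k in range(m):
            ready = c[k - 1] if k else 0.0             # arrival from upstream
            c[k] = max(c[k] + setup[k][prev][j], ready) + proc[j][k]
        total += c[-1]
        prev = j
    return total

proc = [[2, 3], [4, 1], [3, 2]]                        # 3 jobs, 2 machines
unit = [[1, 1, 1], [1, 1, 1], [1, 1, 1], [0, 0, 0]]    # unit setups, zero initial
setup = [unit, [row[:] for row in unit]]
tft = total_flow_time([0, 1, 2], proc, setup)
print(tft)
```

A metaheuristic such as the GA/BRS hybrid would call an evaluator like this once per candidate sequence, so its efficiency directly drives overall running time.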
International Nuclear Information System (INIS)
Karpp, R.R.
1984-01-01
The particular solution of the problem of the symmetric impact of two compressible fluid streams is derived. The plane two-dimensional flow is assumed to be steady, and the inviscid compressible fluid is of the Chaplygin (tangent gas) type. The equations governing this flow are transformed to the hodograph plane, where an exact, closed-form solution for the stream function is obtained. The distribution of fluid properties along the plane of symmetry and the shape of free-surface streamlines are determined by transformation back to the physical plane. The problem of a compressible fluid jet penetrating an infinite target of similar material is also solved by considering a limiting case of this solution. Differences between compressible and incompressible flows of the type considered are illustrated
Mixed hybrid finite elements and streamline computation for the potential flow problem
Kaasschieter, E.F.; Huijben, A.J.M.
1992-01-01
An important class of problems in mathematical physics involves equations of the form -∇ · (A∇φ) = f. In a variety of problems it is desirable to obtain an accurate approximation of the flow quantity u = -A∇φ. Such an accurate approximation can be determined by the mixed finite element method. In
Existence and uniqueness of solution for a model problem of transonic flow
International Nuclear Information System (INIS)
Tangmanee, S.
1985-11-01
A model problem of transonic flow, the Tricomi equation, bounded by a rectangular-curved boundary is studied. We transform the model problem into a symmetric positive system, and an admissible boundary condition is posed. We show that, under some conditions, the existence and uniqueness of the solution are guaranteed. (author)
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Blaheta, Radim; Byczanski, Petr; Karátson, J.; Ahmad, B.
2015-01-01
Roč. 280, č. 280 (2015), s. 141-157 ISSN 0377-0427 R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords: preconditioners * heterogeneous coefficients * regularized saddle point * inner-outer iterations * Darcy flow Subject RIV: BA - General Mathematics Impact factor: 1.328, year: 2015 http://www.sciencedirect.com/science/article/pii/S0377042714005238
A review of scheduling problem and resolution methods in flexible flow shop
Directory of Open Access Journals (Sweden)
Tian-Soon Lee
2019-01-01
Full Text Available The flexible flow shop (FFS) is defined as a multi-stage flow shop with multiple parallel machines. The FFS scheduling problem is a complex combinatorial problem that has been intensively studied in many real-world industries. This review paper gives a comprehensive review of the FFS scheduling problem, guiding the reader through the different environmental assumptions, system constraints, and objective functions relevant to future research. The published papers are classified into two categories. The first covers FFS system characteristics and constraints, including the problem variants and limitations defined by different studies. The second elaborates the scheduling performance measures, categorized into time-related, job-related, and multi-objective criteria. In addition, the resolution approaches that have been used to solve FFS scheduling problems are discussed. The paper thus gives a comprehensive guide for future research work on the FFS scheduling problem.
Marriage in Honey Bees Optimization Algorithm for Flow-shop Problems
Directory of Open Access Journals (Sweden)
Pedro PALOMINOS
2012-01-01
Full Text Available The objective of this work is to make a comparative study of the Marriage in Honey Bees Optimization (MBO) metaheuristic for flow-shop scheduling problems. This paper is focused on the design possibilities of the mating flight space shared by queens and drones. The proposed algorithm uses a 2-dimensional torus as an explicit mating space instead of the simulated annealing one in the original MBO. After testing different alternatives with benchmark datasets, the results show that the modeled and implemented metaheuristic is effective for solving flow-shop type problems, providing a new approach to solve other NP-hard problems.
Heuristic methods for the flow shop scheduling problem with separated setup times
Directory of Open Access Journals (Sweden)
Marcelo Seido Nagano
2012-06-01
Full Text Available This paper deals with the permutation flow shop scheduling problem with separated machine setup times. As a result of an investigation of the problem characteristics, four heuristic methods are proposed that construct the sequencing solution via an analogy with the asymmetric traveling salesman problem, with the objective of minimizing makespan. Experimental results show that one of the new heuristic methods provides high-quality solutions in comparison with the established methods from the literature.
Buddala, Raviteja; Mahapatra, Siba Sankar
2017-11-01
Flexible flow shop (or a hybrid flow shop) scheduling problem is an extension of classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages) with each stage having only one machine. If any stage contains more than one machine for providing alternate processing facility, then the problem becomes a flexible flow shop problem (FFSP). FFSP which contains all the complexities involved in a simple flow shop and parallel machine scheduling problems is a well-known NP-hard (Non-deterministic polynomial time) problem. Owing to high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving FFSP is rather tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and JAYA algorithm are chosen for the study because these are not only recent meta-heuristics but they do not require tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after few iterations and get trapped at the local optima. To alleviate such drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, mutation strategy (inspired from genetic algorithm) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
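Meta-heuristics such as TLBO or JAYA typically evaluate a candidate job permutation for the FFSP by decoding it into a schedule through greedy machine assignment. The sketch below assigns each job at each stage to the earliest-available machine (a standard decoding choice, not necessarily the exact one used in the paper); the processing times and stage configuration are illustrative.

```python
def ffs_makespan(seq, proc, machines):
    """proc[j][s]: time of job j at stage s; machines[s]: machines at stage s.
    Parallel machines at a stage are identical, as in the classical FFSP."""
    free = [[0.0] * m for m in machines]   # next-free time of each machine
    done = {j: 0.0 for j in seq}           # completion time at previous stage
    for s in range(len(machines)):
        for j in seq:
            # earliest-available machine at this stage
            i = min(range(machines[s]), key=lambda x: free[s][x])
            start = max(free[s][i], done[j])
            done[j] = start + proc[j][s]
            free[s][i] = done[j]
    return max(done.values())

proc = [[3, 2], [3, 4], [2, 3]]            # 3 jobs, 2 stages (illustrative)
cmax = ffs_makespan([0, 1, 2], proc, [2, 1])
print(cmax)
```

With the second stage reduced to a single machine, the parallel first stage lets jobs 0 and 1 run concurrently, which is exactly the flexibility that distinguishes the FFSP from a simple flow shop.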
Lombardo, Luigi; Bachofer, F.; Cama, M.; Märker, M.; Rotigliano, E.
2016-01-01
This study aims at evaluating the performance of the Maximum Entropy method in assessing landslide susceptibility, exploiting topographic and multispectral remote sensing predictors. We selected the catchment of the Giampilieri stream, which is located in the north-eastern sector of Sicily (southern Italy), as test site. On 1/10/2009, a storm rainfall triggered in this area hundreds of debris flow/avalanche phenomena causing extensive economical damage and loss of life. Within this area a presence-only-based statistical method was applied to obtain susceptibility models capable of distinguish future activation sites of debris flow and debris slide, which where the main source failure mechanisms for flow or avalanche type propagation. The set of predictors used in this experiment comprised primary and secondary topographic attributes, derived by processing a high resolution digital elevation model, CORINE land cover data and a set of vegetation and mineral indices obtained by processing multispectral ASTER images. All the selected data sources are dated before the disaster. A spatially random partition technique was adopted for validation, generating fifty replicates for each of the two considered movement typologies in order to assess accuracy, precision and reliability of the models. The debris slide and debris flow susceptibility models produced high performances with the first type being the best fitted. The evaluation of the probability estimates around the mean value for each mapped pixel shows an inverted relation, with the most robust models corresponding to the debris flows. With respect to the role of each predictor within the modelling phase, debris flows appeared to be primarily controlled by topographic attributes whilst the debris slides were better explained by remotely sensed derived indices, particularly by the occurrence of previous wildfires across the slope. The overall excellent performances of the two models suggest promising perspectives for
Lombardo, Luigi
2016-07-18
This study aims at evaluating the performance of the Maximum Entropy method in assessing landslide susceptibility, exploiting topographic and multispectral remote sensing predictors. We selected the catchment of the Giampilieri stream, located in the north-eastern sector of Sicily (southern Italy), as the test site. On 1 October 2009, a storm rainfall triggered hundreds of debris flow/avalanche phenomena in this area, causing extensive economic damage and loss of life. Within this area a presence-only-based statistical method was applied to obtain susceptibility models capable of distinguishing future activation sites of debris flows and debris slides, which were the main source failure mechanisms for flow- or avalanche-type propagation. The set of predictors used in this experiment comprised primary and secondary topographic attributes, derived by processing a high-resolution digital elevation model, CORINE land cover data, and a set of vegetation and mineral indices obtained by processing multispectral ASTER images. All the selected data sources predate the disaster. A spatially random partition technique was adopted for validation, generating fifty replicates for each of the two considered movement typologies in order to assess the accuracy, precision, and reliability of the models. The debris slide and debris flow susceptibility models produced high performances, with the first type being the best fitted. The evaluation of the probability estimates around the mean value for each mapped pixel shows an inverted relation, with the most robust models corresponding to the debris flows. With respect to the role of each predictor within the modelling phase, debris flows appeared to be primarily controlled by topographic attributes, whilst the debris slides were better explained by remotely sensed indices, particularly by the occurrence of previous wildfires across the slope. The overall excellent performances of the two models suggest promising perspectives for
Directory of Open Access Journals (Sweden)
GH. ŞERBAN
2016-03-01
The purpose of this paper is to identify and locate species related to habitats in the Pricop-Huta-Certeze and Upper Tisa Natura 2000 protected areas (PHCTS) and to determine whether they are vulnerable to risks induced by maximum flow phases. The first chapter briefly reviews the morphometric parameters of the hydrographic networks within the study area, as well as the frequency of maximum flow phases. After the second chapter, which describes the methods and databases used in the study, we identify the areas covered by water during floods and determine the risk level associated with these areas. The GIS modelling reveals that the high flood risk has a small extent for the natural environment of the protected areas and a greater extent for the anthropic environment. The last chapter refers to several species of fish and batrachians, as well as amphibious mammals identified in the study area, that are vulnerable to floods (high turbidity, reduced dissolved oxygen, habitat destruction, etc.).
Cappelli, Daniele; Mansour, Nagi N.
2012-01-01
Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.
Solving implicit multi-mesh flow and conjugate heat transfer problems with RELAP-7
International Nuclear Information System (INIS)
Zou, L.; Peterson, J.; Zhao, H.; Zhang, H.; Andrs, D.; Martineau, R.
2013-01-01
The fully implicit simulation capability of RELAP-7 for solving multi-mesh flow and conjugate heat transfer problems for reactor system safety analysis is presented. Compared to general single-mesh simulations, reactor system safety analysis codes face unique challenges due to their highly simplified, interconnected, one-dimensional and zero-dimensional flow networks describing multiple physics with significantly different time and length scales. To use a Jacobian-free Newton-Krylov solver, preconditioning is generally required for the Krylov method. The way reactor safety analysis codes treat the interconnected flow network and conjugate heat transfer also introduces challenges in providing the preconditioning matrix. Typical flow and conjugate heat transfer problems involved in reactor safety analysis using RELAP-7, as well as the special treatment of the preconditioning matrix, are presented in detail. (authors)
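The Jacobian-free Newton-Krylov approach mentioned above rests on a simple identity: a Krylov method never needs the Jacobian matrix itself, only Jacobian-vector products, and these can be approximated by finite differences of the nonlinear residual. A minimal sketch of that idea (the residual `F` and all numbers are invented for illustration; this is not RELAP-7 code):

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    """Jacobian-free approximation of the Jacobian-vector product J(u) @ v,
    using a first-order finite difference of the residual function F."""
    return (F(u + eps * v) - F(u)) / eps

# Toy residual F(u) = [u0^2 - 2, u0*u1 - 3]; exact Jacobian J = [[2*u0, 0], [u1, u0]].
F = lambda u: np.array([u[0] ** 2 - 2.0, u[0] * u[1] - 3.0])
u = np.array([1.0, 2.0])
v = np.array([1.0, 1.0])
print(jfnk_matvec(F, u, v))  # close to the exact J @ v = [2, 3]
```

In a full solver this matvec is handed to a Krylov iteration such as GMRES, which is where the preconditioning challenge discussed in the abstract enters.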
Flow Formulation-based Model for the Curriculum-based Course Timetabling Problem
DEFF Research Database (Denmark)
Bagger, Niels-Christian Fink; Kristiansen, Simon; Sørensen, Matias
2015-01-01
In this work we will present a new mixed integer programming formulation for the curriculum-based course timetabling problem. We show that the model contains an underlying network model by dividing the problem into two models and then connecting the two models back into one model using a maximum flow problem. This decreases the number of integer variables significantly and improves the performance compared to the basic formulation. It also shows competitiveness with other approaches based on mixed integer programming from the literature and improves the currently best known lower bound on one data instance in the benchmark data set from the second international timetabling competition.
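The maximum flow subproblem that links such models can be solved with any standard augmenting-path method. A compact Edmonds-Karp sketch on an adjacency-matrix network (the 4-node example network is invented, not taken from the paper):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest (BFS) paths."""
    n = len(capacity)
    residual = [row[:] for row in capacity]  # residual capacities
    flow = 0
    while True:
        # BFS for an augmenting path from source to sink.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:          # no augmenting path left
            return flow
        # Find the bottleneck capacity along the path, then push flow.
        bottleneck = float("inf")
        v = sink
        while v != source:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = sink
        while v != source:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Small example: 4-node network, maximum flow from node 0 to node 3.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 3],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # → 5
```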
Dual plane problems for creeping flow of power-law incompressible medium
Directory of Open Access Journals (Sweden)
Dmitriy S. Petukhov
2016-09-01
In this paper, we consider the class of solutions for a creeping plane flow of an incompressible medium with power-law rheology which are written as the product of an arbitrary power of the radial coordinate and an arbitrary function of the angular coordinate of the polar coordinate system covering the plane. This class of solutions represents the asymptotics of fields in the vicinity of singular points in the domain occupied by the examined medium. We have ascertained the duality of two problems for a plane with a wedge-shaped notch, at whose boundaries in one of the problems the components of the surface force vector vanish, while in the other the components of the velocity vector vanish. We have investigated the asymptotics and eigensolutions of the dual nonlinear eigenvalue problems in relation to the rheological exponent and the opening angle of the notch, for the branch associated with the eigenvalue of the Hutchinson-Rice-Rosengren problem, known from the problem of stress distribution over a notched plane for a power-law medium. In the context of the dual problem we have determined the velocity distribution in the flow of a power-law medium at the vertex of a rigid wedge. We have also found another two eigenvalues, one of which was determined by V. V. Sokolovsky for the problem of power-law fluid flow in a convergent channel.
Directory of Open Access Journals (Sweden)
Jilian Wu
2013-01-01
We discuss several stabilized finite element methods, namely the penalty, regular, multiscale enrichment, and local Gauss integration methods, for the steady incompressible flow problem with damping, based on the lowest equal-order finite element space pair. We then compare them numerically in three examples, which show that the local Gauss integration method has good stability, efficiency, and accuracy properties and is, on the whole, better than the others for the steady incompressible flow problem with damping. To our surprise, however, the regular method requires less CPU time and has better accuracy properties when a Crout solver is used.
The Planar Sandwich and Other 1D Planar Heat Flow Test Problems in ExactPack
Energy Technology Data Exchange (ETDEWEB)
Singleton, Jr., Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-01-24
This report documents the implementation of several related 1D heat flow problems in the verification package ExactPack [1]. In particular, the planar sandwich class defined in Ref. [2], as well as the classes PlanarSandwichHot, PlanarSandwichHalf, and other generalizations of the planar sandwich problem, are defined and documented here. A rather general treatment of 1D heat flow is presented, whose main results have been implemented in the class Rod1D. All planar sandwich classes are derived from the parent class Rod1D.
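For orientation, the kind of 1D heat-flow profile that such verification problems exercise can be reproduced with a few lines of explicit finite differencing. This is a generic FTCS sketch, not ExactPack's analytic solution; the function name, grid, and coefficients are illustrative:

```python
import numpy as np

def heat_rod_1d(u0, alpha, dx, dt, steps):
    """March the 1D heat equation u_t = alpha * u_xx with the explicit
    FTCS scheme; endpoint values are held fixed (Dirichlet conditions)."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is stable only for r <= 1/2"
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

# Sandwich-like setup: hot left wall (T=1), cold right wall (T=0), 11 nodes.
u = heat_rod_1d([1.0] + [0.0] * 10, alpha=1.0, dx=0.1, dt=0.004, steps=2000)
print(u[5])  # the midpoint tends to 0.5 as the profile becomes linear
```

Against a scheme like this, an exact solution (as implemented in a verification package) supplies the reference profile for convergence studies.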
Energy Technology Data Exchange (ETDEWEB)
Deister, F.; Hirschel, E.H. [Univ. Stuttgart, IAG, Stuttgart (Germany); Waymel, F.; Monnoyer, F. [Univ. de Valenciennes, LME, Valenciennes (France)
2003-07-01
An automatic adaptive hybrid Cartesian grid generation and simulation system is presented together with applications. The primary computational grid is an octree Cartesian grid. A quasi-prismatic grid may be added for resolving the boundary-layer region of viscous flow around the solid body. For external flow simulations, the flow solver TAU from the Deutsches Zentrum für Luft- und Raumfahrt (DLR) is integrated into the simulation system. Coarse grids, which are required by the multilevel method, are generated automatically. As an application to an internal problem, the thermal and dynamic modeling of a subway station is presented. (orig.)
Kawai, T.
Among the topics discussed are the application of FEM to nonlinear free surface flow, Navier-Stokes shallow water wave equations, incompressible viscous flows and weather prediction, the mathematical analysis and characteristics of FEM, penalty function FEM, convective, viscous, and high Reynolds number FEM analyses, the solution of time-dependent, three-dimensional and incompressible Navier-Stokes equations, turbulent boundary layer flow, FEM modeling of environmental problems over complex terrain, and FEM's application to thermal convection problems and to the flow of polymeric materials in injection molding processes. Also covered are FEMs for compressible flows, including boundary layer flows and transonic flows, hybrid element approaches for wave hydrodynamic loadings, FEM acoustic field analyses, and FEM treatment of free surface flow, shallow water flow, seepage flow, and sediment transport. Boundary element methods and FEM computational technique topics are also discussed. For individual items see A84-25834 to A84-25896
A trust region interior point algorithm for optimal power flow problems
Energy Technology Data Exchange (ETDEWEB)
Wang Min [Hefei University of Technology (China). Dept. of Electrical Engineering and Automation; Liu Shengsong [Jiangsu Electric Power Dispatching and Telecommunication Company (China). Dept. of Automation
2005-05-01
This paper presents a new algorithm that uses the trust region interior point method to solve nonlinear optimal power flow (OPF) problems. The OPF problem is solved by a primal/dual interior point method with multiple centrality corrections as a sequence of linearized trust region sub-problems. It is the trust region that controls the linear step size and ensures the validity of the linear model. The convergence of the algorithm is improved through the modification of the trust region sub-problem. Numerical results for standard IEEE systems and two realistic networks ranging in size from 14 to 662 buses are presented. The computational results show that the proposed algorithm is very effective for optimal power flow applications and compares favorably with the successive linear programming (SLP) method. Comparison with the predictor/corrector primal/dual interior point (PCPDIP) method is also made to demonstrate the superiority of the multiple centrality corrections technique. (author)
An analytical solution to the heat transfer problem in thick-walled Hunt flow
International Nuclear Information System (INIS)
Bluck, Michael J; Wolfendale, Michael J
2017-01-01
Highlights: • Convective heat transfer in Hunt-type flow of a liquid metal in a rectangular duct. • Analytical solution for the H1 constant peripheral temperature condition in a rectangular duct. • New H1 result demonstrating the enhancement of heat transfer due to flow distortion by the applied magnetic field. • Analytical solution for the H2 constant peripheral heat flux condition in a rectangular duct. • New H2 result demonstrating the reduction of heat transfer due to flow distortion by the applied magnetic field. • Results are important for validation of CFD in magnetohydrodynamics and for implementation of systems code approaches. - Abstract: The flow of a liquid metal in a rectangular duct, subject to a strong transverse magnetic field, is of interest in a number of applications. An important application of such flows is in the context of coolants in fusion reactors, where heat is transferred to a lead-lithium eutectic. It is vital, therefore, that the heat transfer mechanisms are understood. Forced convection heat transfer is strongly dependent on the flow profile. In the hydrodynamic case, Nusselt numbers and the like have long been well characterised in duct geometries. In the case of liquid metals in strong magnetic fields (magnetohydrodynamics), the flow profiles are very different and one can expect a concomitant effect on convective heat transfer. For fully developed laminar flows, the magnetohydrodynamic problem can be characterised in terms of two coupled partial differential equations. The problem of heat transfer for perfectly electrically insulating boundaries (the Shercliff case) has been studied previously (Bluck et al., 2015). In this paper, we demonstrate corresponding analytical solutions for the case of conducting Hartmann walls of arbitrary thickness. The flow is very different from the Shercliff case, exhibiting jets near the side walls and core flow suppression, which have profound effects on heat transfer.
A point implicit time integration technique for slow transient flow problems
Energy Technology Data Exchange (ETDEWEB)
Kadioglu, Samet Y., E-mail: kadioglu@yildiz.edu.tr [Department of Mathematical Engineering, Yildiz Technical University, 34210 Davutpasa-Esenler, Istanbul (Turkey); Berry, Ray A., E-mail: ray.berry@inl.gov [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States); Martineau, Richard C. [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States)
2015-05-15
Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (that can be located at cell centers, cell edges, or cell nodes) implicitly and the rest of the information related to same or other variables are handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function(s) evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
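The essence of a point-implicit update is visible already on a scalar stiff relaxation equation: the stiff term is evaluated at the new time level, but because it is local ("point") the update is a closed-form formula rather than a global nonlinear solve. A schematic illustration only (not RELAP-7 code; the rate constant and forcing are invented):

```python
def point_implicit_step(y, t, dt, k, forcing):
    """One point-implicit step for y' = -k*(y - forcing(t)).
    The stiff linear term is treated implicitly, but since it is local
    the update is an explicit formula -- no Newton iteration is needed."""
    return (y + dt * k * forcing(t + dt)) / (1.0 + dt * k)

# Stiff relaxation toward a slowly varying forcing: large steps stay stable.
k = 1.0e4
forcing = lambda t: 1.0 + 0.1 * t
y, t, dt = 0.0, 0.0, 0.1          # dt*k = 1000: explicit Euler would blow up
for _ in range(50):
    y = point_implicit_step(y, t, dt, k, forcing)
    t += dt
print(abs(y - forcing(t)))         # the solution tracks the slow forcing
```

The same stability-without-iteration trade-off is what makes such schemes attractive for long slow transients with very large time steps.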
Parallel patterns determination in solving cyclic flow shop problem with setups
Directory of Open Access Journals (Sweden)
Bożejko Wojciech
2017-06-01
The subject of this work is a new idea of blocks for the cyclic flow shop problem with setup times, using multiple patterns of different sizes determined for each machine, constituting an optimal schedule of cities for the traveling salesman problem (TSP). We propose to take advantage of the Intel Xeon Phi parallel computing environment during the so-called 'blocks' determination based on patterns, significantly improving the quality of the obtained results.
A Special Class of Univalent Functions in Hele-Shaw Flow Problems
Directory of Open Access Journals (Sweden)
Paula Curt
2011-01-01
We study the time evolution of the free boundary of a viscous fluid for planar flows in Hele-Shaw cells under injection. Applying methods from the theory of univalent functions, we prove the invariance in time of the Φ-likeness property (a geometric property which includes starlikeness and spiral-likeness) for two basic cases: the inner problem and the outer problem. We study both zero and nonzero surface tension models. Certain particular cases are also presented.
High order methods for incompressible fluid flow: Application to moving boundary problems
Energy Technology Data Exchange (ETDEWEB)
Bjoentegaard, Tormod
2008-04-15
Fluid flows with moving boundaries are encountered in a large number of real life situations, with two such types being fluid-structure interaction and free-surface flows. Fluid-structure phenomena are for instance apparent in many hydrodynamic applications: wave effects on offshore structures, sloshing and fluid induced vibrations, and aeroelasticity, i.e. flutter and dynamic response. Free-surface flows can be considered as a special case of a fluid-fluid interaction where one of the fluids is practically inviscid, such as air. This type of flow arises in many disciplines such as marine hydrodynamics, chemical engineering, material processing, and geophysics. The driving forces for free-surface flows may be of large scale such as gravity or inertial forces, or forces due to surface tension which operate on a much smaller scale. Free-surface flows with surface tension as a driving mechanism include the flow of bubbles and droplets, and the evolution of capillary waves. In this work we consider incompressible fluid flow, which is governed by the incompressible Navier-Stokes equations. There are several challenges when simulating moving boundary problems numerically, and these include - Spatial discretization - Temporal discretization - Imposition of boundary conditions - Solution strategy for the linear equations. These are some of the issues which will be addressed in this introduction. We will first formulate the problem in the arbitrary Lagrangian-Eulerian framework, and introduce the weak formulation of the problem. Next, we discuss the spatial and temporal discretization before we move to the imposition of surface tension boundary conditions. In the final section we discuss the solution of the resulting linear system of equations. (Author). refs., figs., tabs
A.R. Ansari; B. Hossain; B. Koren (Barry); G.I. Shishkin (Gregori)
2007-01-01
We investigate the model problem of flow of a viscous incompressible fluid past a symmetric curved surface when the flow is parallel to its axis. This problem is known to exhibit boundary layers. As the problem does not have solutions in closed form, it is modelled by boundary-layer
Energy Technology Data Exchange (ETDEWEB)
1979-01-01
The booklet presents the full text of 13 contributions to a Colloquium held at Karlsruhe in Sept. 1979. The main topics of the papers are the evaluation of mathematical models to solve flow problems in tide water, seas, rivers, groundwater and in the earth atmosphere. See further hints under relevant topics.
Stable Galerkin versus equal-order Galerkin least-squares elements for the Stokes flow problem
International Nuclear Information System (INIS)
Franca, L.P.; Frey, S.L.; Sampaio, R.
1989-11-01
Numerical experiments are performed for the Stokes flow problem employing a stable Galerkin method and a Galerkin/least-squares method with equal-order elements. Error estimates for the methods tested herein are reviewed. The numerical results presented attest to the good stability properties of all methods examined herein. (A.C.A.S.) [pt]
The Cauchy problem for a model of immiscible gas flow with large data
Energy Technology Data Exchange (ETDEWEB)
Sande, Hilde
2008-12-15
The thesis consists of an introduction and two papers; 1. The solution of the Cauchy problem with large data for a model of a mixture of gases. 2. Front tracking for a model of immiscible gas flow with large data. (AG) refs, figs
A Bee Colony Optimization Approach for Mixed Blocking Constraints Flow Shop Scheduling Problems
Directory of Open Access Journals (Sweden)
Mostafa Khorramizadeh
2015-01-01
The flow shop scheduling problem with mixed blocking constraints and makespan minimization is investigated. Taguchi orthogonal arrays and path relinking, along with some efficient local search methods, are used to develop a metaheuristic algorithm based on bee colony optimization. In order to assess the performance of the proposed algorithm, two well-known test problems are considered. Computational results show that the presented algorithm is competitive with well-known algorithms from the literature, especially for large-sized problems.
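For reference, the makespan objective that such metaheuristics minimize is computed by a simple recursion over the job sequence. A sketch for the classic permutation flow shop without blocking (the 3-job, 2-machine instance is invented for illustration):

```python
def makespan(sequence, proc):
    """Completion-time recursion for a permutation flow shop:
    C[j][m] = max(C[j-1][m], C[j][m-1]) + p[job_j][m]."""
    m = len(proc[0])
    finish = [0.0] * m               # completion time of the last job on each machine
    for job in sequence:
        prev = 0.0                   # completion on the previous machine
        for i in range(m):
            prev = max(finish[i], prev) + proc[job][i]
            finish[i] = prev
    return finish[-1]

# 3 jobs x 2 machines; Johnson's rule gives an optimal order here.
proc = [[3, 2], [1, 4], [2, 3]]
print(makespan([1, 2, 0], proc))  # → 10
```

Blocking constraints modify this recursion (a job may not leave a machine until the next one is free), which is what makes the problem in the abstract substantially harder.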
A Local Search Algorithm for the Flow Shop Scheduling Problem with Release Dates
Directory of Open Access Journals (Sweden)
Tao Ren
2015-01-01
This paper discusses the flow shop scheduling problem of minimizing the makespan with release dates. By resequencing the jobs, a modified heuristic algorithm is obtained for handling large-sized problems. Moreover, based on some structural properties, a local search scheme is provided to improve the heuristic and obtain high-quality solutions for moderate-sized problems. A sequence-independent lower bound is presented to evaluate the performance of the algorithms. A series of simulation results demonstrates the effectiveness of the proposed algorithms.
An Analytical Model for Multilayer Well Production Evaluation to Overcome Cross-Flow Problem
Hakiki, Farizal; Wibowo, Aris T.; Rahmawati, Silvya D.; Yasutra, Amega; Sukarno, Pudjo
2017-01-01
One of the major concerns in a multi-layer system is that interlayer cross-flow may occur if reservoir fluids are produced from commingled layers that have unequal initial pressures. A reservoir's average pressure (pore fluid pressure) commonly increases with depth; its productivity or injectivity, however, does not necessarily follow. A layer with a relatively low average pressure and high injectivity tends to experience the cross-flow problem: fluid from a bottom layer flows into an upper layer and restricts the upper layer's fluid from flowing into the wellbore, as if the bottom layer were performing an injection treatment. Since this is a production well, the study uses the productivity index, rather than the injectivity index, as the parameter that accounts for the cross-flow problem. The analytical study models the multilayer reservoir so as to avoid the cross-flow problem, and the model is tested with both hypothetical and real field data. The scope of this study is: (a) to develop a mathematical solution that determines the production rate from each layer; (b) to assess different scenarios for optimizing the production rate, namely the pump setting depth and the performance of an in-situ choke (ISC) installation. The ISC acts like an inflow control device (ICD) that helps reduce cross-flow occurrence. A macro program was employed to write the code and develop the interface, and the analytical model is solved with a fast iterative procedure. Comparison shows that the mathematical solution is in good agreement with results derived from commercial software.
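The cross-flow condition described above can be illustrated with the straight-line inflow model q_i = J_i (p_i - p_wf): whenever the flowing bottom-hole pressure exceeds a layer's average pressure, that layer's rate turns negative, i.e. the layer takes fluid instead of producing it. A sketch with invented numbers (not the paper's model or field data):

```python
def layer_rates(J, p_res, p_wf):
    """Per-layer rate from the straight-line inflow model q_i = J_i*(p_i - p_wf).
    A negative rate flags cross-flow: fluid is being injected into that layer."""
    return [Ji * (pi - p_wf) for Ji, pi in zip(J, p_res)]

# Two commingled layers: a deep high-pressure layer and a shallow low-pressure one.
J = [2.0, 5.0]            # productivity indices (e.g. stb/d/psi), hypothetical
p_res = [3000.0, 2500.0]  # average layer pressures (psi), hypothetical
for p_wf in (2800.0, 2200.0):
    print(p_wf, layer_rates(J, p_res, p_wf))
# At p_wf = 2800 the shallow layer's rate is negative: interlayer cross-flow.
# Drawing p_wf down to 2200 makes both layers produce.
```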
New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times
Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid
2017-09-01
In the literature, multi-objective dynamic scheduling problems and simple priority rules are widely studied. Although simple rules are not efficient enough, due to their simplicity and lack of general insight, composite dispatching rules perform very well because they result from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objective of the problem is the minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. It is clear from the experimental results that the composite dispatching rules formed by genetic programming perform better at minimizing mean flow time and mean tardiness than the others.
MULTICRITERIA HYBRID FLOW SHOP SCHEDULING PROBLEM: LITERATURE REVIEW, ANALYSIS, AND FUTURE RESEARCH
Directory of Open Access Journals (Sweden)
Marcia de Fatima Morais
2014-12-01
This research focuses on the hybrid flow shop production scheduling problem, which is one of the most difficult problems to solve. The literature points to several studies that address the hybrid flow shop scheduling problem with monocriterion objective functions. Many real-world problems, however, involve several objective functions, which can often compete and conflict, leading researchers to direct their efforts toward the development of methods that take this variant into consideration. The goal of this study is to review and analyze the methods in the literature for solving the hybrid flow shop production scheduling problem with multicriteria functions. The analysis covers papers published over the years and considers the types of parallel machines, the approach used to develop solution methods, the type of method developed, the objective function, the performance criterion adopted, and the additional constraints considered. The results of reviewing and analyzing 46 papers revealed opportunities for future research on this topic, including the following: (i) use uniform and dedicated parallel machines; (ii) use exact and metaheuristic approaches; (iv) develop lower and upper bounds, dominance relations, and different search strategies to improve the computational time of the exact methods; (v) develop other types of metaheuristics; (vi) work with anticipatory setups; and (vii) add constraints faced by the production systems themselves.
International Nuclear Information System (INIS)
Cliffe, K.A.; Garratt, T.J.; Spence, A.
1992-03-01
This paper is concerned with the problem of computing a small number of eigenvalues of large sparse generalised eigenvalue problems arising from mixed finite element discretisations of time dependent equations modelling viscous incompressible flow. The eigenvalues of importance are those with smallest real part and can be used in a scheme to determine the stability of steady state solutions and to detect Hopf bifurcations. We introduce a modified Cayley transform of the generalised eigenvalue problem which overcomes a drawback of the usual Cayley transform applied to such problems. Standard iterative methods are then applied to the transformed eigenvalue problem to compute approximations to the eigenvalue of smallest real part. Numerical experiments are performed using a model of double diffusive convection. (author)
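The Cayley transform maps the generalized eigenproblem A x = λ B x to C x = θ x with C = (A - αB)⁻¹(A - βB) and θ = (λ - β)/(λ - α), so eigenvalues near the shift α become extremal and accessible to standard iterative methods. A minimal numerical check of this mapping, on small illustrative matrices (not the mixed finite element operators of the paper), assuming shift values α = 0.5, β = 4:

```python
import numpy as np

# Cayley transform of A x = lambda B x: C = (A - alpha*B)^{-1}(A - beta*B),
# whose eigenvalues are theta = (lambda - beta)/(lambda - alpha).
A = np.diag([1.0, 2.0, 3.0])   # illustrative matrices with known spectrum
B = np.eye(3)
alpha, beta = 0.5, 4.0         # assumed shift parameters

C = np.linalg.solve(A - alpha * B, A - beta * B)
theta = np.sort(np.linalg.eigvals(C).real)
expected = np.sort([(lam - beta) / (lam - alpha) for lam in [1.0, 2.0, 3.0]])
```

The paper's modified transform addresses infinite eigenvalues arising from the singular mass matrix of mixed discretisations; the plain transform above only conveys the spectral mapping idea.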
Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems
International Nuclear Information System (INIS)
BAER, THOMAS A.; SACKINGER, PHILIP A.; SUBIA, SAMUEL R.
1999-01-01
Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a ''pseudo-solid'' mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Also discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to distribute computational work equally across an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations are demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speed-ups for fixed problem size, a class of problems of immediate practical importance.
Description of internal flow problems by a boundary integral method with dipole panels
International Nuclear Information System (INIS)
Krieg, R.; Hailfinger, G.
1979-01-01
In reactor safety studies the failure of single components is postulated or sudden accident loadings are assumed, and the consequences are investigated. Often, as a first consequence, highly transient three dimensional flow problems occur. In contrast to classical flow problems, in most of the above cases the fluid velocities are relatively small whereas the accelerations assume high values. As a consequence, both viscosity effects and dynamic pressures, which are proportional to the square of the fluid velocities, are usually negligible. For cases where the excitation times are considerably longer than the times necessary for a wave to traverse characteristic regions of the fluid field, the fluid compressibility is also negligible. Under these conditions boundary integral methods are an appropriate tool to deal with the problem. Flow singularities are distributed over the fluid boundaries in such a way that pressure and velocity fields are obtained which satisfy the boundary conditions. In order to facilitate the numerical treatment, the fluid boundaries are approximated by a finite number of panels with uniform singularity distributions on each of them. Consequently the pressure and velocity field of the given problem may be obtained by superposition of the corresponding fields due to these panels, with their singularity intensities as unknown factors. Then satisfying the boundary conditions in as many boundary points as panels have been introduced yields a system of linear equations which in general allows for a unique determination of the unknown intensities. (orig./RW)
A service flow model for the liner shipping network design problem
DEFF Research Database (Denmark)
Plum, Christian Edinger Munk; Pisinger, David; Sigurd, Mikkel M.
2014-01-01
The formulation alleviates issues faced by arc flow formulations with regard to handling multiple calls to the same port, a problem which has not been fully dealt with by earlier LSNDP formulations. Multiple calls are handled by introducing service nodes, together with port nodes, in a graph representation...... of the network and a penalty for cargo that is not flowed. The model can be used to design liner shipping networks that utilize a container carrier's assets efficiently and to investigate possible scenarios of changed market conditions. The model is solved as a Mixed Integer Program. Results are presented for the two...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
Some applications of the moving finite element method to fluid flow and related problems
International Nuclear Information System (INIS)
Berry, R.A.; Williamson, R.L.
1983-01-01
The Moving Finite Element (MFE) method is applied to one-dimensional, nonlinear wave type partial differential equations which are characteristic of fluid dynamics and related flow phenomena problems. These equation systems tend to be difficult to solve because their transient solutions exhibit a spatial stiffness property, i.e., they represent physical phenomena of widely disparate length scales which must be resolved simultaneously. With the MFE method the node points automatically move (in theory) to optimal locations, giving a much better approximation than can be obtained with fixed mesh methods (with a reasonable number of nodes) and with significantly reduced artificial viscosity or diffusion content. Three applications are considered. In order of increasing complexity they are: (1) a thermal quench problem, (2) an underwater explosion problem, and (3) a gas dynamics shock tube problem. The results are briefly shown.
An improved sheep flock heredity algorithm for job shop scheduling and flow shop scheduling problems
Directory of Open Access Journals (Sweden)
Chandramouli Anandaraman
2011-10-01
Full Text Available The Job Shop Scheduling Problem (JSSP) and the Flow Shop Scheduling Problem (FSSP) are strongly NP-complete combinatorial optimization problems among the class of typical production scheduling problems. An improved Sheep Flock Heredity Algorithm (ISFHA) is proposed in this paper to find a schedule of operations that minimizes the makespan. In the ISFHA, the pairwise mutation operation is replaced by a single point mutation process with a probabilistic property which guarantees the feasibility of the solutions in the local search domain. A Robust-Replace (R-R) heuristic is introduced in place of chromosomal crossover to enhance the global search and to improve convergence. The R-R heuristic is found to enhance the exploring potential of the algorithm and enrich the diversity of neighborhoods. Experimental results reveal the effectiveness of the proposed algorithm, whose optimization performance is markedly superior to that of genetic algorithms and is comparable to the best results reported in the literature.
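A mutation that always yields a feasible schedule can be built as a remove-and-reinsert move on a permutation: the result is guaranteed to remain a valid job sequence. This is a simplified sketch of the idea behind a feasibility-preserving single point mutation, not the exact ISFHA operator.

```python
import random

# Feasibility-preserving single-point mutation (simplified sketch):
# with probability pm, remove a job from one position and reinsert it at
# another, so the offspring is always a valid permutation of the jobs.

def single_point_mutation(seq, pm=0.8, rng=random):
    seq = list(seq)
    if rng.random() < pm:
        i = rng.randrange(len(seq))
        job = seq.pop(i)               # remove one job...
        j = rng.randrange(len(seq) + 1)
        seq.insert(j, job)             # ...and reinsert it elsewhere
    return seq

random.seed(42)
child = single_point_mutation(list(range(10)), pm=1.0)
```

Because the move only relocates an existing job, no repair step is needed, unlike crossover operators that can duplicate or drop jobs.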
New Mathematical Model and Algorithm for Economic Lot Scheduling Problem in Flexible Flow Shop
Directory of Open Access Journals (Sweden)
H. Zohali
2018-03-01
Full Text Available This paper addresses the lot sizing and scheduling problem for a number of products in a flexible flow shop with identical parallel machines. The production stages are in series, separated by finite intermediate buffers. The objective is to minimize the sum of setup and inventory holding costs per unit of time. The available mathematical model of this problem in the literature suffers from huge complexity in terms of size and computation. In this paper, a new mixed integer linear program is developed to deal with the huge dimensions of the problem. Also, a new metaheuristic algorithm is developed for the problem. The results of the numerical experiments demonstrate a significant advantage of the proposed model and algorithm over the available models and algorithms in the literature.
Directory of Open Access Journals (Sweden)
Mauricio Iwama Takano
2019-01-01
Full Text Available This paper addresses the minimization of makespan for the permutation flow shop scheduling problem with blocking and sequence- and machine-dependent setup times, a problem not yet addressed in previous studies. The 14 best known heuristics for the permutation flow shop problem with blocking and no setup times are presented and then adapted to the problem in two different ways, resulting in 28 different heuristics. The heuristics are then compared using the Taillard database. As there is no other work that addresses the problem with blocking and sequence- and machine-dependent setup times, a database for the setup times was created. The setup time value was uniformly distributed between 1% and 10%, 50%, 100% and 125% of the processing time value. Computational tests are then presented for each of the 28 heuristics, comparing the mean relative deviation of the makespan, the computational time and the percentage of successes of each method. Results show that the heuristics were capable of providing interesting results.
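Under blocking, a job cannot leave a machine until the next machine is free. The makespan of a given permutation can be computed with the standard departure-time recursion; the sketch below omits the setup times of the paper's variant for brevity.

```python
# Makespan of a permutation flow shop with blocking (no intermediate
# buffers), via the classical departure-time recursion. Setup times are
# omitted here, unlike the paper's variant.

def blocking_makespan(p):
    n, m = len(p), len(p[0])
    # D[k] = departure time of the previously scheduled job from machine k
    D = [0.0] * (m + 1)
    for j in range(n):
        new = [0.0] * (m + 1)
        new[0] = D[1]                      # job j starts when machine 1 clears
        for k in range(1, m):
            # finish on machine k, but stay (blocked) until k+1 is free
            new[k] = max(new[k - 1] + p[j][k - 1], D[k + 1])
        new[m] = new[m - 1] + p[j][m - 1]  # the last machine never blocks
        D = new
    return D[m]

# two jobs, two machines: processing times p[job][machine]
cmax = blocking_makespan([[2, 1], [1, 2]])
```

The constructive heuristics compared in the paper all rely on an evaluation of this kind inside their insertion or ranking steps.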
Vectorization on the star computer of several numerical methods for a fluid flow problem
Lambiotte, J. J., Jr.; Howser, L. M.
1974-01-01
Some numerical methods are reexamined in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.
A New Spectral Local Linearization Method for Nonlinear Boundary Layer Flow Problems
Directory of Open Access Journals (Sweden)
S. S. Motsa
2013-01-01
Full Text Available We propose a simple and efficient method for solving highly nonlinear systems of boundary layer flow problems with exponentially decaying profiles. The algorithm of the proposed method is based on an innovative idea of linearizing and decoupling the governing systems of equations and reducing them into a sequence of subsystems of differential equations which are solved using spectral collocation methods. The applicability of the proposed method, hereinafter referred to as the spectral local linearization method (SLLM, is tested on some well-known boundary layer flow equations. The numerical results presented in this investigation indicate that the proposed method, despite being easy to develop and numerically implement, is very robust in that it converges rapidly to yield accurate results and is more efficient in solving very large systems of nonlinear boundary value problems of the similarity variable boundary layer type. The accuracy and numerical stability of the SLLM can further be improved by using successive overrelaxation techniques.
Directory of Open Access Journals (Sweden)
Muhammad Fhadli
2016-12-01
This research proposes an implementation of the software execution scheduling process at a software house, modeled as a Flow-Shop Problem (FSP), using the Artificial Bee Colony (ABC) algorithm. The FSP requires completing a set of jobs/tasks with the overall cost at a minimum. One constraint should be noted in this research: the uncertain completion times of the jobs. We present a solution consisting of a sequence order of project execution with its overall completion time at a minimum. An experiment is performed with three attempts under each experimental condition, namely an experiment on the iteration parameter and an experiment on the limit parameter. From this experiment, we conclude that the algorithm explained in this paper can reduce project execution time if the total iterations and total colony size are increased. Keywords: optimization, flow-shop problem, artificial bee colony, swarm intelligence, meta-heuristic.
A New Artificial Immune System Algorithm for Multiobjective Fuzzy Flow Shop Problems
Directory of Open Access Journals (Sweden)
Cengiz Kahraman
2009-12-01
Full Text Available In this paper a new artificial immune system (AIS) algorithm is proposed to solve multi-objective fuzzy flow shop scheduling problems. A new mutation operator is also described for this AIS. Fuzzy sets are used to model processing times and due dates. The objectives are to minimize the average tardiness and the number of tardy jobs. The developed AIS algorithm is tested on real-world data collected at an engine cylinder liner manufacturing process. The feasibility and effectiveness of the proposed AIS is demonstrated by comparing it with genetic algorithms. Computational results demonstrate that the proposed AIS algorithm is a more effective metaheuristic for multi-objective flow shop scheduling problems with fuzzy processing times and due dates.
Scheduling stochastic two-machine flow shop problems to minimize expected makespan
Directory of Open Access Journals (Sweden)
Mehdi Heydari
2013-07-01
Full Text Available During the past few years, despite tremendous contributions on the deterministic flow shop problem, only a limited number of works have been dedicated to stochastic cases. This paper examines stochastic scheduling problems in a two-machine flow shop environment for expected makespan minimization, where the processing times of jobs are normally distributed. Since jobs have stochastic processing times, to minimize the expected makespan, the expected sum of the second machine's free times is minimized. In other words, by minimizing the waiting times of the second machine, it is possible to reach the minimum of the objective function. A mathematical method is proposed which utilizes the properties of the normal distribution. Furthermore, this method can be used as a heuristic for other distributions, as long as the means and variances are available. The performance of the proposed method is explored using some numerical examples.
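For a fixed sequence, the expected makespan under normally distributed processing times can be estimated by Monte Carlo simulation. This is a numerical companion to the analytical approach, with invented means and a common standard deviation; it also illustrates that the expected makespan exceeds the makespan of the mean times (Jensen's inequality, since the makespan is convex in the processing times).

```python
import numpy as np

# Monte Carlo estimate of the expected makespan of a fixed job sequence
# on a two-machine flow shop with normal processing times (data invented).

def makespan_2m(p1, p2):
    c1 = np.cumsum(p1)                 # completion times on machine 1
    c2 = 0.0
    for j in range(len(p1)):
        c2 = max(c2, c1[j]) + p2[j]    # machine 2 waits for machine 1
    return c2

rng = np.random.default_rng(0)
mu1, mu2 = np.array([3.0, 2.0, 4.0]), np.array([2.0, 3.0, 1.0])
sd = 0.5                               # assumed common std deviation
samples = [makespan_2m(rng.normal(mu1, sd), rng.normal(mu2, sd))
           for _ in range(20000)]
expected_makespan = float(np.mean(samples))
deterministic = makespan_2m(mu1, mu2)  # makespan of the mean times
```

The gap between `expected_makespan` and `deterministic` is exactly the effect the paper's expected-free-time analysis captures in closed form.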
International Nuclear Information System (INIS)
Gartling, D.K.
1978-04-01
The theoretical background for the finite element computer program, NACHOS, is presented in detail. The NACHOS code is designed for the two-dimensional analysis of viscous incompressible fluid flows, including the effects of heat transfer. A general description of the fluid/thermal boundary value problems treated by the program is described. The finite element method and the associated numerical methods used in the NACHOS code are also presented. Instructions for use of the program are documented in SAND77-1334
Study of flow over object problems by a nodal discontinuous Galerkin-lattice Boltzmann method
Wu, Jie; Shen, Meng; Liu, Chen
2018-04-01
The flow over object problems are studied by a nodal discontinuous Galerkin-lattice Boltzmann method (NDG-LBM) in this work. Different from the standard lattice Boltzmann method, the current method applies the nodal discontinuous Galerkin method to the streaming process in the LBM to solve the resultant pure convection equation, in which the spatial discretization is completed on unstructured grids and the low-storage explicit Runge-Kutta scheme is used for time marching. The present method thus overcomes the dependence of the standard LBM on uniform meshes. Moreover, the collision process in the LBM is completed by using the multiple-relaxation-time scheme. After validation of the NDG-LBM by simulating the lid-driven cavity flow, simulations of flows over a fixed circular cylinder, a stationary airfoil and rotating-stationary cylinders are performed. Good agreement of the present results with previous ones is achieved, which indicates that the current NDG-LBM is accurate and effective for flow over object problems.
Problems of unsteady temperature measurements in a pulsating flow of gas
International Nuclear Information System (INIS)
Olczyk, A
2008-01-01
Unsteady flow temperature is one of the most difficult and complex flow parameters to measure. Main problems concern insufficient dynamic properties of applied sensors and an interpretation of recorded signals, composed of static and dynamic temperatures. An attempt is made to solve these two problems in the case of measurements conducted in a pulsating flow of gas in the 0–200 Hz range of frequencies, which corresponds to real conditions found in exhaust pipes of modern diesel engines. As far as sensor dynamics is concerned, an analysis of requirements related to the thermometer was made, showing that there was no possibility of assuring such a high frequency band within existing solutions. Therefore, a method of double-channel correction of sensor dynamics was proposed and experimentally tested. The results correspond well with the calculations made by means of the proposed model of sensor dynamics. In the case of interpretation of the measured temperature signal, a method for distinguishing its two components was proposed. This decomposition considerably helps with a correct interpretation of unsteady flow phenomena in pipes
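A first-order sensor with time constant τ obeys dTm/dt = (T − Tm)/τ, so the gas temperature can be reconstructed from the measurement as T ≈ Tm + τ·dTm/dt. The sketch below is a simplified single-channel illustration of this compensation idea with assumed numbers, not the paper's double-channel method (which also estimates τ itself).

```python
import numpy as np

# Single-channel lag compensation for a first-order temperature sensor:
# reconstruct T from the lagging measurement Tm as T = Tm + tau*dTm/dt.
# tau, frequency and amplitudes are illustrative assumptions.

tau, dt = 0.05, 1e-4                      # sensor time constant [s], step [s]
t = np.arange(0.0, 0.2, dt)
T_true = 300.0 + 20.0 * np.sin(2 * np.pi * 50.0 * t)   # 50 Hz pulsation

# simulate the lagging sensor (explicit Euler on dTm/dt = (T - Tm)/tau)
Tm = np.empty_like(t)
Tm[0] = T_true[0]
for i in range(1, len(t)):
    Tm[i] = Tm[i - 1] + dt * (T_true[i - 1] - Tm[i - 1]) / tau

# invert the sensor model using the measurement and its derivative
T_rec = Tm + tau * np.gradient(Tm, dt)
err = float(np.max(np.abs(T_rec[5:-5] - T_true[5:-5])))
```

Note how strongly the raw measurement is attenuated (ωτ ≈ 16 at 50 Hz), while the reconstructed signal recovers nearly the full amplitude; this amplification of the derivative term is also why such compensation is sensitive to measurement noise.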
Managing the Budget: Stock-Flow Reasoning and the CO2 Accumulation Problem.
Newell, Ben R; Kary, Arthur; Moore, Chris; Gonzalez, Cleotilde
2016-01-01
The majority of people show persistent poor performance in reasoning about "stock-flow problems" in the laboratory. An important example is the failure to understand the relationship between the "stock" of CO2 in the atmosphere, the "inflow" via anthropogenic CO2 emissions, and the "outflow" via natural CO2 absorption. This study addresses potential causes of reasoning failures in the CO2 accumulation problem and reports two experiments involving a simple re-framing of the task as managing an analogous financial (rather than CO2 ) budget. In Experiment 1 a financial version of the task that required participants to think in terms of controlling debt demonstrated significant improvements compared to a standard CO2 accumulation problem. Experiment 2, in which participants were invited to think about managing savings, suggested that this improvement was fortuitous and coincidental rather than due to a fundamental change in understanding the stock-flow relationships. The role of graphical information in aiding or abetting stock-flow reasoning was also explored in both experiments, with the results suggesting that graphs do not always assist understanding. The potential for leveraging the kind of reasoning exhibited in such tasks in an effort to change people's willingness to reduce CO2 emissions is briefly discussed. Copyright © 2015 Cognitive Science Society, Inc.
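The stock-flow logic at the heart of the task is simple accumulation: the stock changes by inflow minus outflow each period, so it stabilizes only when inflow equals outflow. Merely holding emissions constant keeps the stock rising, which is the intuition participants typically miss. The numbers below are illustrative, not actual carbon-budget data.

```python
# Stock-flow accumulation: stock(t+1) = stock(t) + inflow(t) - outflow(t).
# Constant excess inflow makes the stock grow linearly; only matching
# inflow to outflow stabilizes it. Values are illustrative.

def stock_trajectory(stock, inflows, outflows):
    path = [stock]
    for inp, out in zip(inflows, outflows):
        stock += inp - out
        path.append(stock)
    return path

constant_inflow = stock_trajectory(100.0, [9.0] * 10, [5.0] * 10)
balanced = stock_trajectory(100.0, [5.0] * 10, [5.0] * 10)
```

The first trajectory rises by 4 units per period even though emissions "level off"; the second, with inflow equal to outflow, stays flat.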
Energy Technology Data Exchange (ETDEWEB)
Aragon-Aguilar, Alfonso; Izquierdo-Montalvo, Georgina; Pal-Verma, Mahendra; Santoyo-Gutierrez, Socrates [Instituto de Investigaciones Electricas (Mexico); Moya-Acosta, Sara L [Centro Nacional de Investigacion y Desarrollo Tecnologico (Mexico)
2009-01-15
Inflow performance relationships developed for petroleum and geothermal reservoirs are presented, four of which were selected for use in this work. These relationships were developed considering the features of a typical geothermal system. The performance of the selected relationships was assessed using data from production tests in several wells of different fields. A methodology is presented to determine the value of the maximum flow (W{sub max}) from the inflow relationships; its application is demonstrated using the data of ten production tests. It was found that the calculated value of W{sub max} under stabilization conditions may be related to the reservoir response. In general, there is good agreement between the values of W{sub max} calculated by the different methods; the differences in the W{sub max} values vary within 10%. It was found that the stability of the calculated W{sub max} values, as a response of the reservoir, is a function of the flow magnitude: wells with flow greater than 200 t/h reach stability of W{sub max} at openings 50% below their total capacity.
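An inflow performance relationship expresses the flow as a fraction of W_max in terms of the pressure ratio, so W_max can be estimated from a handful of production-test points by least squares. The sketch uses the classical Vogel form, W/W_max = 1 − 0.2(p/pr) − 0.8(p/pr)², which is one such relationship from petroleum engineering; the paper compares several forms, and the data below are synthetic, generated from a known W_max for the check.

```python
import numpy as np

# Least-squares estimate of W_max from test data with a Vogel-type IPR:
# W/W_max = 1 - 0.2*(p/pr) - 0.8*(p/pr)^2. Data are synthetic.

def vogel_fraction(p_ratio):
    return 1.0 - 0.2 * p_ratio - 0.8 * p_ratio ** 2

p_res = 100.0                                  # reservoir pressure (assumed)
p_wf = np.array([90.0, 70.0, 50.0, 30.0])      # flowing pressures from tests
W_true_max = 250.0                             # t/h, used only to make data
W_obs = W_true_max * vogel_fraction(p_wf / p_res)

# model is linear in W_max: W_obs = W_max * f, so the LS solution is
# W_max = (f . W_obs) / (f . f)
f = vogel_fraction(p_wf / p_res)
W_max_est = float(np.dot(f, W_obs) / np.dot(f, f))
```

With noisy field data the same one-parameter fit applies; comparing W_max estimates across several IPR forms is essentially what the paper's methodology does.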
Algorithm Preserving Mass Fraction Maximum Principle for Multi-component Flows
Institute of Scientific and Technical Information of China (English)
唐维军; 蒋浪; 程军波
2014-01-01
We propose a new method for compressible multi-component flows with a Mie-Gruneisen equation of state, based on mass fraction. The model preserves conservation of mass, momentum, and total energy for the mixture flow, and it preserves conservation of mass for each single component. Moreover, it prevents pressure and velocity from jumping across interfaces that separate regions of different fluid components. A wave propagation method is used to discretize this quasi-conservative system. A modification of the numerical method is adopted for the conservative equation of mass fraction, which preserves the maximum principle of the mass fraction. The unmodified wave propagation method applied to the conservation equations of the component masses cannot keep the mass fraction in the interval [0,1]. Numerical results confirm the validity of the method.
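The discrete maximum principle for a mass fraction can be seen in the simplest setting: first-order upwind advection under the CFL condition updates each cell with a convex combination of neighbouring values, so a fraction starting in [0, 1] stays in [0, 1]. This is only a one-dimensional illustration of the property; the paper's modified wave propagation scheme is far more elaborate.

```python
import numpy as np

# First-order upwind advection of a mass fraction Y at constant speed on a
# periodic grid. With CFL number c <= 1 the update is a convex combination
# of neighbours, so min/max of Y cannot grow: Y stays in [0, 1].

n, c = 200, 0.5                       # cells and CFL number (assumed)
Y = np.zeros(n)
Y[60:100] = 1.0                       # a slab of pure component 1

for _ in range(300):
    # Y_i <- (1 - c) * Y_i + c * Y_{i-1}  (upwind, positive velocity)
    Y = Y - c * (Y - np.roll(Y, 1))

in_bounds = (Y.min() >= -1e-12) and (Y.max() <= 1.0 + 1e-12)
```

The scheme is also conservative (the total of Y is unchanged), mirroring the paper's requirement that boundedness be achieved without sacrificing component mass conservation.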
Permutation flow-shop scheduling problem to optimize a quadratic objective function
Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu
2017-09-01
A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
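The WSPT order sorts jobs by the ratio p_j/w_j. The sketch below compares the total weighted quadratic completion time of the WSPT order against an arbitrary order on a single machine; the single-machine case only conveys the rule, while the paper analyses the m-machine permutation shop (where WSPT with a consistency condition is asymptotically optimal, not exact).

```python
# Weighted shortest processing time (WSPT) order: sort jobs by p_j / w_j,
# then evaluate the total weighted quadratic completion time sum w_j*C_j^2.
# Single-machine illustration with invented data.

def twqct(jobs):
    # jobs: list of (processing_time, weight)
    t, total = 0.0, 0.0
    for p, w in jobs:
        t += p                # completion time C_j of this job
        total += w * t * t    # weighted quadratic completion time
    return total

jobs = [(4.0, 1.0), (1.0, 2.0), (3.0, 3.0)]
wspt = sorted(jobs, key=lambda jw: jw[0] / jw[1])
cost_wspt, cost_given = twqct(wspt), twqct(jobs)
```

Here the WSPT order (ratios 0.5, 1.0, 4.0) gives cost 114 versus 258 for the original order; for the quadratic criterion WSPT is a strong heuristic rather than a guaranteed optimum.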
MHD and heat transfer benchmark problems for liquid metal flow in rectangular ducts
International Nuclear Information System (INIS)
Sidorenkov, S.I.; Hua, T.Q.; Araseki, H.
1994-01-01
Liquid metal cooling systems of a self-cooled blanket in a tokamak reactor will likely include channels of rectangular cross section where liquid metal is circulated in the presence of strong magnetic fields. MHD pressure drop, velocity distribution and heat transfer characteristics are important issues in the engineering design considerations. Computer codes for the reliable solution of three-dimensional MHD flow problems are needed for fusion relevant conditions. Argonne National Laboratory and The Efremov Institute have jointly defined several benchmark problems for code validation. The problems, described in this paper, are based on two series of rectangular duct experiments conducted at ANL; one of the series is a joint ANL/Efremov experiment. The geometries consist of variation of aspect ratio and wall thickness (thus wall conductance ratio). The transverse magnetic fields are uniform and nonuniform in the axial direction
Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags
ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu
2017-05-01
The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received concentrated attention, but the non-permutation problem (non-PFSP with time lags) seems to be neglected. With the aim of minimizing the makespan while satisfying time lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within nearly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of a traditional GA approach. The proposed research combines the PFSP and non-PFSP with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
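An iterated greedy algorithm alternates destruction (remove d random jobs) with greedy reconstruction (reinsert each removed job at its best position). The skeleton below sketches this control loop for a plain permutation flow shop with invented data; the time-lag constraints of the paper are omitted.

```python
import random

# Minimal iterated greedy (IG) skeleton for a permutation flow shop:
# destroy d random jobs, then greedily reinsert each at the position
# giving the smallest makespan. Time-lag constraints are omitted.

def makespan(perm, p):
    m = len(p[0])
    C = [0.0] * m
    for j in perm:
        C[0] += p[j][0]
        for k in range(1, m):
            C[k] = max(C[k], C[k - 1]) + p[j][k]
    return C[-1]

def iterated_greedy(p, iters=50, d=2, seed=1):
    rng = random.Random(seed)
    best = list(range(len(p)))
    for _ in range(iters):
        partial = best[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        for j in removed:              # greedy best-position reinsertion
            cand = [partial[:i] + [j] + partial[i:]
                    for i in range(len(partial) + 1)]
            partial = min(cand, key=lambda s: makespan(s, p))
        if makespan(partial, p) <= makespan(best, p):
            best = partial             # accept improving (or equal) moves
    return best

p = [[3, 2], [1, 4], [2, 2], [4, 1]]   # p[job][machine], illustrative
best = iterated_greedy(p)
```

Handling time lags would add lower/upper bounds on the gap between consecutive operations of a job inside the `makespan` evaluation; the destroy-rebuild loop itself is unchanged.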
Shao, H.; Huang, Y.; Kolditz, O.
2015-12-01
Multiphase flow problems are numerically difficult to solve, as they often contain nonlinear phase transition phenomena. A conventional technique is to introduce complementarity constraints under which fluid properties, such as liquid saturations, are confined within a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities, which can then be handled numerically by optimization algorithms. In this work, two different approaches utilizing complementarity constraints based on the persistent primary variables formulation [4] are implemented and investigated. The first approach, proposed by Marchand et al. [1], uses "local complementarity constraints", i.e. it couples the constraints with the local constitutive equations. The second approach [2,3], namely the "global complementarity constraints", applies the constraints globally together with the mass conservation equation. We discuss how these two approaches are applied to solve the non-isothermal compositional multiphase flow problem with phase change phenomena. Several benchmarks are presented to investigate the overall numerical performance of the different approaches, and their advantages and disadvantages are summarized. References: [1] E. Marchand, T. Mueller and P. Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model, Computational Geosciences 17(2): 431-442, (2013). [2] A. Lauser, C. Hager, R. Helmig, B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Water Resour., 34, (2011), 957-966. [3] J. Jaffré and A. Sboui. Henry's Law and Gas Phase Disappearance. Transp. Porous Media. 82, (2010), 521-526. [4] A. Bourgeat, M. Jurak and F. Smaï. Two-phase partially miscible flow and transport modeling in
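A complementarity condition a ≥ 0, b ≥ 0, a·b = 0 (for instance, "either the gas phase exists or its saturation is zero") can be rewritten as a single smooth(ish) equation via the Fischer-Burmeister function φ(a, b) = a + b − √(a² + b²), which vanishes exactly on the complementarity set. The toy example below solves a scalar complementarity problem with Newton's method; it only conveys the reformulation idea, not the coupled PDE solvers of the paper.

```python
import math

# Fischer-Burmeister reformulation: phi(a, b) = 0 iff a >= 0, b >= 0, ab = 0.
# Toy problem: find x with phi(x, 1 - x) = 0; the roots are x = 0 and x = 1
# (one of the paired quantities sits at its bound).

def phi(a, b):
    return a + b - math.sqrt(a * a + b * b)

def newton(f, x, h=1e-7, tol=1e-12, itmax=100):
    # Newton's method with a forward-difference derivative (sketch).
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / ((f(x + h) - fx) / h)
    return x

x_star = newton(lambda x: phi(x, 1.0 - x), 0.3)
residual = abs(phi(x_star, 1.0 - x_star))
```

In the multiphase flow context the same device turns the local or global saturation constraints into residual equations that a semismooth Newton method can attack alongside the conservation laws.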
A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning
Directory of Open Access Journals (Sweden)
Shang Bo-Wen
2016-01-01
Full Text Available Continuous prediction is widely used in broad communities spreading from social to business applications, and machine learning is an important method for this problem. When we use a machine learning method for prediction, we use the data in the training set to fit the model and estimate the distribution of data in the test set. But when we use machine learning for continuous prediction, we obtain new data as time goes by and use it to predict future data, and a problem may arise: as the size of the data set increases over time, the distribution changes and much garbage data accumulates in the training set. The garbage data should be removed, as it reduces the accuracy of the prediction. The main contribution of this article is using new data to detect the timeliness of historical data and remove the garbage data. We build a data flow model to describe how the data flow among the test set, training set, validation set and garbage set, and thereby improve the accuracy of prediction. As the data set changes, the best machine learning model will change. We design a hybrid voting algorithm to fit the data set better: it uses seven machine learning models to predict the same problem and uses the validation set to put different weights on the learning models, giving better models more weight. Experimental results show that, when the distribution of the data set changes over time, our data flow model can remove most of the garbage data and achieve a better result than the traditional method that adds all data to the data set, and our hybrid voting algorithm has a better prediction result than the average accuracy of the other prediction models.
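Validation-weighted voting can be sketched in a few lines: each model's weight is its accuracy on the validation set, and the ensemble predicts the class with the largest total weight. This is a simplified stand-in for the paper's seven-model hybrid voting algorithm, with three toy "models" over scalar inputs.

```python
from collections import defaultdict

# Validation-weighted voting (simplified sketch): weight each model by its
# validation accuracy, then predict the class with the largest total weight.

def validation_weights(models, X_val, y_val):
    weights = []
    for predict in models:
        hits = sum(predict(x) == y for x, y in zip(X_val, y_val))
        weights.append(hits / len(y_val))
    return weights

def vote(models, weights, x):
    score = defaultdict(float)
    for predict, w in zip(models, weights):
        score[predict(x)] += w
    return max(score, key=score.get)

# three toy "models" (threshold classifiers and a constant predictor)
models = [lambda x: int(x > 0.5), lambda x: int(x > 0.3), lambda x: 1]
X_val, y_val = [0.1, 0.4, 0.6, 0.9], [0, 0, 1, 1]
w = validation_weights(models, X_val, y_val)
pred = vote(models, w, 0.4)
```

Because the weights come from the validation set, they adapt as the data distribution drifts: a model that degrades on recent data automatically loses influence in the vote.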
Rosewarne, P J; Wilson, J M; Svendsen, J C
2016-01-01
Metabolic rate is one of the most widely measured physiological traits in animals and may be influenced by both endogenous (e.g. body mass) and exogenous factors (e.g. oxygen availability and temperature). Standard metabolic rate (SMR) and maximum metabolic rate (MMR) are two fundamental physiological variables providing the floor and ceiling in aerobic energy metabolism. The total amount of energy available between these two variables constitutes the aerobic metabolic scope (AMS). A laboratory exercise aimed at an undergraduate level physiology class, which details the appropriate data acquisition methods and calculations to measure oxygen consumption rates in rainbow trout Oncorhynchus mykiss, is presented here. Specifically, the teaching exercise employs intermittent flow respirometry to measure SMR and MMR, derives AMS from the measurements and demonstrates how AMS is affected by environmental oxygen. Students' results typically reveal a decline in AMS in response to environmental hypoxia. The same techniques can be applied to investigate the influence of other key factors on metabolic rate (e.g. temperature and body mass). Discussion of the results develops students' understanding of the mechanisms underlying these fundamental physiological traits and the influence of exogenous factors. More generally, the teaching exercise outlines essential laboratory concepts in addition to metabolic rate calculations, data acquisition and unit conversions that enhance competency in quantitative analysis and reasoning. Finally, the described procedures are generally applicable to other fish species or aquatic breathers such as crustaceans (e.g. crayfish) and provide an alternative to using higher (or more derived) animals to investigate questions related to metabolic physiology. © 2016 The Fisheries Society of the British Isles.
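The quantitative step of the exercise reduces to a few operations on the measured oxygen-uptake (MO2) series: a common convention takes SMR as a low quantile of the measurements, MMR as the single highest rate, and AMS as their difference. The MO2 values below are invented for illustration (units assumed mg O2 per kg per h); exact SMR conventions vary between studies.

```python
import numpy as np

# SMR, MMR and aerobic metabolic scope (AMS) from an MO2 series:
# SMR ~ low quantile of the measurements, MMR = highest rate, AMS = MMR-SMR.
# Values are invented; units assumed mg O2 / kg / h.

mo2 = np.array([82, 75, 78, 71, 74, 90, 73, 310, 260, 120, 95, 80], float)
smr = float(np.quantile(mo2, 0.1))   # low quantile approximates SMR
mmr = float(mo2.max())               # maximum metabolic rate
ams = mmr - smr                      # aerobic metabolic scope
```

Under hypoxia the high MO2 values are suppressed, so MMR (and hence AMS) falls while SMR is largely unchanged, which is exactly the pattern students typically observe.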
A filtering technique for solving the advection equation in two-phase flow problems
International Nuclear Information System (INIS)
Devals, C.; Heniche, M.; Bertrand, F.; Tanguy, P.A.; Hayes, R.E.
2004-01-01
The aim of this work is to develop a numerical strategy for the simulation of two-phase flow in the context of chemical engineering applications. The finite element method has been chosen because of its flexibility to deal with complex geometries. One of the key points of two-phase flow simulation is to determine precisely the position of the interface between the two phases, which is an unknown of the problem. In this case, the interface can be tracked by the advection of the so-called color function. It is well known that the solution of the advection equation by most numerical schemes, including the Streamline Upwind Petrov-Galerkin (SUPG) method, may exhibit spurious oscillations. This work proposes an approach to filter out these oscillations by means of a change of variable that is efficient for both steady state and transient cases. First, the filtering technique will be presented in detail. Then, it will be applied to two-dimensional benchmark problems, namely, the advection skew to the mesh and the Zalesak's problems. (author)
Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution
Energy Technology Data Exchange (ETDEWEB)
Dall'Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhao, Changhong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zamzam, Ahmed S. [University of Minnesota]; Sidiropoulos, Nicholas D. [University of Minnesota]; Taylor, Josh A. [University of Toronto]
2018-01-12
This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
A new quantum inspired chaotic artificial bee colony algorithm for optimal power flow problem
International Nuclear Information System (INIS)
Yuan, Xiaohui; Wang, Pengtao; Yuan, Yanbin; Huang, Yuehua; Zhang, Xiaopan
2015-01-01
Highlights: • Quantum theory is introduced to artificial bee colony algorithm (ABC) to increase population diversity. • A chaotic local search operator is used to enhance local search ability of ABC. • Quantum inspired chaotic ABC method (QCABC) is proposed to solve optimal power flow. • The feasibility and effectiveness of the proposed QCABC is verified by examples. - Abstract: This paper proposes a new artificial bee colony algorithm with quantum theory and the chaotic local search strategy (QCABC), and uses it to solve the optimal power flow (OPF) problem. Under the quantum computing theory, the QCABC algorithm encodes each individual with quantum bits to form a corresponding quantum bit string. By determining each quantum bit's value, we can get the value of the individual. After the scout bee stage of the artificial bee colony algorithm, we begin the chaotic local search in the vicinity of the best individual found so far. Finally, the quantum rotation gate is used to process each quantum bit so that all individuals can update toward the direction of the best individual. The QCABC algorithm is carried out to deal with the OPF problem in the IEEE 30-bus and IEEE 118-bus standard test systems. The results of the QCABC algorithm are compared with those of other algorithms (artificial bee colony algorithm, genetic algorithm, particle swarm optimization algorithm). The comparison shows that the QCABC algorithm can effectively solve the OPF problem and obtains better optimal results than the other algorithms.
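The Q-bit encoding and rotation-gate update described above can be illustrated with generic quantum-inspired-evolutionary conventions (the angle parameterization and step size here are common QEA defaults, not the paper's exact settings): each Q-bit is an angle theta with P(bit = 1) = sin(theta)^2, and the rotation gate nudges theta toward the best individual's bits.

```python
import math
import random

def observe(thetas, rng):
    """Collapse a Q-bit string to a classical bit string."""
    return [1 if rng.random() < math.sin(t) ** 2 else 0 for t in thetas]

def rotate(thetas, best_bits, step=0.05 * math.pi):
    """Rotate each Q-bit toward the corresponding bit of the best individual."""
    return [t + (step if b == 1 else -step) for t, b in zip(thetas, best_bits)]

rng = random.Random(0)
thetas = [math.pi / 4] * 4          # start with P(1) = 0.5 for every bit
best = [1, 1, 0, 1]                 # best individual found so far
for _ in range(8):                  # repeated rotations concentrate probability
    thetas = rotate(thetas, best)
bits = observe(thetas, rng)         # an observed individual, now biased toward best
print(bits)
```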
Directory of Open Access Journals (Sweden)
Weidong Lei
2017-01-01
Full Text Available We aim at solving the cyclic scheduling problem with a single robot and flexible processing times in a robotic flow shop, which is a well-known optimization problem in advanced manufacturing systems. The objective of the problem is to find an optimal robot move sequence such that the throughput rate is maximized. We propose a hybrid algorithm based on the Quantum-Inspired Evolutionary Algorithm (QEA) and genetic operators for solving the problem. The algorithm integrates three different decoding strategies to convert quantum individuals into robot move sequences. The Q-gate is applied to update the states of Q-bits in each individual. Besides, crossover and mutation operators with adaptive probabilities are used to increase the population diversity. A repairing procedure is proposed to deal with infeasible individuals. Comparison results on both benchmark and randomly generated instances demonstrate that the proposed algorithm is more effective in solving the studied problem in terms of solution quality and computational time.
A dual exterior point simplex type algorithm for the minimum cost network flow problem
Directory of Open Access Journals (Sweden)
Geranis George
2009-01-01
Full Text Available A new dual simplex type algorithm for the Minimum Cost Network Flow Problem (MCNFP) is presented. The proposed algorithm belongs to a special 'exterior-point simplex type' category. Similarly to the classical network dual simplex algorithm (NDSA), this algorithm starts with a dual feasible tree-solution and reduces the primal infeasibility, iteration by iteration. However, contrary to the NDSA, the new algorithm does not always maintain a dual feasible solution. Instead, the new algorithm might reach a basic point (tree-solution) outside the dual feasible area (exterior point - dual infeasible tree).
DEFF Research Database (Denmark)
Hays, Graeme C.; Christensen, Asbjørn; Fossette, Sabrina
2014-01-01
The optimum path to follow when subjected to cross flows was first considered over 80 years ago by the German mathematician Ernst Zermelo, in the context of a boat being displaced by ocean currents, and has become known as the ‘Zermelo navigation problem’. However, the ability of migrating animals...... to solve this problem has received limited consideration, even though wind and ocean currents cause the lateral displacement of flyers and swimmers, respectively, particularly during long-distance journeys of 1000s of kilometres. Here, we examine this problem by combining long-distance, open-ocean marine...... not follow the optimum (Zermelo's) route. Even though adult marine turtles regularly complete incredible long-distance migrations, these vertebrates primarily rely on course corrections when entering neritic waters during the final stages of migration. Our work introduces a new perspective in the analysis...
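The cross-flow compensation at the heart of Zermelo's problem can be shown with a toy calculation: a swimmer with speed s in a uniform lateral current c drifts off-track when heading straight at the goal, but heading upstream by asin(c/s) cancels the drift. This constant-current special case is an illustration only, not the general Zermelo solution for spatially varying flows.

```python
import math

def drift_per_unit_time(speed, current, heading_rad):
    """Lateral drift rate; heading is measured from the direct-to-goal direction."""
    return current + speed * math.sin(heading_rad)

speed, current = 1.0, 0.4                       # illustrative values, current < speed
naive = drift_per_unit_time(speed, current, 0.0)                       # no correction
corrected = drift_per_unit_time(speed, current, -math.asin(current / speed))
print(naive, corrected)   # the corrected heading cancels the lateral displacement
```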
On a multigrid method for the coupled Stokes and porous media flow problem
Luo, P.; Rodrigo, C.; Gaspar, F. J.; Oosterlee, C. W.
2017-07-01
The multigrid solution of coupled porous media and Stokes flow problems is considered. The Darcy equation as the saturated porous medium model is coupled to the Stokes equations by means of appropriate interface conditions. We focus on an efficient multigrid solution technique for the coupled problem, which is discretized by finite volumes on staggered grids, giving rise to a saddle point linear system. Special treatment is required regarding the discretization at the interface. An Uzawa smoother is employed in multigrid, which is a decoupled procedure based on symmetric Gauss-Seidel smoothing for velocity components and a simple Richardson iteration for the pressure field. Since a relaxation parameter is part of a Richardson iteration, Local Fourier Analysis (LFA) is applied to determine the optimal parameters. Highly satisfactory multigrid convergence is reported, and, moreover, the algorithm performs well for small values of the hydraulic conductivity and fluid viscosity, that are relevant for applications.
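The pressure update inside the Uzawa smoother above is a Richardson iteration, whose convergence hinges on the relaxation parameter. The sketch below shows the bare iteration on a small generic SPD system (not the Stokes-Darcy saddle point system), using the classical optimum omega = 2/(lambda_min + lambda_max) that a Fourier-style analysis would select for this matrix.

```python
import numpy as np

# Generic SPD stand-in for the (preconditioned) pressure operator.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

lam = np.linalg.eigvalsh(A)
omega = 2.0 / (lam[0] + lam[-1])        # optimal Richardson relaxation parameter

x = np.zeros(2)
for _ in range(50):
    x = x + omega * (b - A @ x)         # Richardson step: relax on the residual

print(np.allclose(x, np.linalg.solve(A, b), atol=1e-8))
```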
Directory of Open Access Journals (Sweden)
MOHAMED KEZZAR
2015-08-01
Full Text Available In this research, an efficient computational technique, a modified decomposition method, was proposed and then successfully applied for solving the nonlinear problem of the two-dimensional flow of an incompressible viscous fluid between nonparallel plane walls. This method gives the nonlinear term Nu and the solution of the studied problem as a power series. The proposed iterative procedure gives, on the one hand, a computationally efficient formulation with an accelerated convergence rate and, on the other hand, finds the solution without any discretization, linearization, or restrictive assumptions. The comparison of our results with those of numerical treatment and other earlier works clearly shows the higher accuracy and efficiency of the Modified Decomposition Method.
On a boundary layer problem related to the gas flow in shales
Barenblatt, G. I.
2013-01-16
The development of gas deposits in shales has become a significant energy resource. Despite the already active exploitation of such deposits, a mathematical model for gas flow in shales does not exist. Such a model is crucial for optimizing the technology of gas recovery. In the present article, a boundary layer problem is formulated and investigated with respect to gas recovery from porous low-permeability inclusions in shales, which are the basic source of gas. Milton Van Dyke was a great master in the field of boundary layer problems. Dedicating this work to his memory, we want to express our belief that Van Dyke's profound ideas and fundamental book Perturbation Methods in Fluid Mechanics (Parabolic Press, 1975) will live on, also in fields very far from the subjects for which they were originally invented. © 2013 US Government.
Sen, Sedat
2018-01-01
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Directory of Open Access Journals (Sweden)
Amir Abbas Najafi
2009-01-01
Full Text Available Resource investment problem with discounted cash flows (RIPDCF) is a class of project scheduling problem. In the RIPDCF, the availability levels of the resources are considered decision variables, and the goal is to find a schedule such that the net present value of the project cash flows is optimized. In this paper, we consider a new RIPDCF in which tardiness of the project is permitted with a defined penalty. We mathematically formulated the problem and developed a heuristic method to solve it. The results of the performance analysis show that the proposed method is an effective solution approach to the problem.
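The net-present-value objective mentioned above is a simple discounted sum; the sketch below uses invented cash flows and discount rate, and notes how a tardiness penalty would enter the objective.

```python
def npv(cash_flows, rate):
    """Discount a list of (period, amount) cash flows to time zero."""
    return sum(amount / (1.0 + rate) ** t for t, amount in cash_flows)

# An outlay at t=0 followed by receipts; a tardiness penalty would simply
# appear as one more negative cash flow at the project's (late) finish time.
flows = [(0, -100.0), (1, 60.0), (2, 60.0)]
print(round(npv(flows, 0.10), 2))  # → 4.13
```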
Sample problem calculations related to two-phase flow transients in a PWR relief-piping network
International Nuclear Information System (INIS)
Shin, Y.W.; Wiedermann, A.H.
1981-03-01
Two sample problems related with the fast transients of water/steam flow in the relief line of a PWR pressurizer were calculated with a network-flow analysis computer code STAC (System Transient-Flow Analysis Code). The sample problems were supplied by EPRI and are designed to test computer codes or computational methods to determine whether they have the basic capability to handle the important flow features present in a typical relief line of a PWR pressurizer. It was found necessary to implement into the STAC code a number of additional boundary conditions in order to calculate the sample problems. This includes the dynamics of the fluid interface that is treated as a moving boundary. This report describes the methodologies adopted for handling the newly implemented boundary conditions and the computational results of the two sample problems. In order to demonstrate the accuracies achieved in the STAC code results, analytical solutions are also obtained and used as a basis for comparison
Directory of Open Access Journals (Sweden)
Enrique Castillo
2015-01-01
Full Text Available A state-of-the-art review of flow observability, estimation, and prediction problems in traffic networks is performed. Since mathematical optimization provides a general framework for all of them, an integrated approach is used to perform the analysis of these problems and consider them as different optimization problems whose data, variables, constraints, and objective functions are the main elements that characterize the problems proposed by different authors. For example, counted, scanned or “a priori” data are the most common data sources; conservation laws, flow nonnegativity, link capacity, flow definition, observation, flow propagation, and specific model requirements form the most common constraints; and least squares, likelihood, possible relative error, mean absolute relative error, and so forth constitute the bases for the objective functions or metrics. The high number of possible combinations of these elements justifies the existence of a wide collection of methods for analyzing static and dynamic situations.
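One of the optimization problems surveyed above, least-squares flow estimation under a conservation-law constraint, can be sketched on a tiny invented network: links 1 and 2 enter a node that link 3 leaves, so x1 + x2 = x3 must hold exactly while the estimate stays close to the noisy counts. This is a minimal illustration, not any specific method from the review.

```python
import numpy as np

counts = np.array([10.2, 4.9, 16.0])      # noisy counts on the 3 links (invented)

# Solve  min ||x - counts||^2  s.t.  x1 + x2 - x3 = 0  via the KKT system
#   [ I  A^T ] [x]   [counts]
#   [ A   0  ] [l] = [  0   ]
A = np.array([[1.0, 1.0, -1.0]])          # node conservation constraint
I = np.eye(3)
kkt = np.block([[I, A.T], [A, np.zeros((1, 1))]])
rhs = np.concatenate([counts, [0.0]])
x = np.linalg.solve(kkt, rhs)[:3]
print(np.round(x, 3))                     # estimate now satisfies conservation
```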
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-06-01
In numerical modeling of subsurface flow and transport problems, formation properties may not be deterministically characterized, which leads to uncertainty in simulation results. In this study, we propose a sparse grid collocation method, which adopts nested quadrature rules with delay and transformation to quantify the uncertainty of model solutions. We show that the nested Kronrod-Patterson-Hermite quadrature is more efficient than the unnested Gauss-Hermite quadrature. We compare the convergence rates of various quadrature rules including the domain truncation and domain mapping approaches. To further improve accuracy and efficiency, we present a delayed process in selecting quadrature nodes and a transformed process for approximating unsmooth or discontinuous solutions. The proposed method is tested by an analytical function and in one-dimensional single-phase and two-phase flow problems with different spatial variances and correlation lengths. An additional example is given to demonstrate its applicability to three-dimensional black-oil models. It is found from these examples that the proposed method provides a promising approach for obtaining satisfactory estimation of the solution statistics and is much more efficient than the Monte-Carlo simulations.
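The efficiency contrast the study draws between quadrature and Monte Carlo can be seen on a one-dimensional toy: estimating E[f(X)] for X ~ N(0,1) with Gauss-Hermite quadrature versus plain sampling. The smooth integrand is invented for illustration, and the paper's nested/delayed refinements are not reproduced here.

```python
import numpy as np

f = lambda x: np.exp(-0.5 * x**2)         # smooth test integrand
exact = 1.0 / np.sqrt(2.0)                # E[f(X)] = 1/sqrt(2) for X ~ N(0,1)

# Gauss-Hermite uses weight exp(-t^2); substitute x = sqrt(2)*t for N(0,1).
t, w = np.polynomial.hermite.hermgauss(8)
gh = np.sum(w * f(np.sqrt(2.0) * t)) / np.sqrt(np.pi)

rng = np.random.default_rng(0)
mc = f(rng.standard_normal(10_000)).mean()

print(abs(gh - exact), abs(mc - exact))   # quadrature is typically far more accurate
```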
Directory of Open Access Journals (Sweden)
Wang W
2016-12-01
Full Text Available Wei Wang, Mengshuang Xie, Shuang Dou, Liwei Cui, Wei Xiao Department of Pulmonary Medicine, Qilu Hospital, Shandong University, Jinan, People’s Republic of China Background: In a previous study, we demonstrated that asthma patients with signs of emphysema on quantitative computed tomography (CT) fulfill the diagnosis of asthma-COPD overlap syndrome (ACOS). However, quantitative CT measurements of emphysema are not routinely available for patients with chronic airway disease, which limits their application. Spirometry is a widely used examination tool in clinical settings and shows emphysema as a sharp angle in the maximum expiratory flow volume (MEFV) curve, called the “angle of collapse (AC)”. The aim of this study was to investigate the value of the AC in the diagnosis of emphysema and ACOS. Methods: This study included 716 participants: 151 asthma patients, 173 COPD patients, and 392 normal control subjects. All the participants underwent pulmonary function tests. COPD and asthma patients also underwent quantitative CT measurements of emphysema. The AC was measured using computer models based on Matlab software. The value of the AC in the diagnosis of emphysema and ACOS was evaluated using receiver-operating characteristic (ROC) curve analysis. Results: The AC of COPD patients was significantly lower than that of asthma patients and control subjects. The AC was significantly negatively correlated with emphysema index (EI; r=-0.666, P<0.001), and patients with high EI had a lower AC than those with low EI. The ROC curve analysis showed that the AC had higher diagnostic efficiency for high EI (area under the curve =0.876) than did other spirometry parameters. In asthma patients, using the AC ≤137° as a surrogate criterion for the diagnosis of ACOS, the sensitivity and specificity were 62.5% and 89.1%, respectively. Conclusion: The AC on the MEFV curve quantified by computer models correlates with the extent of emphysema. The AC may become a
MPSalsa: a finite element computer program for reacting flow problems. Part 2 - user's guide
Energy Technology Data Exchange (ETDEWEB)
Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H. [and others]
1996-09-01
This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.
A new cut-based algorithm for the multi-state flow network reliability problem
International Nuclear Information System (INIS)
Yeh, Wei-Chang; Bae, Changseok; Huang, Chia-Ling
2015-01-01
Many real-world systems can be modeled as multi-state network systems in which reliability can be derived in terms of the lower bound points of level d, called d-minimal cuts (d-MCs). This study proposes a new method to find and verify obtained d-MCs, with simple and useful newly found properties, for the multi-state flow network reliability problem. The proposed algorithm runs in O(mσp) time, which represents a significant improvement over the previous O(mp²σ) time bound based on max-flow/min-cut, where p, σ and m denote the number of MCs, d-MC candidates and edges, respectively. The proposed algorithm also conquers the weakness of some existing methods, which failed to remove duplicate d-MCs in special cases. A step-by-step example is given to demonstrate how the proposed algorithm locates and verifies all d-MC candidates. As evidence of the utility of the proposed approach, we present extensive computational results on 20 benchmark networks in another example. The computational results compare favorably with a previously developed algorithm in the literature. - Highlights: • A new method is proposed to find all d-MCs for the multi-state flow networks. • The proposed method can prevent the generation of d-MC duplicates. • The proposed method is simpler and more efficient than the best-known algorithms.
Scale problems in assessment of hydrogeological parameters of groundwater flow models
Nawalany, Marek; Sinicyn, Grzegorz
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
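A numerical taste of the upscaling discussed above: for perfectly layered media, the effective hydraulic conductivity is bounded by the harmonic mean (flow across the layers) and the arithmetic mean (flow along the layers), with the geometric mean often used in between. The sample-scale conductivities below are invented for illustration.

```python
def arithmetic_mean(ks):
    return sum(ks) / len(ks)

def harmonic_mean(ks):
    return len(ks) / sum(1.0 / k for k in ks)

def geometric_mean(ks):
    prod = 1.0
    for k in ks:
        prod *= k
    return prod ** (1.0 / len(ks))

k = [1.0, 10.0, 100.0]   # m/day, three equally thick layers (illustrative)
# harmonic <= geometric <= arithmetic: the classic upscaling bounds
print(harmonic_mean(k), geometric_mean(k), arithmetic_mean(k))
```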
A finite-element model for moving contact line problems in immiscible two-phase flow
Kucala, Alec
2017-11-01
Accurate modeling of moving contact line (MCL) problems is imperative in predicting capillary pressure vs. saturation curves, permeability, and preferential flow paths for a variety of applications, including geological carbon storage (GCS) and enhanced oil recovery (EOR). The macroscale movement of the contact line is dependent on the molecular interactions occurring at the three-phase interface, however most MCL problems require resolution at the meso- and macro-scale. A phenomenological model must be developed to account for the microscale interactions, as resolving both the macro- and micro-scale would render most problems computationally intractable. Here, a model for the moving contact line is presented as a weak forcing term in the Navier-Stokes equation and applied directly at the location of the three-phase interface point. The moving interface is tracked with the level set method and discretized using the conformal decomposition finite element method (CDFEM), allowing for the surface tension and the wetting model to be computed at the exact interface location. A variety of verification test cases for simple two- and three-dimensional geometries are presented to validate the current MCL model, which can exhibit grid independence when a proper scaling for the slip length is chosen. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
Solving groundwater flow problems by conjugate-gradient methods and the strongly implicit procedure
Hill, Mary C.
1990-01-01
The performance of the preconditioned conjugate-gradient method with three preconditioners is compared with the strongly implicit procedure (SIP) using a scalar computer. The preconditioners considered are the incomplete Cholesky (ICCG) and the modified incomplete Cholesky (MICCG), which require the same computer storage as SIP as programmed for a problem with a symmetric matrix, and a polynomial preconditioner (POLCG), which requires less computer storage than SIP. Although POLCG is usually used on vector computers, it is included here because of its small storage requirements. In this paper, published comparisons of the solvers are evaluated, all four solvers are compared for the first time, and new test cases are presented to provide a more complete basis by which the solvers can be judged for typical groundwater flow problems. Based on nine test cases, the following conclusions are reached: (1) SIP is actually as efficient as ICCG for some of the published, linear, two-dimensional test cases that were reportedly solved much more efficiently by ICCG; (2) SIP is more efficient than other published comparisons would indicate when common convergence criteria are used; and (3) for problems that are three-dimensional, nonlinear, or both, and for which common convergence criteria are used, SIP is often more efficient than ICCG, and is sometimes more efficient than MICCG.
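The preconditioned conjugate-gradient iteration compared above can be sketched in a few lines. A Jacobi (diagonal) preconditioner stands in here purely to keep the sketch short; it is not one of the preconditioners the study tested (ICCG, MICCG, POLCG), and the 1D Poisson-like matrix is a stand-in for a groundwater flow system.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradients for SPD A; returns (x, iterations)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    for it in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, it + 1
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, max_iter

n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson-like SPD matrix
b = np.ones(n)
M_inv = np.diag(1.0 / np.diag(A))                        # Jacobi preconditioner
x, iters = pcg(A, b, M_inv)
print(iters, np.allclose(A @ x, b, atol=1e-8))
```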
Extension of CFD Codes Application to Two-Phase Flow Safety Problems - Phase 3
International Nuclear Information System (INIS)
Bestion, D.; Anglart, H.; Mahaffy, J.; Lucas, D.; Song, C.H.; Scheuerer, M.; Zigh, G.; Andreani, M.; Kasahara, F.; Heitsch, M.; Komen, E.; Moretti, F.; Morii, T.; Muehlbauer, P.; Smith, B.L.; Watanabe, T.
2014-11-01
The Writing Group 3 on the extension of CFD to two-phase flow safety problems was formed following recommendations made at the 'Exploratory Meeting of Experts to Define an Action Plan on the Application of Computational Fluid Dynamics (CFD) Codes to Nuclear Reactor Safety Problems' held in Aix-en-Provence, in May 2002. Extension of CFD codes to two-phase flow offers significant potential for the improvement of safety investigations, by giving some access to smaller scale flow processes which were not explicitly described by present tools. Using such tools as part of a safety demonstration may bring a better understanding of physical situations, more confidence in the results, and an estimation of safety margins. The increasing computer performance allows a more extensive use of 3D modelling of two-phase thermal hydraulics with finer nodalization. However, models are not as mature as in single phase flow and a lot of work has still to be done on the physical modelling and numerical schemes in such two-phase CFD tools. The Writing Group listed and classified the NRS problems where extension of CFD to two-phase flow may bring real benefit, and classified different modelling approaches in a first report (Bestion et al., 2006). First ideas were reported about the specification and analysis of needs in terms of validation and verification. It was then suggested to focus further activity on a limited number of NRS issues with a high priority and a reasonable chance to be successful in a reasonable period of time. The WG3-step 2 was decided with the following objectives: - selection of a limited number of NRS issues having a high priority and for which two-phase CFD has a reasonable chance to be successful in a reasonable period of time; - identification of the remaining gaps in the existing approaches using two-phase CFD for each selected NRS issue; - review of the existing data base for validation of two-phase CFD application to the selected NRS problems.
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using ``snapshots'' from the parameter reduced posterior, and the discrete
Directory of Open Access Journals (Sweden)
Nader Ghaffari-Nasab
2010-07-01
During the past two decades there has been increasing interest in the permutation flow shop under different types of objective functions, such as minimizing the makespan, the weighted mean flow-time, etc. The permutation flow shop is formulated as a mixed integer program and is classified as an NP-hard problem. Therefore, a direct solution is not available, and meta-heuristic approaches need to be used to find near-optimal solutions. In this paper, we present a new discrete firefly meta-heuristic to minimize the makespan for the permutation flow shop scheduling problem. The results of the proposed method are compared with an existing ant colony optimization technique. The preliminary results indicate that the new method performs better than the ant colony for some well-known benchmark problems.
International Nuclear Information System (INIS)
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2014-01-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
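The ensemble-of-directional-derivatives idea behind SEM can be sketched with a simplified Monte Carlo estimator (an illustrative smoothed finite-difference scheme, not the authors' exact SEM; the function `ensemble_gradient` and its parameters are assumptions for this sketch):

```python
import random

def ensemble_gradient(f, x, n_samples=4000, eps=1e-4, seed=7):
    """Estimate grad f(x) from forward evaluations only: for standard
    Gaussian directions d, E[d * (f(x + eps*d) - f(x)) / eps] -> grad f(x).
    The simulator f is treated as a black box; no adjoint code is needed."""
    rng = random.Random(seed)
    dim = len(x)
    g = [0.0] * dim
    fx = f(x)
    for _ in range(n_samples):
        d = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        slope = (f([xi + eps * di for xi, di in zip(x, d)]) - fx) / eps
        for i in range(dim):
            g[i] += slope * d[i]
    return [gi / n_samples for gi in g]
```

For a cheap test function the estimate approaches the analytic gradient as the ensemble grows; in the HNS setting each sample would instead be one forward run of the subsurface flow simulator.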
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2017-11-01
This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
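The largest order value (LOV) rule mentioned above maps a continuous EDA vector to a discrete job permutation by ranking components; a minimal sketch (the function name is ours, not from the article):

```python
def largest_order_value(vector):
    """Convert a continuous vector to a job permutation: the job whose
    component is largest is scheduled first, then the next largest, etc.
    Ties are broken by job index."""
    return [j for _, j in sorted((-v, j) for j, v in enumerate(vector))]

perm = largest_order_value([0.3, 1.2, -0.5, 0.9])
# job 1 has the largest value, then jobs 3, 0, 2 -> [1, 3, 0, 2]
```

This keeps the probabilistic model entirely in continuous space while the fitness is always evaluated on a valid permutation.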
Criteria for the reliability of numerical approximations to the solution of fluid flow problems
International Nuclear Information System (INIS)
Foias, C.
1986-01-01
The numerical approximation of the solutions of fluid flow models is a difficult problem in many areas of energy research. In all numerical methods implementable on digital computers, a basic question is whether the number N of elements (Galerkin modes, finite-difference cells, finite elements, etc.) is sufficient to describe the long-time behavior of the exact solutions. It was shown, using several approaches, that some of the estimates of N based on physical intuition are rigorously valid under very general conditions and follow directly from the mathematical theory of the Navier-Stokes equations. Among the mathematical approaches to these estimates, the most promising (which can be, and has already been, applied to many other dissipative partial differential systems) consists in giving upper estimates of the fractal dimension of the attractor associated with one (or all) solution(s) of the respective partial differential equations. 56 refs
High-order multi-implicit spectral deferred correction methods for problems of reactive flow
International Nuclear Information System (INIS)
Bourlioux, Anne; Layton, Anita T.; Minion, Michael L.
2003-01-01
Models for reacting flow are typically based on advection-diffusion-reaction (A-D-R) partial differential equations. Many practical cases correspond to situations where the relevant time scales associated with each of the three sub-processes can be widely different, leading to disparate time-step requirements for robust and accurate time-integration. In particular, interesting regimes in combustion correspond to systems in which diffusion and reaction are much faster processes than advection. The numerical strategy introduced in this paper is a general procedure to account for this time-scale disparity. The proposed methods are high-order multi-implicit generalizations of spectral deferred correction methods (MISDC methods), constructed for the temporal integration of A-D-R equations. Spectral deferred correction methods compute a high-order approximation to the solution of a differential equation by using a simple, low-order numerical method to solve a series of correction equations, each of which increases the order of accuracy of the approximation. The key feature of MISDC methods is their flexibility in handling several sub-processes implicitly but independently, while avoiding the splitting errors present in traditional operator-splitting methods and also allowing for different time steps for each process. The stability, accuracy, and efficiency of MISDC methods are first analyzed using a linear model problem and the results are compared to semi-implicit spectral deferred correction methods. Furthermore, numerical tests on simplified reacting flows demonstrate the expected convergence rates for MISDC methods of orders three, four, and five. The gain in efficiency by independently controlling the sub-process time steps is illustrated for nonlinear problems, where reaction and diffusion are much stiffer than advection. Although the paper focuses on this specific time-scales ordering, the generalization to any ordering combination is straightforward
Simulation of Thermal Flow Problems via a Hybrid Immersed Boundary-Lattice Boltzmann Method
Directory of Open Access Journals (Sweden)
J. Wu
2012-01-01
A hybrid immersed boundary-lattice Boltzmann method (IB-LBM) is presented in this work to simulate thermal flow problems. In the current approach, the flow field is resolved by using our recently developed boundary condition-enforced IB-LBM (Wu and Shu, 2009). The no-slip boundary condition on the solid boundary is enforced in the simulation. At the same time, to capture the temperature development, the conventional energy equation is resolved. To model the effect of the immersed boundary on the temperature field, a heat source term is introduced. Different from previous studies, the heat source term is set as unknown rather than predetermined. Inspired by the idea in Wu and Shu (2009), the unknown is calculated in such a way that the temperature at the boundary interpolated from the corrected temperature field accurately satisfies the thermal boundary condition. In addition, based on the resolved temperature correction, an efficient way to compute the local and average Nusselt numbers is also proposed in this work. Compared with the traditional implementation, no approximation of temperature gradients is required. To validate the present method, numerical simulations of forced convection are carried out. The obtained results show good agreement with data in the literature.
Some free boundary problems in potential flow regime using a level set based method
Energy Technology Data Exchange (ETDEWEB)
Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.
2008-12-09
Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique to follow free boundaries which undergo topological changes. A challenging class of problems in this context is those related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case of potential flow models with moving boundaries. Moreover, the fluid front will possibly be carrying some material substance which will diffuse in the front and be advected by the front velocity, as for example the use of surfactants to lower surface tension. We present a Level Set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of fluid mechanics we present in this work one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and compare the level set based algorithm with previous front tracking models.
Energy Technology Data Exchange (ETDEWEB)
Chang, Sung Pil [Inha University, Incheon (Korea, Republic of)
2006-04-15
This paper describes the demonstration of successful fabrication and initial characterization of micromachined pressure sensors and micromachined jets (microjets) fabricated for use in macro flow control and other applications. In this work, the microfabrication technology was investigated to create a micromachined fluidic control system with a goal of application in practical fluids problems, such as UAV (Unmanned Aerial Vehicle)-scale aerodynamic control. Approaches of this work include: (1) the development of suitable micromachined synthetic jets (microjets) as actuators, which obviate the need to physically extend micromachined structures into an external flow; and (2) a non-silicon alternative micromachining fabrication technology based on metallic substrates and lamination (in addition to traditional MEMS technologies) which will allow the realization of larger scale, more robust structures and larger array active areas for fluidic systems. As an initial study, an array of MEMS pressure sensors and an array of MEMS modulators for orifice-based control of microjets have been fabricated and characterized. Both pressure sensors and modulators have been built using stainless steel as a substrate and a combination of lamination and traditional micromachining processes as fabrication technologies.
Weyer, K. U.
2017-12-01
Coastal groundwater flow investigations at the Biscayne Bay, south of Miami, Florida, gave rise to the concept of density-driven flow of seawater into coastal aquifers creating a saltwater wedge. Within that wedge, convection-driven return flow of seawater and a dispersion zone were assumed by Cooper et al. (1964) to be the cause of the Biscayne aquifer 'sea water wedge'. This conclusion was based on the chloride distribution within the aquifer and on an analytical model concept assuming convection flow within a confined aquifer, without taking non-chemical field data into consideration. This concept was later labelled the 'Henry Problem', which any numerical variable density flow program must be able to simulate to be considered acceptable. Both 'density-driven flow' and Tothian 'groundwater flow systems' (with or without variable density conditions) are driven by gravitation. The difference between the two is the boundary conditions: 'density-driven flow' occurs under hydrostatic boundary conditions, while Tothian 'groundwater flow systems' occur under hydrodynamic boundary conditions. Revisiting the Cooper et al. (1964) publication with its record of piezometric field data (heads) showed that the so-called sea water wedge has been caused by discharging deep saline groundwater driven by gravitational flow and not by denser sea water. Density-driven flow of seawater into the aquifer was not reflected in the head measurements for low and high tide conditions, which had been taken contemporaneously with the chloride measurements. These head measurements had not been included in the flow interpretation. The very same head measurements indicated a clear dividing line between shallow local fresh groundwater flow and saline deep groundwater flow, without the existence of a dispersion zone or a convection cell. The Biscayne situation emphasizes the need for any chemical interpretation of flow patterns to be supported by head data as energy indicators of flow fields.
Directory of Open Access Journals (Sweden)
Aang Nuryaman
2012-11-01
The governing equations describing the methane oxidation process in a reverse flow reactor are given by a set of convection-diffusion equations with a nonlinear reaction term, where temperature and methane conversion are the dependent variables. In this study, the process is assumed to follow a one-dimensional pseudo-homogeneous model and to take place with a reaction rate for which the whole reactor remains workable, so that the reaction can proceed at a fixed temperature. Under this condition, we restrict ourselves to solving the equations for the conversion only. From the available data, it turns out that the ratio of the diffusion term to the reaction term is small. Hence, this ratio is considered as a small parameter in our model, and this leads to a singular perturbation problem. With a small parameter in front of the highest order term, numerical difficulties arise. Here, we present an analytical solution by means of matched asymptotic expansions. Results show that, up to and including the first order of approximation, the solution is in agreement with the exact and numerical solutions of the boundary value problem.
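The matched-expansion machinery can be illustrated on a standard linear model problem (a textbook example, not the reactor equations themselves), with the small parameter multiplying the highest derivative:

```latex
\varepsilon y'' + y' = 1, \qquad y(0) = 0, \quad y(1) = 2, \qquad 0 < \varepsilon \ll 1 .
```

The outer expansion ($\varepsilon \to 0$) gives $y' = 1$, and matching the boundary condition at $x=1$ yields $y_{\mathrm{out}} = x + 1$. In the inner (boundary-layer) variable $X = x/\varepsilon$ the leading-order equation is $Y_{XX} + Y_X = 0$, so $Y = A\,(1 - e^{-X})$, and matching to the outer limit fixes $A = 1$. The composite solution

```latex
y(x) \approx x + 1 - e^{-x/\varepsilon}
```

satisfies both boundary conditions up to exponentially small terms, exactly the structure exploited in the abstract's first-order approximation.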
Analytical solution to the circularity problem in the discounted cash flow valuation framework
Directory of Open Access Journals (Sweden)
Felipe Mejía-Peláez
2011-12-01
In this paper we propose an analytical solution to the circularity problem between value and the cost of capital. Our solution is derived starting from a central principle of finance that relates value today to value, cash flow, and the discount rate for the next period. We present a general formulation without circularity for the equity value (E), the cost of levered equity (Ke), the levered firm value (V), and the weighted average cost of capital (WACC). We furthermore compare the results obtained from these formulas with the results of the application of the Adjusted Present Value approach (no circularity) and the iterative solution of circularity based upon the iteration feature of a spreadsheet, concluding that all methods yield exactly the same answer. The advantage of this solution is that it avoids problems such as using manual methods (i.e., the popular "Rolling WACC") that ignore the circularity issue, setting a target leverage (usually constant) with the inconsistencies that result from it, the wrong use of book values, or attributing the discrepancies in values to rounding errors.
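The circularity itself (WACC depends on market-value weights, which depend on the value being computed) can be seen in the spreadsheet-style iterative solution the paper compares against. A minimal sketch for a perpetuity with fixed debt, holding Ke constant for simplicity (the paper's formulation lets Ke vary with leverage, so these inputs are illustrative assumptions):

```python
def firm_value(fcf, ke, kd, tax, debt, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for the value <-> WACC circularity:
    V = FCF / WACC,  WACC = Ke*(E/V) + Kd*(1-T)*(D/V),  E = V - D."""
    v = fcf / ke  # initial guess: all-equity value
    for _ in range(max_iter):
        e = v - debt
        wacc = ke * e / v + kd * (1.0 - tax) * debt / v
        v_new = fcf / wacc
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v
```

With FCF = 100, Ke = 12%, Kd = 6%, T = 30%, D = 300, the fixed point can be checked by hand: V*WACC = 0.12 V - 23.4 = 100, so V = 123.4/0.12. The analytical formulation in the paper produces this value without any iteration.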
The use of wavelet transforms in the solution of two-phase flow problems
International Nuclear Information System (INIS)
Moridis, G.J.; Nikolaou, M.; You, Yong
1994-10-01
In this paper we present the use of wavelets to solve the nonlinear Partial Differential Equation (PDE) of two-phase flow in one dimension. Wavelet transforms allow a drastically different approach to the discretization of space. In contrast to the traditional trigonometric basis functions, wavelets approximate a function not by cancellation but by placement of wavelets at appropriate locations. When an abrupt change, such as a shock wave or a spike, occurs in a function, only local coefficients in a wavelet approximation will be affected. The unique feature of wavelets is their Multi-Resolution Analysis (MRA) property, which allows seamless investigation at any spatial resolution. The use of wavelets is tested in the solution of the one-dimensional Buckley-Leverett problem against analytical solutions and solutions obtained from standard numerical models. Two classes of wavelet bases (Daubechies and Chui-Wang) and two methods (Galerkin and collocation) are investigated. We determine that Chui-Wang wavelets and a collocation method provide the optimum wavelet solution for this type of problem. Increasing the resolution level improves the accuracy of the solution, but the order of the basis function seems to be far less important. Our results indicate that wavelet transforms are an effective and accurate method which does not suffer from oscillations or numerical smearing in the presence of steep fronts.
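The locality claim, that a spike perturbs only a handful of wavelet coefficients, is easy to demonstrate with the simplest wavelet basis (Haar, used here for illustration; the paper itself uses Daubechies and Chui-Wang bases):

```python
import math

def haar_transform(x):
    """Orthonormal Haar wavelet decomposition of a length-2^k signal.
    Returns [scaling coefficient] followed by detail coefficients,
    coarsest level first."""
    s = list(x)
    details = []
    while len(s) > 1:
        avg = [(s[2 * i] + s[2 * i + 1]) / math.sqrt(2) for i in range(len(s) // 2)]
        det = [(s[2 * i] - s[2 * i + 1]) / math.sqrt(2) for i in range(len(s) // 2)]
        details = det + details  # prepend so coarser levels come first
        s = avg
    return s + details
```

For a flat signal every detail coefficient is zero; adding a single spike to a length-16 signal perturbs exactly one detail coefficient per level (4 in total), whereas a trigonometric basis would spread the spike over every mode.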
Energy Technology Data Exchange (ETDEWEB)
Jacob Raglend, I. [School of Electrical Sciences, Noorul Islam University, Kumaracoil 629 180 (India); Veeravalli, Sowjanya; Sailaja, Kasanur; Sudheera, B. [School of Electrical Sciences, Vellore Institute of Technology, Vellore 632 004 (India); Kothari, D.P. [FNAE, FNASC, SMIEEE, Vellore Institute of Technology University, Vellore 632 014 (India)
2010-07-15
A comparative study has been made on the solutions obtained using combined economic emission dispatch (CEED) problem considering line flow constraints using different intelligent techniques for the regulated power system to ensure a practical, economical and secure generation schedule. The objective of the paper is to minimize the total production cost of the power generation. Economic load dispatch (ELD) and economic emission dispatch (EED) have been applied to obtain optimal fuel cost of generating units. Combined economic emission dispatch (CEED) is obtained by considering both the economic and emission objectives. This bi-objective CEED problem is converted into single objective function using price penalty factor approach. In this paper, intelligent techniques such as genetic algorithm (GA), evolutionary programming (EP), particle swarm optimization (PSO), differential evolution (DE) are applied to obtain CEED solutions for the IEEE 30-bus system and 15-unit system. This proposed algorithm introduces an efficient CEED approach that obtains the minimum operating cost satisfying unit, emission and network constraints. The proposed algorithm has been tested on two sample systems viz the IEEE 30-bus system and a 15-unit system. The results obtained by the various artificial intelligent techniques are compared with respect to the solution time, total production cost and convergence criteria. The solutions obtained are quite encouraging and useful in the economic emission environment. The algorithm and simulation are carried out using Matlab software. (author)
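The price penalty factor conversion of the bi-objective CEED problem into a single objective can be sketched for quadratic cost and emission curves. The max-max penalty factor and the two-unit equal-incremental-cost dispatch below are common textbook forms used here for illustration; the coefficients are made up, and unit limits and network constraints from the paper are ignored:

```python
def price_penalty_factor(fuel, emis, p_max):
    """Max-max penalty factor h = F(Pmax)/E(Pmax) for quadratic curves
    given as (a, b, c) meaning a + b*P + c*P**2."""
    a, b, c = fuel
    al, be, ga = emis
    return (a + b * p_max + c * p_max**2) / (al + be * p_max + ga * p_max**2)

def combined_coeffs(fuel, emis, h):
    """Single CEED objective F(P) + h*E(P), coefficient-wise."""
    return tuple(f + h * e for f, e in zip(fuel, emis))

def dispatch_two_units(u1, u2, demand):
    """Split `demand` between two quadratic-cost units so that their
    incremental costs are equal (unconstrained minimum)."""
    _, b1, c1 = u1
    _, b2, c2 = u2
    p1 = (b2 - b1 + 2.0 * c2 * demand) / (2.0 * (c1 + c2))
    return p1, demand - p1
```

At the returned dispatch both units have the same marginal combined cost, which is the optimality condition for this reduced problem; the intelligent techniques in the paper handle the full constrained, multi-unit version.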
On the modelling of compressible inviscid flow problems using AUSM schemes
Directory of Open Access Journals (Sweden)
Hajžman M.
2007-11-01
During the last decades, upwind schemes have become a popular method in the field of computational fluid dynamics. Although they are only first order accurate, AUSM (Advection Upstream Splitting Method) schemes proved to be well suited for the modelling of compressible flows due to their robustness and ability to capture shock discontinuities. In this paper, we review the composition of the AUSM flux-vector splitting scheme and its improved version, denoted AUSM+, proposed by Liou, for the solution of the Euler equations. Mach number splitting functions operating with values from adjacent cells are used to determine the numerical convective fluxes, and pressure splitting is used for the evaluation of the numerical pressure fluxes. Both versions of the AUSM scheme are applied to test problems such as the one-dimensional shock tube problem and the three-dimensional GAMM channel. Features of the schemes are discussed in comparison with some explicit central schemes of first order accuracy (Lax-Friedrichs) and of second order accuracy (MacCormack).
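The Mach number splitting at the heart of AUSM uses the standard van Leer-type polynomials in the subsonic range and plain upwinding supersonically; a minimal sketch (interface Mach assembly, pressure splitting, and the AUSM+ refinements are omitted):

```python
def mach_split(M):
    """AUSM Mach number splitting: returns (M+, M-) with M+ + M- == M.
    Subsonic: quadratic polynomials; supersonic: fully one-sided."""
    if abs(M) >= 1.0:
        m_plus = 0.5 * (M + abs(M))
        m_minus = 0.5 * (M - abs(M))
    else:
        m_plus = 0.25 * (M + 1.0) ** 2
        m_minus = -0.25 * (M - 1.0) ** 2
    return m_plus, m_minus
```

The interface Mach number is then formed from adjacent cells as M_1/2 = M+(M_L) + M-(M_R), and the convective flux is upwinded according to its sign, which is what gives the scheme its shock-capturing robustness.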
A hybrid flow shop model for an ice cream production scheduling problem
Directory of Open Access Journals (Sweden)
Imma Ribas Vila
2009-07-01
In this paper we address the scheduling problem that comes from an ice cream manufacturing company. This production system can be modelled as a three-stage no-wait hybrid flow shop with batch-dependent setup costs. To contribute to reducing the gap between theory and practice, we have considered the real constraints and the criteria used by planners. The problem considered has been formulated as a mixed integer program. Further, two competitive heuristic procedures have been developed, and one of them is proposed for scheduling in the ice cream factory.
Directory of Open Access Journals (Sweden)
Liling Sun
2015-01-01
An improved multiobjective artificial bee colony (ABC) algorithm based on K-means clustering, called CMOABC, is proposed. To speed up the convergence rate of the canonical MOABC, the way information is communicated in the employed bees' phase is modified. To maintain population diversity, a multiswarm technique based on K-means clustering is employed to decompose the population into many clusters. Because each subcomponent evolves separately, after every specific number of iterations the population is reclustered to facilitate information exchange among different clusters. Application of the new CMOABC to several multiobjective benchmark functions shows a marked improvement in performance over the fast nondominated sorting genetic algorithm (NSGA-II), the multiobjective particle swarm optimizer (MOPSO), and the multiobjective ABC (MOABC). Finally, the CMOABC is applied to solve the real-world optimal power flow (OPF) problem, which considers cost, loss, and emission impacts as the objective functions. The 30-bus IEEE test system is presented to illustrate the application of the proposed algorithm. The simulation results demonstrate that, compared to NSGA-II, MOPSO, and MOABC, the proposed CMOABC is superior for solving the OPF problem in terms of optimization accuracy.
Salomone, Horacio D.; Olivieri, Néstor A.; Véliz, Maximiliano E.; Raviola, Lisandro A.
2018-05-01
In the context of fluid mechanics courses, it is customary to consider the problem of a sphere falling under the action of gravity inside a viscous fluid. Under suitable assumptions, this phenomenon can be modelled using Stokes' law and is routinely reproduced in teaching laboratories to determine terminal velocities and fluid viscosities. In many cases, however, the measured physical quantities show important deviations with respect to the predictions deduced from the simple Stokes' model, and the causes of these apparent 'anomalies' (for example, whether the flow is laminar or turbulent) are seldom discussed in the classroom. On the other hand, there are various variable-mass problems that students tackle during elementary mechanics courses and which are discussed in many textbooks. In this work, we combine both kinds of problems and analyse, both theoretically and experimentally, the evolution of a system composed of a sphere pulled by a chain of variable length inside a tube filled with water. We investigate the effects of different forces acting on the system such as weight, buoyancy, viscous friction and drag force. By means of a sequence of mathematical models of increasing complexity, we obtain a progressive fit that accounts for the experimental data. The contrast between the various models exposes the strengths and weaknesses of each one. The proposed experience can be useful for integrating concepts of elementary mechanics and fluids, and is suitable as laboratory practice, stressing the importance of the experimental validation of theoretical models and showing the model-building processes in a didactic framework.
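The laminar-or-turbulent question raised above can be settled quantitatively: Stokes' law is valid only when the particle Reynolds number is small. A quick check using the standard formulas (the sphere and fluid properties below are illustrative values, not data from the article):

```python
import math

def stokes_terminal_velocity(radius, rho_sphere, rho_fluid, mu, g=9.81):
    """Terminal velocity from Stokes' law: v = 2 r^2 g (rho_s - rho_f) / (9 mu).
    Valid only in the creeping-flow regime, Re << 1."""
    return 2.0 * radius**2 * g * (rho_sphere - rho_fluid) / (9.0 * mu)

def reynolds_number(v, radius, rho_fluid, mu):
    """Particle Reynolds number based on sphere diameter."""
    return rho_fluid * v * (2.0 * radius) / mu

# Example: 1 mm steel sphere (7800 kg/m^3) in glycerol (1260 kg/m^3, 1.4 Pa s)
v_t = stokes_terminal_velocity(1e-3, 7800.0, 1260.0, 1.4)
re = reynolds_number(v_t, 1e-3, 1260.0, 1.4)
```

If the same sphere is dropped in water instead, the predicted Re comes out far above 1, which is exactly the kind of 'anomaly' the article suggests discussing: the Stokes prediction is then internally inconsistent and drag corrections are needed.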
Santosa, B.; Siswanto, N.; Fiqihesa
2018-04-01
This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition where several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models keep growing to represent real production systems accurately. Since flow shop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems. Since flow shop scheduling is a discrete optimization problem, we need to modify PSO to fit the problem. The modification is done by using a probability transition matrix mechanism, while to handle the multiobjective problem we use a Pareto-optimal variant (MPSO). The results of MPSO are better than those of PSO because the MPSO solution set has a higher probability of containing the optimal solution; besides, the MPSO solution set is closer to the optimal solution.
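The makespan objective that this record and several of the other flow shop abstracts minimize is computed with the classic permutation flow shop recurrence C(j, m) = max(C(j-1, m), C(j, m-1)) + p(j, m); a compact sketch (for the plain permutation flow shop, without the limited-wait or hybrid-stage features of the paper):

```python
def makespan(perm, proc):
    """Completion time of the last job on the last machine.
    perm: job order; proc[j][m]: processing time of job j on machine m."""
    n_machines = len(proc[0])
    finish = [0.0] * n_machines  # finish[m]: completion of previous job on m
    for j in perm:
        for m in range(n_machines):
            start = finish[m] if m == 0 else max(finish[m], finish[m - 1])
            finish[m] = start + proc[j][m]
    return finish[-1]

proc = [[3, 2], [1, 4]]          # 2 jobs x 2 machines
# order [1, 0] finishes at time 7, order [0, 1] at time 9
```

Any metaheuristic for this problem (PSO, firefly, EDA, GA) only proposes permutations; this evaluation routine is what they all score candidates with.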
International Nuclear Information System (INIS)
Karpp, R.R.
1980-10-01
This report treats analytically the problem of the symmetric impact of two compressible fluid streams. The flow is assumed to be steady, plane, inviscid, and subsonic, and the compressible fluid is of the Chaplygin (tangent gas) type. In the analysis, the governing equations are first transformed to the hodograph plane, where an exact, closed-form solution is obtained by standard techniques. The distributions of fluid properties along the plane of symmetry as well as the shapes of the boundary streamlines are exactly determined by transforming the solution back to the physical plane. The problem of a compressible fluid jet penetrating into an infinite target of similar material is also exactly solved by considering a limiting case of this solution. This new compressible flow solution reduces to the classical result of incompressible flow theory when the sound speed of the fluid is allowed to approach infinity. Several illustrations of the differences between compressible and incompressible flows of the type considered are presented.
Use of a genetic algorithm to solve two-fluid flow problems on an NCUBE multiprocessor computer
International Nuclear Information System (INIS)
Pryor, R.J.; Cline, D.D.
1992-01-01
A method of solving the two-phase fluid flow equations using a genetic algorithm on a NCUBE multiprocessor computer is presented. The topics discussed are the two-phase flow equations, the genetic representation of the unknowns, the fitness function, the genetic operators, and the implementation of the algorithm on the NCUBE computer. The efficiency of the implementation is investigated using a pipe blowdown problem. Effects of varying the genetic parameters and the number of processors are presented
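The genetic machinery the abstract lists (representation, fitness function, operators) can be sketched on a toy continuous problem. This is a generic real-coded GA for illustration, not the authors' encoding of the two-phase flow unknowns or their NCUBE parallelization:

```python
import random

def genetic_minimize(f, bounds, pop_size=60, generations=150, seed=0):
    """Toy real-coded GA: tournament selection, arithmetic crossover,
    Gaussian mutation, and elitism. Minimizes f over the box `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(v, i):
        lo, hi = bounds[i]
        return min(max(v, lo), hi)

    pop = [[rng.uniform(*bounds[i]) for i in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(generations):
        nxt = [best[:]]  # elitism: carry the best individual over
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=f)  # tournament selection
            b = min(rng.sample(pop, 3), key=f)
            r = rng.random()                    # arithmetic crossover
            child = [clip(r * ai + (1.0 - r) * bi, i)
                     for i, (ai, bi) in enumerate(zip(a, b))]
            if rng.random() < 0.3:              # Gaussian mutation
                j = rng.randrange(dim)
                child[j] = clip(child[j] + rng.gauss(0.0, 0.2), j)
            nxt.append(child)
        pop = nxt
        best = min(pop, key=f)
    return best
```

In the paper's setting, f would measure the residual of the discretized two-phase flow equations, and the population would be evaluated in parallel across the NCUBE processors.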
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
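The robustness argument can be seen directly from the definition of empirical correntropy, a mean of Gaussian kernels applied to the residuals: each sample contributes at most 1/n, so an outlying label cannot dominate the objective the way it dominates a squared loss. A minimal sketch (the kernel width sigma is an arbitrary choice here):

```python
import math

def correntropy(y_true, y_pred, sigma=1.0):
    """Empirical correntropy: the mean Gaussian kernel of the residuals.
    Maximizing it down-weights large (outlier) residuals, unlike
    minimizing a squared loss."""
    n = len(y_true)
    return sum(math.exp(-(t - p) ** 2 / (2 * sigma ** 2))
               for t, p in zip(y_true, y_pred)) / n
```

A single residual of 10 drops the correntropy of three samples only from 1.0 to about 2/3, whereas the mean squared error jumps from 0 to over 33.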
A modified teaching–learning based optimization for multi-objective optimal power flow problem
International Nuclear Information System (INIS)
Shabanpour-Haghighi, Amin; Seifi, Ali Reza; Niknam, Taher
2014-01-01
Highlights: • A new modified teaching–learning based algorithm is proposed. • A self-adaptive wavelet mutation strategy is used to enhance the performance. • To avoid reaching a large repository size, a fuzzy clustering technique is used. • An efficiently smart population selection is utilized. • Simulations show the superiority of this algorithm compared with other ones. - Abstract: In this paper, a modified teaching–learning based optimization algorithm is analyzed to solve the multi-objective optimal power flow problem, considering the total fuel cost and total emission of the units. The modified phase of the optimization algorithm utilizes a self-adapting wavelet mutation strategy. Moreover, a fuzzy clustering technique is proposed to avoid an extremely large repository size, along with a smart population selection for the next iteration. These techniques make the algorithm search a larger space to find the optimal solutions while the speed of convergence remains good. The IEEE 30-Bus and 57-Bus systems are used to illustrate the performance of the proposed algorithm, and the results are compared with those in the literature. It is verified that the proposed approach has better performance than other techniques.
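For reference, the underlying teaching–learning based optimization loop (without the paper's wavelet mutation, fuzzy clustering, or multi-objective machinery) can be sketched as follows; the test function and parameters are invented for the example:

```python
import random

def tlbo_minimize(f, bounds, pop_size=20, iters=100, seed=0):
    """Basic TLBO: a teacher phase pulling learners toward the best
    solution, and a learner phase of pairwise interaction; candidate
    moves are kept only if they improve the objective."""
    rng = random.Random(seed)
    lo, hi = bounds
    clamp = lambda x: min(max(x, lo), hi)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(iters):
        teacher = min(pop, key=f)
        mean = sum(pop) / pop_size
        for i in range(pop_size):
            # teacher phase: move toward the teacher, away from the mean
            tf = rng.choice((1, 2))  # teaching factor
            cand = clamp(pop[i] + rng.random() * (teacher - tf * mean))
            if f(cand) < f(pop[i]):
                pop[i] = cand
            # learner phase: move toward a better random peer
            # (or away from a worse one)
            j = rng.randrange(pop_size)
            if j != i:
                d = pop[j] - pop[i] if f(pop[j]) < f(pop[i]) else pop[i] - pop[j]
                cand = clamp(pop[i] + rng.random() * d)
                if f(cand) < f(pop[i]):
                    pop[i] = cand
    return min(pop, key=f)
```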
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2018-05-01
This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.
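The NEH heuristic mentioned above for seeding the initial population is simple to state: sort the jobs by decreasing total processing time, then insert each job at the makespan-minimizing position of the partial sequence. The sketch below evaluates the plain (non-blocking) permutation flow shop makespan; the blocking variant in the paper requires a modified evaluation:

```python
def neh(proc):
    """NEH constructive heuristic for the permutation flow shop
    (makespan criterion); proc[j][m] is job j's time on machine m."""
    def makespan(perm):
        finish = [0.0] * len(proc[0])
        for j in perm:
            prev = 0.0
            for m, p in enumerate(proc[j]):
                finish[m] = max(finish[m], prev) + p
                prev = finish[m]
        return finish[-1]
    # jobs ordered by decreasing total processing time
    order = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))
    seq = [order[0]]
    for j in order[1:]:
        # try inserting j at every position, keep the best partial sequence
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=makespan)
    return seq, makespan(seq)
```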
Parallel genetic algorithms with migration for the hybrid flow shop scheduling problem
Directory of Open Access Journals (Sweden)
K. Belkadi
2006-01-01
Full Text Available This paper addresses scheduling problems in hybrid flow shop-like systems with a migration parallel genetic algorithm (PGA_MIG). This parallel genetic algorithm model allows genetic diversity by applying selection and reproduction mechanisms closer to nature. The spatial structure of the population is modified by dividing it into disjoint subpopulations. From time to time, individuals are exchanged between the different subpopulations (migration). The influence of parameters and dedicated strategies is studied. These parameters are the number of independent subpopulations, the interconnection topology between subpopulations, the choice/replacement strategy for the migrant individuals, and the migration frequency. A comparison between the sequential and parallel versions of the genetic algorithm (GA) is provided. This comparison concerns the quality of the solution and the execution time of the two versions. The efficiency of the parallel model depends highly on the parameters, and especially on the migration frequency. Likewise, this parallel model gives a significant improvement in computational time if it is implemented on a parallel architecture which offers an acceptable number of processors (as many processors as subpopulations).
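The migration scheme described above can be sketched as an island model with a ring topology: each subpopulation evolves independently, and every few generations it sends its best individual to the next island, which replaces its own worst. The evolution step below is a crude mutation-only stand-in for a full GA, and all parameters are illustrative:

```python
import random

def island_ga(f, bounds, n_islands=4, pop_size=20, gens=60,
              migrate_every=10, seed=0):
    """Island-model sketch: independent subpopulations, ring migration."""
    rng = random.Random(seed)
    lo, hi = bounds
    islands = [[rng.uniform(lo, hi) for _ in range(pop_size)]
               for _ in range(n_islands)]
    for g in range(1, gens + 1):
        for pop in islands:
            for i in range(pop_size):
                # crude evolution step: mutate, keep if better
                cand = min(max(pop[i] + rng.gauss(0, 0.2), lo), hi)
                if f(cand) < f(pop[i]):
                    pop[i] = cand
        if g % migrate_every == 0:
            # ring migration: island k receives the best of island k-1
            bests = [min(pop, key=f) for pop in islands]
            for k, pop in enumerate(islands):
                worst = max(range(pop_size), key=lambda i: f(pop[i]))
                pop[worst] = bests[k - 1]
    return min((min(pop, key=f) for pop in islands), key=f)
```

On a true parallel architecture each island would run on its own processor, with only the migrants communicated, which is the source of the speedup discussed in the abstract.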
Directory of Open Access Journals (Sweden)
Xiuli Wu
2018-03-01
Full Text Available Renewable energy is an alternative to non-renewable energy for reducing the carbon footprint of manufacturing systems. Determining how to construct an energy-efficient scheduling solution when both renewable and non-renewable energy drive production is of great importance. In this paper, a multi-objective flexible flow shop scheduling problem that considers variable processing time due to renewable energy (MFFSP-VPTRE) is studied. First, the optimization model of the MFFSP-VPTRE is formulated, considering the periodicity of renewable energy and the limitations of energy storage capacity. Then, a hybrid non-dominated sorting genetic algorithm with variable local search (HNSGA-II) is proposed to solve the MFFSP-VPTRE. An operation- and machine-based encoding method is employed. A low-carbon scheduling algorithm is presented. In addition to crossover and mutation, a variable local search is used to improve the offspring's Pareto set. The offspring and the parents are combined, and those that dominate more are selected to continue evolving. Finally, two groups of experiments are carried out. The results show that the low-carbon scheduling algorithm can effectively reduce the carbon footprint under the premise of makespan optimization, and that the HNSGA-II outperforms the traditional NSGA-II and can solve the MFFSP-VPTRE effectively and efficiently.
Elsheikh, Ahmed H.
2014-02-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
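The core nested-sampling loop that HNS builds on is short: keep a set of live points, repeatedly discard the lowest-likelihood one while accumulating its evidence contribution, and replace it with a new prior draw above the discarded likelihood. The sketch below uses plain rejection sampling for the constrained step where the paper uses HMC, and it omits the final live-point correction; all names and settings are illustrative:

```python
import math, random

def _logaddexp(a, b):
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def nested_sampling(loglike, prior_sample, n_live=50, n_iter=300, seed=0):
    """Minimal nested-sampling evidence estimate.  prior_sample(rng)
    draws one point from the prior; loglike(x) is the log-likelihood."""
    rng = random.Random(seed)
    live = [prior_sample(rng) for _ in range(n_live)]
    log_z = -math.inf
    # log width of the geometrically shrinking prior-mass shell
    log_shell = math.log(1 - math.exp(-1 / n_live))
    for i in range(n_iter):
        worst = min(live, key=loglike)
        log_l = loglike(worst)
        log_z = _logaddexp(log_z, log_l + log_shell - i / n_live)
        while True:  # rejection: draw until above the likelihood threshold
            cand = prior_sample(rng)
            if loglike(cand) > log_l:
                live[live.index(worst)] = cand
                break
    return log_z
```

For a standard normal likelihood under a uniform prior on [-5, 5] the true log-evidence is log(0.1) ≈ -2.3, which the sketch recovers to within its sampling noise.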
ANISOTROPIC THERMAL CONDUCTION AND THE COOLING FLOW PROBLEM IN GALAXY CLUSTERS
International Nuclear Information System (INIS)
Parrish, Ian J.; Sharma, Prateek; Quataert, Eliot
2009-01-01
We examine the long-standing cooling flow problem in galaxy clusters with three-dimensional magnetohydrodynamics simulations of isolated clusters including radiative cooling and anisotropic thermal conduction along magnetic field lines. The central regions of the intracluster medium (ICM) can have cooling timescales of ∼200 Myr or shorter; in order to prevent a cooling catastrophe, the ICM must be heated by some mechanism such as active galactic nucleus feedback or thermal conduction from the thermal reservoir at large radii. The cores of galaxy clusters are linearly unstable to the heat-flux-driven buoyancy instability (HBI), which significantly changes the thermodynamics of the cluster core. The HBI is a convective, buoyancy-driven instability that rearranges the magnetic field to be preferentially perpendicular to the temperature gradient. For a wide range of parameters, our simulations demonstrate that in the presence of the HBI, the effective radial thermal conductivity is reduced to ≲10% of the full Spitzer conductivity. With this suppression of conductive heating, the cooling catastrophe occurs on a timescale comparable to the central cooling time of the cluster. Thermal conduction alone is thus unlikely to stabilize clusters with low central entropies and short central cooling timescales. High central entropy clusters have sufficiently long cooling times that conduction can help stave off the cooling catastrophe for cosmologically interesting timescales.
Traffic Management as a Service: The Traffic Flow Pattern Classification Problem
Directory of Open Access Journals (Sweden)
Carlos T. Calafate
2015-01-01
Full Text Available Intelligent Transportation System (ITS) technologies can be implemented to reduce both fuel consumption and the associated emission of greenhouse gases. However, such systems require intelligent and effective route planning solutions to reduce travel time and promote stable traveling speeds. To achieve this goal, these systems should account for both estimated and real-time traffic congestion states, but obtaining reliable traffic congestion estimates for all the streets/avenues in a city, for the different times of the day and for every day of the year, is a complex task. Modeling such a tremendous amount of data can be time-consuming and, additionally, centralized computation of optimal routes based on such time dependencies has very high data-processing requirements. In this paper we approach this problem through a heuristic that considerably reduces the modeling effort while maintaining the benefits of time-dependent traffic congestion modeling. In particular, we propose grouping streets by taking into account real traces describing the daily traffic pattern. The effectiveness of this heuristic is assessed for the city of Valencia, Spain, and the results obtained show that it is possible to reduce the required number of daily traffic flow patterns by a factor of 4210 while maintaining the essence of time-dependent modeling requirements.
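One simple way to realize the proposed grouping of streets by daily traffic pattern is to cluster their daily flow profiles. The k-means sketch below is an illustrative stand-in, not the heuristic actually used in the paper; profile data and parameters are invented:

```python
import random

def kmeans(profiles, k, iters=50, seed=0):
    """Group daily traffic profiles (equal-length lists of hourly flows)
    into k representative patterns by squared-distance k-means."""
    rng = random.Random(seed)
    def dist2(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c))
    centers = [list(p) for p in rng.sample(profiles, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in profiles:
            clusters[min(range(k), key=lambda c: dist2(p, centers[c]))].append(p)
        for c, members in enumerate(clusters):
            if members:  # move each center to the mean of its members
                centers[c] = [sum(vals) / len(members)
                              for vals in zip(*members)]
    labels = [min(range(k), key=lambda c: dist2(p, centers[c]))
              for p in profiles]
    return labels, centers
```

Each resulting center is one "daily traffic flow pattern", and every street in a cluster reuses that pattern instead of carrying its own model, which is the source of the reduction factor quoted in the abstract.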
2014-05-01
we however focus on the continuum regime, where we can use governing equations such as the Euler equations or the Navier–Stokes equations. The flow chemistry can...assumption. Instead the flow is considered to be a mixture of ideal gases, and the flow chemistry accounts for the production and destruction of all the species. A
Yousfi, Ammar; Mechergui, Mohammed
2016-04-01
al. (2001). In this work, a novel solution based on a theoretical approach will be adapted to incorporate both the seepage face and the unsaturated-zone flow contribution for solving ditch-drained aquifer problems. This problem will be tackled on the basis of the approximate 2D solution given by Castro-Orgaz et al. (2012). That solution yields the generalized water-table profile function, with a suitable boundary condition to be determined, and provides a modified DF theory which permits the analytical determination of the seepage face. To assess the ability of the developed equation for water-table estimation, the obtained results were compared with numerical solutions of the 2-D problem under different conditions. It is shown that the results are in fair agreement and thus the resulting model can be used for designing ditch drainage systems. With respect to drainage design, the spacings calculated with the newly derived equation are compared with those computed from DF theory. It is shown that the effect of the unsaturated-zone flow contribution is limited to sandy soils, and the calculated maximum increase in drain spacing is about 30%. Keywords: subsurface ditch drainage; unsaturated zone; seepage face; water table; ditch spacing equation
International Nuclear Information System (INIS)
Toumi, I.
1990-04-01
This thesis is devoted to the study of the Riemann problem and the construction of Godunov-type numerical schemes for one- or two-dimensional two-phase flow models. In the first part, we study the Riemann problem for the well-known drift-flux model, which has been widely used for the analysis of thermal-hydraulic transients. We then use this study to construct approximate Riemann solvers, and we describe the corresponding Godunov-type schemes for a simplified equation of state. For the computation of complex two-phase flows, a weak formulation of Roe's approximate Riemann solver, which gives a method to construct a Roe-averaged Jacobian matrix with a general equation of state, is proposed. For two-dimensional flows, the developed methods are based upon an approximate solver for a two-dimensional Riemann problem, following Harten-Lax-Van Leer principles. The numerical results for standard test problems show the good behaviour of these numerical schemes for a wide range of flow conditions. (in French)
Czech Academy of Sciences Publication Activity Database
Domesová, Simona; Beres, Michal
2017-01-01
Roč. 15, č. 2 (2017), s. 258-266 ISSN 1336-1376 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : Bayesian statistics * Cross-Entropy method * Darcy flow * Gaussian random field * inverse problem Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://advances.utc.sk/index.php/AEEE/article/view/2236
Directory of Open Access Journals (Sweden)
Hsin-Ho Liu
Full Text Available The aim of this study was to determine the subsequent risk of acute urinary retention and prostate surgery in patients receiving alpha-1 blocker treatment who had a maximum urinary flow rate of less than 15 ml/sec. We identified patients who were diagnosed with benign prostatic hyperplasia (BPH) and had a maximum uroflow rate of less than 15 ml/sec between 1 January 2002 and 31 December 2011 from Taiwan's National Health Insurance Research Database as the study group (n = 303). The control cohort included four BPH/LUTS patients without 5ARI use for each patient in the study group, randomly selected from the same dataset (n = 1,212). Each patient was monitored to identify those who subsequently underwent prostate surgery or developed acute urinary retention. Prostate surgery and acute urinary retention were detected in 5.9% of the control group and 8.3% of the study group during the 10-year follow-up. Compared with the control group, there was an increase in the risk of prostate surgery and acute urinary retention in the study group (HR = 1.83, 95% CI: 1.16 to 2.91) after adjusting for age, comorbidities, geographic region, and socioeconomic status. A maximum urine flow rate of less than 15 ml/sec is a risk factor for urinary retention and subsequent prostate surgery in BPH patients receiving alpha-1 blocker therapy. This result can provide a reference for clinicians.
Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang
2014-01-01
We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large problem sizes. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.
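An SPTA-type rule of the kind analyzed can be sketched as a list-scheduling loop: among the jobs already released, always pick the one with the shortest total processing time. This is a simplified single-sequence illustration (the release dates and aggregate processing times below are invented), not the authors' exact algorithms:

```python
def spta_sequence(release, total_proc):
    """Shortest-processing-time-among-available rule: repeatedly pick,
    among jobs already released, the one with the smallest total
    processing time; jump forward in time when no job is available."""
    t = 0.0
    remaining = set(range(len(release)))
    seq = []
    while remaining:
        avail = [j for j in remaining if release[j] <= t]
        if not avail:
            t = min(release[j] for j in remaining)  # idle until next release
            continue
        j = min(avail, key=lambda k: total_proc[k])
        seq.append(j)
        remaining.remove(j)
        t += total_proc[j]
    return seq
```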
Multilevel markov chain monte carlo method for high-contrast single-phase flow problems
Efendiev, Yalchin R.
2014-12-19
In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
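The multilevel estimator at the heart of MLMC is a telescoping sum: estimate the quantity of interest cheaply at the coarsest level with many samples, then correct it with ever fewer samples of the level-to-level differences. A minimal sketch with an artificial level-biased integrand; the sampler and sample counts are invented for the example:

```python
import random

def mlmc_estimate(sampler, n_samples, seed=0):
    """Multilevel Monte Carlo sketch.  sampler(level, u) evaluates the
    quantity of interest at a given resolution level from random input u;
    n_samples[l] is the number of samples spent on level l."""
    rng = random.Random(seed)
    est = 0.0
    for level, n in enumerate(n_samples):
        total = 0.0
        for _ in range(n):
            # use the SAME random input at both levels so the level
            # difference has small variance
            u = rng.random()
            fine = sampler(level, u)
            coarse = sampler(level - 1, u) if level > 0 else 0.0
            total += fine - coarse
        est += total / n
    return est
```

Because the level differences have small variance, only a few expensive fine-level samples are needed, which is the cost saving the abstract describes.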
Martirosyan, A. N.; Davtyan, A. V.; Dinunts, A. S.; Martirosyan, H. A.
2018-04-01
The purpose of this article is to investigate the problem of closing cracks by building up a layer of sediments on the surfaces of a crack in an infinite thermoelastic medium in the presence of a flow of fluids with impurities. The statement of the problem of closing geophysical cracks in the presence of a fluid flow is presented with regard to the thermoelastic stress and the influence of impurity deposition from the liquid on the crack surfaces due to thermal diffusion at fracture closure. The Wiener–Hopf method yields an analytical solution in the special case without friction. Numerical calculations are performed for this case and the dependence of the crack closure time on the coordinate is plotted. A similar spatial problem is also solved. These results generalize the results of previous studies of geophysical cracks and debris in rocks, where the closure of a crack due to temperature effects is studied without taking the elastic stresses into account.
National Research Council Canada - National Science Library
Allen, Mark
2000-01-01
.... Our approaches include: (1) the development of suitable micromachined synthetic jets (microjets) as actuators, which obviate the need to physically extend micromachined structures into an external flow...
Two-dimensional maximum entropy image restoration
International Nuclear Information System (INIS)
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test p log p maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures.
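The p log p quantity referred to here is the Shannon entropy of the image treated as a probability distribution over pixels; restoration then maximizes this entropy subject to the data constraints. The measure itself is one line (the function name is illustrative):

```python
import math

def image_entropy(img):
    """Shannon entropy S = -sum p log p of an image's pixel values,
    normalized to a probability distribution (zero pixels contribute 0)."""
    total = sum(img)
    return -sum((v / total) * math.log(v / total) for v in img if v > 0)
```

A uniform image maximizes S, so among all images consistent with the data, the maximum-entropy restoration is the least structured one.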
Numerical simulation for a two-phase porous medium flow problem with rate independent hysteresis
Brokate, M.; Botkin, N.D.; Pykhteev, O.A.
2012-01-01
The paper is devoted to the numerical simulation of a multiphase flow in a porous medium with a hysteretic relation between the capillary pressures and the saturations of the phases. The flow model we use is based on Darcy's law. The hysteretic
DEFF Research Database (Denmark)
Si, Haiqing; Shen, Wen Zhong; Zhu, Wei Jun
2013-01-01
Acoustic propagation in the presence of a non-uniform mean flow is studied numerically by using two different acoustic propagating models, which solve linearized Euler equations (LEE) and acoustic perturbation equations (APE). As noise induced by turbulent flows often propagates from near field t...
Weisenberg, J.; Pico, T.; Birch, L.; Mitrovica, J. X.
2017-12-01
The history of the Laurentide Ice Sheet since the Last Glacial Maximum (∼26 ka; LGM) is constrained by geological evidence of ice margin retreat in addition to relative sea-level (RSL) records in both the near and far field. Nonetheless, few observations exist constraining the ice sheet's extent across the glacial build-up phase preceding the LGM. Recent work correcting RSL records along the U.S. mid-Atlantic dated to mid-MIS 3 (50-35 ka) for glacial-isostatic adjustment (GIA) infers that the Laurentide Ice Sheet grew by more than three-fold in the 15 ky leading into the LGM. Here we test the plausibility of a late and extremely rapid glaciation by driving a high-resolution ice sheet model, based on a nonlinear diffusion equation for the ice thickness. We initialize this model at 44 ka with the mid-MIS 3 ice sheet configuration proposed by Pico et al. (2017), GIA-corrected basal topography, and mass balance representative of mid-MIS 3 conditions. These simulations predict rapid growth of the eastern Laurentide Ice Sheet, with rates consistent with achieving LGM ice volumes within 15 ky. We use these simulations to refine the initial ice configuration and present an improved and higher resolution model for North American ice cover during mid-MIS 3. In addition we show that assumptions of ice loads during the glacial phase, and the associated reconstructions of GIA-corrected basal topography, produce a bias that can underpredict ice growth rates in the late stages of the glaciation, which has important consequences for our understanding of the speed limit for ice growth on glacial timescales.
International Nuclear Information System (INIS)
Rahmani, R.
1983-01-01
The nucleate boiling heat-transfer coefficient and the maximum heat flux were studied experimentally as functions of velocity, quality, and heater diameter for single-phase flow and two-phase flow of Freon-113 (trichlorotrifluoroethane). Results show: (1) peak heat flux: over 300 measured peak-heat-flux data points from two 0.875-in. and four 0.625-in.-diameter heaters indicated that: (a) for pool boiling, single-phase, and two-phase forced-convection boiling, the only parameter (among hysteresis, rate of power increase, aging, and the presence and proximity of unheated rods) that has a statistically significant effect on the peak heat flux is the velocity; (b) in the velocity range (0 …) … the bottom (0° position, or the point of impact of the incident fluid) and the top (180° position) of the test element, respectively
International Nuclear Information System (INIS)
Chitanvis, S.M.
1994-01-01
We have designed a flow tube reactor for supercritical water oxidation of wastes that confines the oxidation reaction to the vicinity of the axis of the tube. This prevents high temperatures and reactants, as well as reaction products, from coming into intimate contact with the reactor walls, which implies less corrosion of the walls of the reactor. We display numerical simulations for a vertical reactor with conservative design parameters that illustrate our concept. We performed our calculations for the destruction of sodium nitrate by ammonium hydroxide in the presence of supercritical water, where the production of sodium hydroxide causes corrosion. We have compared these results with those for a horizontal set-up, where the sodium hydroxide created during the reaction ends up on the floor of the tube, implying a higher probability of corrosion
Bagchi, Prosenjit
2016-11-01
In this talk, two problems in multiphase biological flows will be discussed. The first is the direct numerical simulation of whole blood and drug particulates in microvascular networks. Blood in the microcirculation behaves as a dense suspension of heterogeneous cells. The erythrocytes are extremely deformable, while inactivated platelets and leukocytes are nearly rigid. Significant progress has been made in recent years in modeling blood as a dense cellular suspension. However, many of these studies considered blood flow in simple geometries, e.g., straight tubes of uniform cross-section. In contrast, the architecture of a microvascular network is very complex, with bifurcating, merging, and winding vessels, posing a further challenge to numerical modeling. We have developed an immersed-boundary-based method that can consider blood cell flow in physiologically realistic and complex microvascular networks. In addition to addressing many physiological issues related to network hemodynamics, this tool can be used to optimize the transport properties of drug particulates for effective organ-specific delivery. Our second problem is pseudopod-driven motility, as often observed in metastatic cancer cells and other amoeboid cells. We have developed a multiscale hydrodynamic model to simulate such motility. We study the effect of cell stiffness on motility, as stiffness has been considered a biomarker for metastatic potential. Funded by the National Science Foundation.
On a boundary layer problem related to the gas flow in shales
Barenblatt, G. I.; Monteiro, P. J. M.; Rycroft, C. H.
2013-01-01
The development of gas deposits in shales has become a significant energy resource. Despite the already active exploitation of such deposits, a mathematical model for gas flow in shales does not exist. Such a model is crucial for optimizing
Directory of Open Access Journals (Sweden)
Hozejowski Leszek
2012-04-01
Full Text Available The paper is devoted to a computational problem of predicting a local heat transfer coefficient from experimental temperature data. The experimental part refers to boiling flow of a refrigerant in a minichannel. Heat is dissipated from the heating alloy to the flowing liquid due to forced convection. The mathematical model of the problem consists of the governing Poisson equation and the proper boundary conditions. For accurate results it is necessary to smooth the measurements, which was done using Trefftz functions: the measurements were approximated with a linear combination of Trefftz functions. Due to the computational procedure, in which the measurement errors are known, it was possible to smooth the data and also to reduce the residuals of approximation on the boundaries.
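Approximating measurements by a linear combination of basis functions, as done here with Trefftz functions, reduces to a small least-squares problem. The sketch below uses a generic basis and solves the normal equations directly; it illustrates only the fitting step, not the Trefftz functions themselves, and all names are illustrative:

```python
def fit_basis(xs, ys, basis):
    """Least-squares coefficients c so that sum_k c[k]*basis[k](x)
    approximates the noisy measurements ys at points xs."""
    k = len(basis)
    # normal equations A c = b
    A = [[sum(basis[i](x) * basis[j](x) for x in xs) for j in range(k)]
         for i in range(k)]
    b = [sum(basis[i](x) * y for x, y in zip(xs, ys)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            m = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    # back substitution
    coeff = [0.0] * k
    for i in reversed(range(k)):
        coeff[i] = (b[i] - sum(A[i][j] * coeff[j]
                               for j in range(i + 1, k))) / A[i][i]
    return coeff
```

In the paper's setting the basis members would be Trefftz functions, which satisfy the governing equation exactly, so the fitted combination both smooths the data and respects the physics.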
International Nuclear Information System (INIS)
Shin, Y.W.; Wiedermann, A.H.
1984-02-01
A method was published, based on the integral method of characteristics, by which the junction and boundary conditions needed in the computation of a flow in a piping network can be accurately formulated. The method for formulating the junction and boundary conditions, together with the two-step Lax-Wendroff scheme, is used in a computer program; the program, in turn, is used here to calculate sample problems related to the blowdown transient of a two-phase flow in the piping network downstream of a PWR pressurizer. Independent, nearly exact analytical solutions are also obtained for the sample problems. Comparison of the results obtained by the hybrid numerical technique with the analytical solutions showed generally good agreement. The good numerical accuracy shown by the results of our scheme suggests that the hybrid numerical technique is suitable for both benchmark and design calculations of PWR pressurizer blowdown transients
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
Energy Technology Data Exchange (ETDEWEB)
Agaltsov, A. D., E-mail: agalets@gmail.com [Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow (Russian Federation); Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr [CNRS (UMR 7641), Centre de Mathématiques Appliquées, Ecole Polytechnique, 91128 Palaiseau (France); IEPT RAS, 117997 Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation)
2014-10-15
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
International Nuclear Information System (INIS)
Agaltsov, A. D.; Novikov, R. G.
2014-01-01
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given
DEFF Research Database (Denmark)
Karsten, Christian Vad; Pisinger, David; Røpke, Stefan
2015-01-01
…multi-commodity network flow problem with transit time constraints, which put limits on the duration of the transit of the commodities through the network. It is shown that, for the particular application, including the transit time constraints does not increase the solution time, and that including the transit time… is essential to offer customers a competitive product. © 2015 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Laxmi A. Bewoor
2017-10-01
Full Text Available The no-wait flow shop is a flow shop in which jobs are scheduled continuously through all machines without waiting between consecutive machines. Scheduling a no-wait flow shop requires finding an appropriate sequence of jobs, which in turn reduces total processing time. Classical brute-force exploration of schedules to improve resource utilization may become trapped in local optima, and the problem is a typical NP-hard combinatorial optimization problem that calls for near-optimal solutions from heuristic and metaheuristic techniques. This paper proposes an effective hybrid Particle Swarm Optimization (PSO) metaheuristic algorithm for solving no-wait flow shop scheduling problems with the objective of minimizing the total flow time of jobs. The Proposed Hybrid Particle Swarm Optimization (PHPSO) algorithm uses the random-key representation rule to convert the continuous position values of particles into a discrete job permutation. The proposed algorithm initializes the population efficiently with the Nawaz-Enscore-Ham (NEH) heuristic and uses an evolutionary search guided by the mechanism of PSO, as well as simulated annealing based on a local neighborhood search, to avoid getting stuck in local optima and to balance global exploration and local exploitation. Extensive computational experiments are carried out on Taillard's benchmark suite. Computational results and comparisons with existing metaheuristics show that the PHPSO algorithm outperforms existing methods in terms of search quality and robustness for the problem considered. The improvement in solution quality is confirmed by statistical significance tests.
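The random-key representation rule mentioned in the abstract above can be sketched in a few lines. The job count and position values here are illustrative, and ranking jobs by ascending position value is an assumption about the exact decoding convention:

```python
import numpy as np

def random_key_decode(position):
    """Map a particle's continuous position vector to a job permutation.

    Jobs are ordered by ascending position value (the smallest-position
    rule), so any real-valued vector decodes to a valid permutation.
    """
    return np.argsort(position)

# Example: a 5-job particle position decoded to a permutation.
position = np.array([0.7, 0.1, 0.9, 0.3, 0.5])
perm = random_key_decode(position)
print(perm.tolist())  # -> [1, 3, 4, 0, 2]
```

Decoding via `argsort` is what lets a continuous PSO move particles in real space while always evaluating a feasible discrete job sequence.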
Directory of Open Access Journals (Sweden)
Xanming Wang
1996-01-01
Full Text Available A technique is developed for the evaluation of eigenvalues in the solution of the differential equation d²y/dr² + (1/r)dy/dr + λ²(β − r²)y = 0, which occurs in the problem of heat convection in laminar flow through a circular tube with slip flow (β > 1). A series solution requires the expansion of coefficients involving extremely large numbers. No work has been reported for the case β > 1 because of the computational complexity of evaluating the eigenvalues. In this paper, a matrix was constructed and a computational algorithm was obtained to calculate the first four eigenvalues. Also, an asymptotic formula was developed to generate the full spectrum of eigenvalues. Computational results for various values of β were obtained.
Salama, Amgad
2014-09-01
In this work we apply the experimenting pressure field approach to the numerical solution of the single-phase flow problem in anisotropic porous media using the multipoint flux approximation. We apply this method to the problem of flow in saturated anisotropic porous media. In anisotropic media the component flux representation generally requires multiple pressure values from neighboring cells (e.g., six pressure values of the neighboring cells are required in two-dimensional rectangular meshes). This results in the need for a nine-point stencil for the discretized pressure equation (a 27-point stencil in three-dimensional rectangular meshes). The coefficients associated with the discretized pressure equation are complex and require long expressions, which makes their implementation prone to errors. In the experimenting pressure field technique, the matrix of coefficients is generated automatically within the solver. A set of predefined pressure fields is applied to the domain, from which the corresponding velocity fields are obtained. Such velocity fields do not, in general, satisfy the mass conservation equations entailed by the source/sink terms and boundary conditions, from which a residual is calculated. In this method the experimenting pressure fields are designed such that the residuals reduce to the coefficients of the pressure equation matrix. © 2014 Elsevier B.V. All rights reserved.
Salama, Amgad; Sun, Shuyu; Wheeler, Mary Fanett
2014-01-01
In this work we apply the experimenting pressure field approach to the numerical solution of the single-phase flow problem in anisotropic porous media using the multipoint flux approximation. We apply this method to the problem of flow in saturated anisotropic porous media. In anisotropic media the component flux representation generally requires multiple pressure values from neighboring cells (e.g., six pressure values of the neighboring cells are required in two-dimensional rectangular meshes). This results in the need for a nine-point stencil for the discretized pressure equation (a 27-point stencil in three-dimensional rectangular meshes). The coefficients associated with the discretized pressure equation are complex and require long expressions, which makes their implementation prone to errors. In the experimenting pressure field technique, the matrix of coefficients is generated automatically within the solver. A set of predefined pressure fields is applied to the domain, from which the corresponding velocity fields are obtained. Such velocity fields do not, in general, satisfy the mass conservation equations entailed by the source/sink terms and boundary conditions, from which a residual is calculated. In this method the experimenting pressure fields are designed such that the residuals reduce to the coefficients of the pressure equation matrix. © 2014 Elsevier B.V. All rights reserved.
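A minimal sketch of the experimenting-field idea for a linear scheme follows. The `toy_residual` routine is a hypothetical stand-in for the black-box mass-balance evaluation; probing it with unit pressure fields recovers the matrix columns automatically, which is the essence of the technique described above:

```python
import numpy as np

def assemble_by_experimenting(residual, n):
    """Build the pressure-equation system column by column.

    `residual(p)` is assumed to return the discrete mass-balance residual
    for a trial pressure field p. For a linear scheme,
    residual(e_j) - residual(0) is exactly column j of the matrix A.
    """
    r0 = residual(np.zeros(n))      # source/boundary contribution alone
    A = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0                  # "experimenting" unit pressure field
        A[:, j] = residual(e) - r0
    return A, -r0                   # system A p = b with b = -r0

# Toy check: a 1D three-cell diffusion-like flux routine (illustrative).
def toy_residual(p):
    A_true = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
    b_true = np.array([1., 0., 1.])
    return A_true @ p - b_true

A, b = assemble_by_experimenting(toy_residual, 3)
```

The solver never needs the lengthy closed-form stencil coefficients; they emerge from repeated residual evaluations.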
Improvement of DC Optimal Power Flow Problem Based on Nodal Approximation of Transmission Losses
Directory of Open Access Journals (Sweden)
M. R. Baghayipour
2012-03-01
…Its formulation is simple and easy to understand. Moreover, it can readily be expressed in Lagrangian form, which makes it possible to include it as a set of constraints in the body of any bi-level optimization problem whose inner level requires satisfaction of the OPF problem.
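For context, the DC approximation underlying DC OPF reduces power flow to a linear system in the bus voltage angles. The 3-bus network, susceptances, and injections below are invented for illustration:

```python
import numpy as np

# Hypothetical 3-bus lossless DC network: (from, to, susceptance) in p.u.
# DC linearization: injection P_i = sum over lines of B_ij * (theta_i - theta_j).
lines = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 5.0)]
n = 3

B = np.zeros((n, n))
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

P = np.array([0.9, -0.5, -0.4])   # injections sum to zero (lossless model)

# Fix bus 0 as the slack (theta = 0) and solve the reduced system.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

flows = {(i, j): b * (theta[i] - theta[j]) for i, j, b in lines}
```

Losses are absent in this linear model, which is exactly the gap the nodal loss approximation in the paper above aims to close.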
Numerical simulation for a two-phase porous medium flow problem with rate independent hysteresis
International Nuclear Information System (INIS)
Brokate, M.; Botkin, N.D.; Pykhteev, O.A.
2012-01-01
The paper is devoted to the numerical simulation of a multiphase flow in a porous medium with a hysteretic relation between the capillary pressures and the saturations of the phases. The flow model we use is based on Darcy's law. The hysteretic relation between the capillary pressures and the saturations is described by a play-type hysteresis operator. We propose a numerical algorithm for treating the arising system of equations, discuss finite element schemes and present simulation results for the case of two phases.
Numerical simulation for a two-phase porous medium flow problem with rate independent hysteresis
Brokate, M.
2012-05-01
The paper is devoted to the numerical simulation of a multiphase flow in a porous medium with a hysteretic relation between the capillary pressures and the saturations of the phases. The flow model we use is based on Darcy's law. The hysteretic relation between the capillary pressures and the saturations is described by a play-type hysteresis operator. We propose a numerical algorithm for treating the arising system of equations, discuss finite element schemes and present simulation results for the case of two phases. © 2011 Elsevier B.V. All rights reserved.
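The scalar play operator at the core of play-type hysteresis has a simple discrete update; the sketch below uses an illustrative half-width and input ramp, not the two-phase model of the papers above:

```python
def play_operator(u, r, w0=0.0):
    """Discrete scalar play (backlash) operator with half-width r.

    The output tracks the input only once the input has moved more than r
    away from the current output; otherwise the output stays put. This is
    the rate-independent building block of play-type hysteresis relations.
    """
    w, out = w0, []
    for uk in u:
        w = max(uk - r, min(uk + r, w))
        out.append(w)
    return out

# Ramp up then down: the output lags the input by r on each branch.
u = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
w = play_operator(u, r=1.0)
print(w)  # -> [0.0, 0.0, 1.0, 2.0, 2.0, 2.0, 1.0]
```

Note the plateau on the down-ramp: the output does not move until the input has retraced a full band of width r, which is what produces the hysteresis loop between capillary pressure and saturation.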
Mixed finite element simulations in two-dimensional groundwater flow problems
International Nuclear Information System (INIS)
Kimura, Hideo
1989-01-01
A computer code for groundwater flow in two-dimensional porous media based on the mixed finite element method was developed for accurate approximation of Darcy velocities in the safety evaluation of radioactive waste disposal. The mixed finite element procedure solves for both the Darcy velocities and the pressure heads simultaneously in the Darcy equation and the continuity equation. Numerical results for a single well pumping at a constant rate in a uniform flow field showed that the mixed finite element method reduces the average error in the Darcy velocities by nearly 50% compared with the standard finite element method. (author)
Directory of Open Access Journals (Sweden)
Seyyed Mohammad Hassan Hosseini
2016-05-01
Full Text Available The scheduling problem for a hybrid flow shop (HFSP) followed by an assembly stage, considering aging effects and additional preventive maintenance activities, is studied in this paper. In this production system, a number of products of different kinds are produced. Each product is assembled from a set of several parts. The first stage is a hybrid flow shop that produces the parts; all machines in this stage can process all kinds of parts, but each machine can process only one part at a time. The second stage is a single assembly machine or a single assembly team of workers. The aim is to schedule the parts on the machines, determine the assembly sequence, and decide when the preventive maintenance activities are performed, in order to minimize the completion time of all products (makespan). A mathematical model is presented and validated by solving a small-scale example. Since this problem has been proved strongly NP-hard, four heuristic algorithms based on Johnson's algorithm are proposed to solve medium- and large-scale instances. Numerical experiments are used to run the mathematical model and evaluate the performance of the proposed algorithms.
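Since the proposed heuristics build on Johnson's algorithm, a minimal sketch of Johnson's rule for the classical two-machine flow shop may be useful; the job data below are illustrative:

```python
def johnson_two_machine(jobs):
    """Johnson's rule for the two-machine flow shop (minimizes makespan).

    jobs: list of (p1, p2) processing times. Jobs with p1 <= p2 go first,
    in ascending p1; the remaining jobs go last, in descending p2.
    """
    front = sorted((j for j, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j, (a, b) in enumerate(jobs) if a > b),
                  key=lambda j: jobs[j][1], reverse=True)
    return front + back

def makespan(jobs, seq):
    """Completion time of the last job on machine 2 for a given sequence."""
    t1 = t2 = 0
    for j in seq:
        t1 += jobs[j][0]
        t2 = max(t2, t1) + jobs[j][1]
    return t2

jobs = [(3, 2), (1, 4), (5, 6), (2, 2)]   # illustrative (p1, p2) pairs
seq = johnson_two_machine(jobs)
print(seq, makespan(jobs, seq))  # -> [1, 3, 2, 0] 16
```

Johnson's rule is exact only for two machines; for the hybrid shop with assembly above it serves as the seed of heuristics.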
Canepa, Edward S.; Claudel, Christian G.
2012-01-01
This article presents a new mixed integer programming formulation of the traffic density estimation problem on highways modeled by the Lighthill-Whitham-Richards equation. We first present an equivalent formulation of the problem using a Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation reduce to linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time, given incomplete and inaccurate traffic data, as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during the Mobile Century experiment. © 2012 IEEE.
Canepa, Edward S.
2012-09-01
This article presents a new mixed integer programming formulation of the traffic density estimation problem on highways modeled by the Lighthill-Whitham-Richards equation. We first present an equivalent formulation of the problem using a Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation reduce to linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time, given incomplete and inaccurate traffic data, as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during the Mobile Century experiment. © 2012 IEEE.
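For background, the Lighthill-Whitham-Richards conservation law referenced above is commonly discretized with a Godunov scheme. The sketch below assumes the Greenshields flux (a common modeling choice, not necessarily the one used in the paper) and illustrative densities:

```python
import numpy as np

def godunov_lwr_step(rho, dt, dx, vmax=1.0, rho_max=1.0):
    """One Godunov step for the LWR law rho_t + q(rho)_x = 0 with the
    (assumed) Greenshields flux q = vmax * rho * (1 - rho/rho_max).
    Boundary cells are held fixed for simplicity."""
    def flux(r):
        return vmax * r * (1.0 - r / rho_max)

    rho_c = 0.5 * rho_max                 # critical density (flux maximum)

    def num_flux(rl, rr):
        # Godunov interface flux for a concave flux function.
        if rl <= rr:
            return min(flux(rl), flux(rr))
        return flux(np.clip(rho_c, rr, rl))

    F = [num_flux(rho[i], rho[i + 1]) for i in range(len(rho) - 1)]
    new = rho.copy()
    new[1:-1] -= dt / dx * (np.array(F[1:]) - np.array(F[:-1]))
    return new

# Low density running into a jam: the queue grows upstream of the shock.
rho = np.array([0.2] * 5 + [0.9] * 5)
rho = godunov_lwr_step(rho, dt=0.4, dx=1.0)   # dt/dx respects the CFL limit
```

The piecewise structure of the Godunov solution is what the semi-analytic Hamilton-Jacobi formula in the article exploits to obtain linear constraints.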
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
The Stochastic Galerkin Method for Darcy Flow Problem with Log-Normal Random
Czech Academy of Sciences Publication Activity Database
Beres, Michal; Domesová, Simona
2017-01-01
Roč. 15, č. 2 (2017), s. 267-279 ISSN 1336-1376 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : Darcy flow * Gaussian random field * Karhunen-Loeve decomposition * polynomial chaos * Stochastic Galerkin method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://advances.utc.sk/index.php/AEEE/article/view/2280
Numerical methods for limit problems in two-phase flow models
International Nuclear Information System (INIS)
Cordier, F.
2011-01-01
Numerical difficulties are encountered during the simulation of two-phase flows. Two issues are studied in this thesis: the simulation of phase transitions on the one hand, and the simulation of both compressible and incompressible flows on the other hand. An asymptotic study has shown that the loss of hyperbolicity of the bi-fluid model was responsible for the difficulties encountered by the Roe scheme during the simulation of phase transitions. Robust and accurate polynomial schemes have thus been developed. To tackle the occasional lack of positivity of the solution, a numerical treatment based on adaptive diffusion was proposed, which allowed accurate simulation of the test cases of a boiling channel with creation of vapor and a tee-junction with separation of the phases. In a second part, an all-speed scheme for compressible and incompressible flows has been proposed. This pressure-based, semi-implicit, asymptotic-preserving scheme is conservative, solves an elliptic equation for the pressure, and has been designed for general equations of state. The scheme was first developed for the full Euler equations and then extended to the Navier-Stokes equations. The good behaviour of the scheme in both compressible and incompressible regimes has been investigated. An extension of the scheme to the two-phase mixture model was implemented and demonstrated the ability of the scheme to simulate two-phase flows with phase change and a water-steam equation of state. (author) [fr
Streng, Martin; Streng, M.; ten Cate, Eric; ten Cate, Eric (H.H.); Geurts, Bernardus J.; Kuerten, Johannes G.M.
1998-01-01
We consider several aspects of efficient numerical simulation of viscous compressible flow on both homogeneous and heterogeneous workstation-clusters. We consider dedicated systems, as well as clusters operating in a multi-user environment. For dedicated homogeneous clusters, we show that with
Simultaneous PIV and pulsed shadow technique in slug flow: a solution for optical problems
Energy Technology Data Exchange (ETDEWEB)
Nogueira, S. [Karman Institute for Fluid Dynamics, Chaussee de Waterloo 72, B-1640, Rhode Saint Genese (Belgium); Centro de Estudos de Fenomenos de Transporte, Departamento de Eng. Quimica, Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465, Porto (Portugal); Sousa, R.G.; Pinto, A.M.F.R.; Campos, J.B.L.M. [Centro de Estudos de Fenomenos de Transporte, Departamento de Eng. Quimica, Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465, Porto (Portugal); Riethmuller, M.L. [Karman Institute for Fluid Dynamics, Chaussee de Waterloo 72, B-1640, Rhode Saint Genese (Belgium)
2003-12-01
A recent technique of simultaneous particle image velocimetry (PIV) and pulsed shadow technique (PST) measurements, using only one black and white CCD camera, is successfully applied to the study of slug flow. The experimental facility and the operating principle are described. The technique is applied to study the liquid flow pattern around individual Taylor bubbles rising in an aqueous solution of glycerol with a dynamic viscosity of 113 × 10⁻³ Pa·s. With this technique the optical perturbations found in PIV measurements at the bubble interface are completely solved in the nose and in annular liquid film regions as well as in the rear of the bubble for cases in which the bottom is flat. However, for Taylor bubbles with concave oblate bottoms, some optical distortions appear and are discussed. The measurements achieved a spatial resolution of 0.0022 tube diameters. The results reported show high precision and are in agreement with theoretical and experimental published data. (orig.)
Bana, Péter; Örkényi, Róbert; Lövei, Klára; Lakó, Ágnes; Túrós, György István; Éles, János; Faigl, Ferenc; Greiner, István
2017-12-01
Recent advances in the field of continuous flow chemistry allow the multistep preparation of complex molecules such as APIs (Active Pharmaceutical Ingredients) in a telescoped manner. Numerous examples of laboratory-scale applications are described, which point towards novel manufacturing processes for pharmaceutical compounds, in accordance with recent regulatory, economic and quality guidance. The chemical and technical knowledge gained during these studies is considerable; nevertheless, connecting several individual chemical transformations with the attached analytics and purification holds hidden traps. In this review, we summarize innovative solutions for these challenges, in order to benefit chemists aiming to exploit flow chemistry systems for the synthesis of biologically active molecules. Copyright © 2016 Elsevier Ltd. All rights reserved.
Modeling Thermally Driven Flow Problems with a Grid-Free Vortex Filament Scheme: Part 1
2018-02-01
[Report front-matter acronym list: FMM = Fast Multipole Method, GPUs = graphic processing units, LES = Large Eddy Simulation, M-O = Monin-Obukhov, MPI = Message Passing Interface, Re = Reynolds] Grid-free representation of turbulent flow via vortex filaments offers a means for large eddy simulations that faithfully and efficiently… Keywords: particle, Lagrangian, turbulence, grid-free, large eddy simulation, natural convection, thermal bubble
Problems of two-phase flows in water cooled and moderated reactors
International Nuclear Information System (INIS)
Syu, Yu.
1984-01-01
Heat exchange in two-phase coolant flows during loss-of-coolant accidents in PWR and BWR reactors has been investigated. Three main stages of the accident history are considered: blowdown, reflooding using the emergency core cooling system, and rewetting. Factors determining the rate of coolant leakage and the rate of temperature increase in the fuel cladding during blowdown, vapour processes during reflooding, and liquid priming by vapour during rewetting are discussed.
Lattice Boltzmann Model of 3D Multiphase Flow in Artery Bifurcation Aneurysm Problem
Directory of Open Access Journals (Sweden)
Aizat Abas
2016-01-01
Full Text Available This paper simulates and predicts the laminar flow inside a 3D aneurysm geometry, since the hemodynamic situation in the blood vessels is difficult to determine and visualize using standard imaging techniques, for example, magnetic resonance imaging (MRI). Three different types of Lattice Boltzmann (LB) models are computed, namely, single relaxation time (SRT), multiple relaxation time (MRT), and regularized BGK models. The results obtained using these different versions of the LB-based code are then validated against ANSYS FLUENT, a commercially available finite-volume (FV) based CFD solver. The simulated flow profiles, which include velocity, pressure, and wall shear stress (WSS), are then compared between the two solvers. The predicted outcomes show that all the LB models are comparable and in good agreement with the FVM solver for complex blood flow simulation. The findings also show minor differences in their WSS profiles. The performance of the parallel implementation of each solver is also included and discussed in this paper. In terms of parallelization, the LBM-based code performed better in terms of the computation time required.
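A single-relaxation-time (BGK) lattice Boltzmann update of the kind compared above can be sketched compactly. This D2Q9 toy on a fully periodic box is purely illustrative (lattice size, relaxation time, and initial bump are invented); it is not a hemodynamics solver:

```python
import numpy as np

# D2Q9 lattice: velocity set and weights, plus a BGK relaxation time.
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                                   # relaxation time (sets viscosity)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distributions."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

nx = ny = 16
rho = np.ones((nx, ny)); rho[8, 8] = 1.1    # small density bump
f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

for _ in range(20):
    rho = f.sum(axis=0)                      # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau          # BGK collision
    for k in range(9):                                  # streaming
        f[k] = np.roll(np.roll(f[k], c[k, 0], axis=0), c[k, 1], axis=1)
```

Both collision and streaming conserve mass exactly, which is a useful sanity check before adding the curved walls and bounce-back boundaries an aneurysm geometry requires.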
Energy Technology Data Exchange (ETDEWEB)
Shadid, J.N.; Moffat, H.K.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Salinger, A.G.
1996-05-01
The theoretical background for the finite element computer program, MPSalsa, is presented in detail. MPSalsa is designed to solve laminar, low Mach number, two- or three-dimensional incompressible and variable-density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow, heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve multiple coupled Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.
Application of x-ray microtomography to environmental fluid flow problems
International Nuclear Information System (INIS)
Wildenschild, D.; Culligan, K.A.; Christensen, B.S.B.
2005-01-01
Many environmental processes are controlled by the micro-scale interaction of water and air with the solid phase (soils, sediments, rock) in pore spaces within the subsurface. The distribution in time and space of fluids in pores ultimately controls subsurface flow and contaminant transport relevant to groundwater resource management, contaminant remediation, and agriculture. Many of these physical processes operative at the pore scale cannot be directly investigated using conventional hydrologic techniques; however, recent developments in synchrotron-based micro-imaging have made it possible to observe and quantify pore-scale processes non-invasively. Micron-scale resolution makes it possible to track fluid flow within individual pores and therefore facilitates previously unattainable measurements. We report on experiments performed at the GSECARS (Advanced Photon Source) microtomography facility, where we have measured properties such as porosity, fluid saturation and distribution within the pore space, as well as interfacial characteristics of the fluids involved (air, water, contaminant). Different image processing techniques were applied following mathematical reconstruction to produce accurate measurements of the physical flow properties. These new micron-scale measurements make it possible to test existing and new theory, as well as emerging numerical modeling schemes aimed at the pore scale.
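Once tomographic images are segmented, quantities such as porosity and saturation reduce to voxel counting. The phase labels and synthetic volume below are assumptions for illustration, not the actual GSECARS processing pipeline:

```python
import numpy as np

# Toy segmented microtomography volume. Assumed labels: 0 = solid,
# 1 = water, 2 = air. A real volume would come from reconstruction
# followed by image segmentation.
rng = np.random.default_rng(1)
vol = rng.choice([0, 1, 2], size=(32, 32, 32), p=[0.6, 0.25, 0.15])

pore = vol > 0
porosity = pore.mean()                            # pore fraction of the volume
water_saturation = (vol == 1).sum() / pore.sum()  # water fraction of pore space
```

Interfacial area can be estimated similarly by counting voxel faces where labels change, though accurate curvature measures need the marching-cubes style surface extraction used in the pore-scale literature.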
Mielke, Alexander
1991-01-01
The theory of center manifold reduction is studied in this monograph in the context of (infinite-dimensional) Hamiltonian and Lagrangian systems. The aim is to establish a "natural reduction method" for Lagrangian systems to their center manifolds. Nonautonomous problems are considered, as well as systems invariant under the action of a Lie group (including the case of relative equilibria). The theory is applied to elliptic variational problems on cylindrical domains. As a result, all bounded solutions bifurcating from a trivial state can be described by a reduced finite-dimensional variational problem of Lagrangian type. This provides a rigorous justification of rod theory from fully nonlinear three-dimensional elasticity. The book will be of interest to researchers working in classical mechanics, dynamical systems, elliptic variational problems, and continuum mechanics. It begins with the elements of Hamiltonian theory and center manifold reduction in order to make the methods accessible to non-specialists,...
Directory of Open Access Journals (Sweden)
Suprayogi Suprayogi
2016-12-01
Full Text Available This paper considers a location problem in a supply chain network. The problem addressed in this paper is motivated by an initiative to develop an efficient supply chain network for supporting agricultural activities. The supply chain network consists of regions, warehouses, distribution centers, plants, and markets. The products include a set of inbound products and a set of outbound products. In this paper, the inbound and outbound products are defined from the region's point of view. An inbound product is a product demanded by regions and produced by plants, which flows through the following sequence of entities: plants, distribution centers, warehouses, and regions. An outbound product is a product demanded by markets and produced by regions, which flows through the following sequence of entities: regions, warehouses, and markets. The problem deals with determining the locations of the warehouses and distribution centers to be opened, and the shipment quantities on all links of the network, so as to minimize the total cost. The problem can be considered a strategic supply chain network problem. A solution approach based on a genetic algorithm (GA) is proposed. The proposed GA is examined using hypothetical instances and its results are compared to the solution obtained by solving the mixed integer linear programming (MILP) model. The comparison shows that there is a small gap (0.23% on average) between the proposed GA and the MILP model in terms of total cost. The proposed GA consistently provides solutions with least total cost. In terms of total cost, based on the experiment, the coefficients of variation are demonstrated to be close to 0.
On the solution of fluid flow and heat transfer problem in a 2D channel with backward-facing step
Directory of Open Access Journals (Sweden)
Alexander A. Fomin
2017-06-01
Full Text Available The stable stationary solutions of the test problem of hydrodynamics and heat transfer in a plane channel with a backward-facing step have been considered in this work for extremely high Reynolds numbers and expansion ratios of the stream $ER$. The problem has been solved by numerical integration of the 2D Navier–Stokes equations in the ‘velocity-pressure’ formulation and the heat equation in the range of Reynolds number $500 \leqslant \mathrm{Re} \leqslant 3000$ and expansion ratio $1.43 \leqslant ER \leqslant 10$ for Prandtl number $\mathrm{Pr} = 0.71$. Validity of the results has been confirmed by comparing them with literature data. Detailed flow patterns, fields of stream overheating, and profiles of the horizontal component of velocity and relative overheating of the flow in the cross section of the channel have been presented. The complex behavior of the coefficients of friction, hydrodynamic resistance and heat transfer (Nusselt number) along the channel depending on the problem parameters has been analyzed.
Method of straight lines for a Bingham problem as a model for the flow of waxy crude oils
Directory of Open Access Journals (Sweden)
German Ariel Torres
2005-11-01
Full Text Available In this work, we develop a method of straight lines for solving a Bingham problem that models the flow of waxy crude oils. The model describes the flow of mineral oils with a high content of paraffin at temperatures below the cloud point (i.e., the crystallization temperature of paraffin) and, more specifically, below the pour point, at which the crystals aggregate and the oil takes on a gel-like structure. From the rheological point of view such a system can be modelled as a Bingham fluid whose parameters evolve according to the volume fractions of crystallized paraffin and the aggregation degree of the crystals. We prove that the method is well defined for all times, a monotonicity property, qualitative behaviour of the solution, and a convergence theorem. The results are compared with numerical experiments at the end of this article.
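The general structure of a method of lines (discretize in space, integrate the resulting ODE system in time) can be illustrated on a far simpler model problem than the Bingham system, here a 1D heat equation with homogeneous Dirichlet ends; grid size and final time are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch: u_t = u_xx on (0,1), u(0,t) = u(1,t) = 0.
n, L = 50, 1.0
x = np.linspace(0.0, L, n + 2)
h = x[1] - x[0]
u0 = np.sin(np.pi * x[1:-1])          # interior unknowns only

def rhs(t, u):
    """Second-order central difference of u_xx on the interior nodes."""
    up = np.concatenate(([0.0], u, [0.0]))   # Dirichlet boundary values
    return (up[2:] - 2.0*up[1:-1] + up[:-2]) / h**2

sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF", rtol=1e-8, atol=1e-10)
u_end = sol.y[:, -1]
# The exact solution for this initial condition decays like exp(-pi^2 t).
```

For the Bingham model the spatial operator is nonlinear and nonsmooth at the yield stress, but the same semi-discrete structure applies.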
DEFF Research Database (Denmark)
Ganji, S. S.; Barari, Amin; Ibsen, Lars Bo
2010-01-01
…In the current research the authors utilized the Differential Transformation Method (DTM) for solving the nonlinear problem and compared the analytical results with those obtained by the 4th-order Runge-Kutta method (RK4) as a numerical method. Further illustration embedded in this paper shows the ability…
DEFF Research Database (Denmark)
Ganji, S.; Barari, Amin; Ibsen, Lars Bo
2012-01-01
…In the current research the authors utilized the Differential Transformation Method (DTM) for solving the nonlinear problem and compared the analytical results with those obtained by the 4th-order Runge-Kutta method (RK4) as a numerical method. Further illustration embedded in this paper shows the ability…
Some Considerations on the Problem of Non-Steady State Traffic Flow Optimization
2007-01-01
Poor traffic signal timing accounts for an estimated 10 percent of all traffic delay, about 300 million vehicle-hours on major roadways alone. Americans agree that this is a problem: one U.S. Department of Transportation (DOT) survey found tha...
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields; hence, it can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the
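Classical LH sampling itself can be sketched briefly; the stratum count and the lognormal marginal below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n, d, rng):
    """Latin hypercube sample of n points in [0,1]^d: each of the n
    equal-probability strata of every coordinate is hit exactly once."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # one point per stratum
    for k in range(d):
        rng.shuffle(u[:, k])                              # decouple coordinates
    return u

rng = np.random.default_rng(0)
u = latin_hypercube(8, 2, rng)

# Map to conductivity values via the inverse CDF of an assumed lognormal
# marginal (log-mean 0, log-standard deviation 1).
K = np.exp(norm.ppf(u))
```

Compared with simple random sampling, every marginal stratum is visited exactly once, which is why fewer realizations suffice for a representative spread of conductivities.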
NAMMU: finite element program for coupled heat and groundwater flow problems
International Nuclear Information System (INIS)
Rae, J.; Robinson, P.C.
1979-11-01
NAMMU is a computer program which will calculate the evolution in time of coupled water and heat flow in a porous medium. It is intended to be used primarily for modelling studies of underground nuclear waste repositories. NAMMU is based on the Galerkin-Finite-element method and has self-adjusting time stepping. The present version is written for 2-dimensional cartesian or cylindrical coordinate systems. It has been checked against two calculations from the KBS study and an exact solution by Hodgkinson for a very idealised repository design. (author)
Søe-Knudsen, Alf; Sorokin, Sergey
2011-06-01
This rapid communication is concerned with justification of the 'rule of thumb', well known to users of the finite element (FE) method in dynamics, for assessing the accuracy of the wave finite element (WFE) method. An explicit formula is derived linking the size of the window in the dispersion diagram where the WFE method is trustworthy with the coarseness of the FE mesh employed. It is obtained by comparing the exact Pochhammer-Chree solution for an elastic rod with a circular cross-section against its WFE approximations. It is shown that the WFE power flow predictions are also valid within this window.
Directory of Open Access Journals (Sweden)
Johan Soewanda
2007-01-01
This paper discusses the application of a Robust Hybrid Genetic Algorithm to solve a flow-shop scheduling problem. The proposed algorithm attempts to reach the minimum makespan. The case of PT. FSCM Manufacturing Indonesia Plant 4 was used as a test case to evaluate the performance of the proposed algorithm, which was compared to Ant Colony, Genetic-Tabu, Hybrid Genetic Algorithm, and the company's algorithm. We found that the Robust Hybrid Genetic Algorithm produces statistically better results than the company's, and results equivalent to Ant Colony, Genetic-Tabu, and Hybrid Genetic. In addition, the Robust Hybrid Genetic Algorithm required less computational time than the Hybrid Genetic Algorithm.
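For orientation, the makespan objective that all of the compared algorithms minimize can be evaluated for a permutation flow shop with a short recurrence. This is an illustrative sketch of the objective function only, not the paper's algorithm:

```python
def makespan(times, order):
    # times[j][m] = processing time of job j on machine m.
    # A job may start on machine m only after it finishes on
    # machine m-1 and machine m has finished the previous job.
    n_machines = len(times[0])
    finish = [0] * n_machines  # completion time of the last job per machine
    for j in order:
        for m in range(n_machines):
            prev = finish[m - 1] if m else 0
            finish[m] = max(finish[m], prev) + times[j][m]
    return finish[-1]
```

A metaheuristic such as a genetic algorithm then searches over permutations `order` for the one minimizing this value.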
Reduced-Contrast Approximations for High-Contrast Multiscale Flow Problems
Chung, Eric T.; Efendiev, Yalchin
2010-01-01
In this paper, we study multiscale methods for high-contrast elliptic problems where the media properties change dramatically. The disparity in the media properties (also referred to as high contrast in the paper) introduces an additional scale that needs to be resolved in multiscale simulations. First, we present a construction that uses an integral equation to represent the high-contrast component of the solution. This representation involves solving an integral equation along the interface where the coefficients are discontinuous. The integral representation suggests some multiscale approaches that are discussed in the paper. One of these approaches entails the use of interface functions in addition to multiscale basis functions representing the heterogeneities without high contrast. In this paper, we propose an approximation for the solution of the integral equation using the interface problems in reduced-contrast media. Reduced-contrast media are obtained by lowering the variance of the coefficients. We also propose a similar approach for the solution of the elliptic equation without using an integral representation. This approach is simpler to use in the computations because it does not involve setting up integral equations. The main idea of this approach is to approximate the solution of the high-contrast problem by the solutions of the problems formulated in reduced-contrast media. In this approach, a rapidly converging sequence is proposed where only problems with lower contrast are solved. It was shown that this sequence possesses a convergence rate inversely proportional to the reduced contrast. This approximation allows choosing the reduced-contrast problem based on the coarse-mesh size as discussed in this paper. We present a simple application of this approach to homogenization of elliptic equations with high-contrast coefficients. The presented approaches are limited to the cases where there are sharp changes in the contrast (i.e., the high
A free boundary problem describing the saturated-unsaturated flow in a porous medium
Directory of Open Access Journals (Sweden)
Gabriela Marinoschi
2004-01-01
This paper presents a functional approach to a nonlinear model describing the complete physical process of water infiltration into an unsaturated soil, including the occurrence of saturation and the advance of the wetting front. The model introduced in this paper involves a multivalued operator covering the simultaneous saturated and unsaturated flow behaviors and enhances the study of the displacement of the free boundary between these two flow regimes. The model is based on Richards' equation written in pressure form, with an initial condition and boundary conditions which in this work express the inflow due to rain on the soil surface on the one hand, and characterize a certain permeability corresponding to the underground boundary on the other. Existence, uniqueness, and regularity results for the transformed model in diffusive form, that is, for the moisture of the soil, and the existence of the weak solution for the pressure form are proved in the 3D case. The main part of the paper focuses on the existence of the free boundary between the saturated and unsaturated parts of the soil; this is proved, in the 1D case, under certain stronger assumptions on the initial data and boundary conditions.
Two dimensional heat transfer problem in flow boiling in a rectangular minichannel
Directory of Open Access Journals (Sweden)
Hożejowska Sylwia
2015-01-01
The paper presents mathematical modelling of flow boiling heat transfer in a rectangular minichannel asymmetrically heated by a thin and one-sided enhanced foil. Both surfaces are available for observation due to openings covered with glass sheets. Thus, changes in the colour of the plain foil surface can be registered and then processed. The plain side of the heating foil is covered with a base coat and liquid crystal paint. Observation of the opposite, enhanced surface of the minichannel allows for identification of the gas-liquid two-phase flow patterns and vapour quality. A two-dimensional mathematical model of heat transfer in three subsequent layers (glass sheet, heating foil, liquid) was proposed. Heat transfer in all these layers was described with the respective equations: the Laplace equation, the Poisson equation and the energy equation, subject to boundary conditions corresponding to the observed physical process. The solutions (temperature distributions in all three layers) were obtained by the Trefftz method. Additionally, the temperature of the boiling liquid was obtained by the homotopy perturbation method (HPM) combined with the Trefftz method. The heat transfer coefficient, derived from the Robin boundary condition, was estimated in both approaches. The results of the two methods show very good agreement, especially when restricted to the thermal sublayer.
Design solutions to interface flow problems: Text - List of symbols - References
International Nuclear Information System (INIS)
1986-01-01
All published proposals for the deep level burial of radioactive waste recognise that the access shafts, tunnels and boreholes must be sealed, and that the sealing of these openings plays an integral role in the overall isolation of the waste. Previous studies have identified the interface between the host ground formation and the various sealing materials as potential defects in the overall quality of the waste isolation. The significance of groundwater flow at and near the interface has been assessed for representative conditions in generic repository materials. A range of design options to minimise the significance of flow in the interface zone have been proposed, and the most practical of these options have been selected for quantitative analysis. It has been found that isolated high impermeability collars are of limited value unless a highly effective method of minimising ground disturbance during excavation can be developed. It has also been found that control of radionuclide migration by sorptive processes provides an attractive option. The effect of various geometrical arrangements of sorptive materials has been investigated. Consideration has also been given to the particular conditions in the near field, to the behaviour of weak plastic clay host formations and to the mechanical interaction between the backfill material and the host formation
Mathematical modelling and numerical resolution of multi-phase compressible fluid flows problems
International Nuclear Information System (INIS)
Lagoutiere, Frederic
2000-01-01
This work deals with Eulerian compressible multi-species fluid dynamics, the species being either mixed or separated (with interfaces). The document is composed of three parts. The first part is devoted to the numerical resolution of model problems: the advection equation, the Burgers equation, and the Euler equations, in dimensions one and two. The goal is to find a precise method, especially for discontinuous initial conditions, and we develop non-dissipative algorithms. They are based on a downwind finite-volume discretization under some stability constraints. The second part addresses the mathematical modelling of fluid mixtures. We construct and analyse a set of multi-temperature and multi-pressure models that are entropic, symmetrizable, and hyperbolic, though not always conservative. In the third part, we apply the ideas developed in the first part (downwind discretization) to the numerical resolution of the partial differential problems constructed for fluid mixtures in the second part. We present some numerical results in dimensions one and two. (author)
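The non-dissipative downwind algorithms of the thesis build on the standard finite-volume advection framework. For orientation, the classical first-order upwind update for the 1-D linear advection equation u_t + a u_x = 0 with a > 0 (a baseline sketch, not the limited downwind scheme developed in the work) is:

```python
def upwind_step(u, c):
    # One explicit step of first-order upwind for u_t + a u_x = 0, a > 0,
    # on a periodic grid; c = a*dt/dx is the CFL number (stable for c <= 1).
    # Python's u[i-1] at i = 0 wraps to u[-1], giving periodicity for free.
    return [ui - c * (ui - u[i - 1]) for i, ui in enumerate(u)]
```

At c = 1 the scheme propagates a profile exactly one cell per step; for c < 1 it is stable but diffusive, which is precisely the dissipation the downwind approach seeks to avoid.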
Cholet, Cybèle; Charlier, Jean-Baptiste; Moussa, Roger; Steinmann, Marc; Denimal, Sophie
2017-07-01
The aim of this study is to present a framework that provides new ways to characterize the spatio-temporal variability of lateral exchanges for water flow and solute transport in a karst conduit network during flood events, treating both the diffusive wave equation and the advection-diffusion equation with the same mathematical approach, assuming uniform lateral flow and solute transport. A solution to the inverse problem for the advection-diffusion equations is then applied to data from two successive gauging stations to simulate flows and solute exchange dynamics after recharge. The study site is the karst conduit network of the Fourbanne aquifer in the French Jura Mountains, which includes two reaches characterizing the network from sinkhole to cave stream to the spring. The model is applied, after separation of the base from the flood components, on discharge and total dissolved solids (TDSs) in order to assess lateral flows and solute concentrations and compare them to help identify water origin. The results showed various lateral contributions in space - between the two reaches located in the unsaturated zone (R1), and in the zone that is both unsaturated and saturated (R2) - as well as in time, according to hydrological conditions. Globally, the two reaches show a distinct response to flood routing, with important lateral inflows on R1 and large outflows on R2. By combining these results with solute exchanges and the analysis of flood routing parameters distribution, we showed that lateral inflows on R1 are the addition of diffuse infiltration (observed whatever the hydrological conditions) and localized infiltration in the secondary conduit network (tributaries) in the unsaturated zone, except in extreme dry periods. On R2, despite inflows on the base component, lateral outflows are observed during floods. This pattern was attributed to the concept of reversal flows of conduit-matrix exchanges, inducing a complex water mixing effect in the saturated zone
Institute of Scientific and Technical Information of China (English)
毕春加
2005-01-01
In this paper, we establish the maximum norm estimates of the solutions of the finite volume element method (FVE) based on the P1 conforming element for the non-selfadjoint and indefinite elliptic problems.
Detection technique of radioactive tracer and its application to the flow problems
International Nuclear Information System (INIS)
Sato, Otomaru; Kato, Masao
1978-01-01
In a radioactive tracer experiment, the nature of the system and the required precision are the two key factors that determine the amount of tracer needed, which should be kept as low as possible to meet environmental regulations. The former factor is concerned with the isotope dilution during the experiment and the latter with counting techniques. In part 1, some counting techniques are investigated, while three field experiments are described in part 2. Chemical treatments of water samples are described first in part 1. Recovery of the order of 95% was achieved with Na-24, I-131 and Br-82 by either ion exchange or precipitation techniques. Second, three direct gamma-ray counting techniques are investigated, i.e. the dip counting method, the pipe counting technique, and the plane source counting technique. Third, the counting characteristics of a moving radioactive source were investigated: a small source was stuck on a moving belt and the center of a GM tube faced the belt; the counting rates with and without a collimator were analyzed using a simple equation. In part 2, the first experiment concerns the flow rate of the Sorachi river in summer 1961. Measurements by an underwater detector and from periodically collected samples were compared at every observing station. The second experiment, in 1963, was on the sorption loss of the isotopes in the river. Very little sorption loss was recognized with Br-82, while a sorption loss of 10% was found with Na-24 after 6 km of downstream flow. The isotopes were found to mix transversely after 7 to 10 km of flow. The third experiment concerns the movement of sediments at the Okuma coast in Fukushima prefecture. (J.P.N.)
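River flow-rate measurements of the kind described here are commonly based on tracer dilution gauging. A minimal sketch of the constant-rate-injection mass balance (an illustration of the standard technique, not the exact procedure of this paper; the simplification q_inj much smaller than Q is assumed):

```python
def dilution_discharge(q_inj, c_inj, c_plateau, c_bg=0.0):
    # Constant-rate injection dilution gauging. At the downstream
    # plateau, tracer mass balance gives approximately
    #   q_inj * c_inj + Q * c_bg = (Q + q_inj) * c_plateau,
    # hence Q = q_inj * (c_inj - c_plateau) / (c_plateau - c_bg).
    return q_inj * (c_inj - c_plateau) / (c_plateau - c_bg)
```

For example, injecting 1 L/s of tracer solution at concentration 1e6 units and observing a plateau of 10 units downstream implies a discharge of roughly 100 m3/s when the injection is in m3/s.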
Energy Technology Data Exchange (ETDEWEB)
Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali
2017-07-01
An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, has been proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, with the simultaneous minimization of the maximum job completion time and the average latency functions, has been investigated in a multi-objective manner. In this study, the parameter setting has been carried out using the Taguchi method based on a quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms, and the comparison clearly indicated the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations and unrelated parallel machines, which are the real constraints.
Tice, Ian
2018-04-01
This paper concerns the dynamics of a layer of incompressible viscous fluid lying above a rigid plane and with an upper boundary given by a free surface. The fluid is subject to a constant external force with a horizontal component, which arises in modeling the motion of such a fluid down an inclined plane, after a coordinate change. We consider the problem both with and without surface tension for horizontally periodic flows. This problem gives rise to shear-flow equilibrium solutions, and the main thrust of this paper is to study the asymptotic stability of the equilibria in certain parameter regimes. We prove that there exists a parameter regime in which sufficiently small perturbations of the equilibrium at time t=0 give rise to global-in-time solutions that return to equilibrium exponentially in the case with surface tension and almost exponentially in the case without surface tension. We also establish a vanishing surface tension limit, which connects the solutions with and without surface tension.
Yang, Haijian; Yang, Chao; Sun, Shuyu
2016-07-26
Fully implicit methods are drawing more attention in scientific and engineering applications due to the allowance of large time steps in extreme-scale simulations. When using a fully implicit method to solve two-phase flow problems in porous media, one major challenge is the solution of the resultant nonlinear system at each time step. To solve such nonlinear systems, traditional nonlinear iterative methods, such as the class of the Newton methods, often fail to achieve the desired convergence rate due to the high nonlinearity of the system and/or the violation of the boundedness requirement of the saturation. In this paper, we reformulate the two-phase model as a variational inequality that naturally ensures the physical feasibility of the saturation variable. The variational inequality is then solved by an active-set reduced-space method with a nonlinear elimination preconditioner to remove the highly nonlinear components that often cause the failure of the nonlinear iteration to converge. To validate the effectiveness of the proposed method, we compare it with the classical implicit pressure-explicit saturation method for two-phase flow problems with strong heterogeneity. The numerical results show that our nonlinear solver overcomes the often severe limits on the time step associated with existing methods, results in superior convergence performance, and achieves a reduction in the total computing time by more than one order of magnitude.
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, and costs less, avoiding the playback and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.
Directory of Open Access Journals (Sweden)
Farahmand-Mehr Mohammad
2014-01-01
In this paper, a hybrid flow shop scheduling problem with a new approach considering time lags and sequence-dependent setup times in realistic situations is presented. Since few works have been carried out in this field, the necessity of finding better solutions is a motivation to extend heuristic or meta-heuristic algorithms. This type of production system is found in industries such as food processing, chemical, textile, metallurgical, printed circuit board, and automobile manufacturing. A mixed integer linear programming (MILP) model is proposed to minimize the makespan. Since this problem is known to be NP-hard, a meta-heuristic algorithm, namely a Genetic Algorithm (GA), and three heuristic algorithms (Johnson, SPTCH and Palmer) are proposed. Numerical experiments of different sizes are implemented to evaluate the performance of the presented mathematical programming model and the designed GA in comparison with the heuristic algorithms and a benchmark algorithm. Computational results indicate that the designed GA can produce near-optimal solutions in a short computational time for problems of different sizes.
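Of the heuristics named above, Johnson's rule is simple enough to sketch. For the classical two-machine flow shop it orders jobs with p1 <= p2 first by ascending p1, then the remaining jobs by descending p2; this is a sketch of the textbook rule, not of the paper's adaptation to the hybrid flow shop with time lags:

```python
def johnson_order(jobs):
    # jobs[j] = (p1, p2): processing times on machines 1 and 2.
    # Johnson's rule: jobs with p1 <= p2 scheduled first, sorted by
    # ascending p1; the rest scheduled last, sorted by descending p2.
    front = sorted((j for j, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j, (a, b) in enumerate(jobs) if a > b),
                  key=lambda j: jobs[j][1], reverse=True)
    return front + back
```

Johnson's rule is optimal for the pure two-machine case, which is why it serves as a natural constructive heuristic (and GA seed) in more general flow shop settings.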
Botti, L.; Colombo, A.; Bassi, F.
2017-10-01
In this work we exploit agglomeration based h-multigrid preconditioners to speed-up the iterative solution of discontinuous Galerkin discretizations of the Stokes and Navier-Stokes equations. As a distinctive feature h-coarsened mesh sequences are generated by recursive agglomeration of a fine grid, admitting arbitrarily unstructured grids of complex domains, and agglomeration based discontinuous Galerkin discretizations are employed to deal with agglomerated elements of coarse levels. Both the expense of building coarse grid operators and the performance of the resulting multigrid iteration are investigated. For the sake of efficiency coarse grid operators are inherited through element-by-element L2 projections, avoiding the cost of numerical integration over agglomerated elements. Specific care is devoted to the projection of viscous terms discretized by means of the BR2 dG method. We demonstrate that enforcing the correct amount of stabilization on coarse grids levels is mandatory for achieving uniform convergence with respect to the number of levels. The numerical solution of steady and unsteady, linear and non-linear problems is considered tackling challenging 2D test cases and 3D real life computations on parallel architectures. Significant execution time gains are documented.
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
The maximum entropy method for analytic continuation is extended by introducing the quantum relative entropy. The new method is formulated in terms of matrix-valued functions and is therefore invariant under an arbitrary unitary transformation of the input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
Flows in networks under fuzzy conditions
Bozhenyuk, Alexander Vitalievich; Kacprzyk, Janusz; Rozenberg, Igor Naymovich
2017-01-01
This book offers a comprehensive introduction to fuzzy methods for solving flow tasks in both transportation and networks. It analyzes the problems of minimum cost and maximum flow finding with fuzzy nonzero lower flow bounds, and describes solutions to minimum cost flow finding in a network with fuzzy arc capacities and transmission costs. After a concise introduction to flow theory and tasks, the book analyzes two important problems. The first is related to determining the maximum volume for cargo transportation in the presence of uncertain network parameters, such as environmental changes, measurement errors and repair work on the roads. These parameters are represented here as fuzzy triangular, trapezoidal numbers and intervals. The second problem concerns static and dynamic flow finding in networks under fuzzy conditions, and an effective method that takes into account the network’s transit parameters is presented here. All in all, the book provides readers with a practical reference guide to state-of-...
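The fuzzy flow problems treated in the book generalize the crisp maximum-flow problem, whose solution is the building block for the interval and triangular-number variants. A minimal crisp baseline (Edmonds-Karp on an adjacency-matrix network; an illustrative sketch, not the book's fuzzy method):

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along a shortest (BFS) path
    # in the residual network until no s-t path remains.
    n = len(cap)
    res = [row[:] for row in cap]  # residual capacities
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if res[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # bottleneck capacity along the path found
        aug, v = float('inf'), t
        while v != s:
            aug = min(aug, res[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            res[parent[v]][v] -= aug
            res[v][parent[v]] += aug
            v = parent[v]
        total += aug
```

A fuzzy-capacity formulation would replace each crisp capacity by, e.g., a triangular number and solve crisp subproblems at selected alpha-cuts.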
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system which appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
Directory of Open Access Journals (Sweden)
Jiuping Xu
2012-01-01
The aim of this study is to deal with a minimum cost network flow problem (MCNFP) in a large-scale construction project using a nonlinear multiobjective bilevel model with birandom variables. The main target of the upper level is to minimize both direct and transportation time costs. The target of the lower level is to minimize transportation costs. After an analysis of the birandom variables, an expectation multiobjective bilevel programming model with chance constraints is formulated to incorporate decision makers' preferences. For the identified special conditions, an equivalent crisp model is proposed, and a multiobjective bilevel particle swarm optimization (MOBLPSO) algorithm is developed to solve it. The Shuibuya Hydropower Project is used as a real-world example to verify the proposed approach. Results and analysis are presented to highlight the performance of the MOBLPSO, which is very effective and efficient compared to a genetic algorithm and a simulated annealing algorithm.
Mezentsev, Yu A.; Baranova, N. V.
2018-05-01
A universal economic and mathematical model designed for the determination of optimal strategies for managing the production and logistics subsystems (and their components) of enterprises is considered. The claimed universality allows taking into account, at the system level, both production components, including limitations on the ways of converting raw materials and components into sold goods, and resource and logical restrictions on input and output material flows. The presented model and the generated control problems are developed within the framework of a unified approach that allows one to implement logical conditions of any complexity and to define the corresponding formal optimization tasks. The conceptual meaning of the criteria and limitations used is explained. The generated mixed-programming tasks are shown to belong to the class NP. An approximate polynomial algorithm for solving the posed mixed-programming optimization tasks of realistic dimension and high computational complexity is proposed. Results of testing the algorithm on tasks over a wide range of dimensions are presented.
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai
1998-01-01
This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
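The exact block-LU elimination described above can be sketched compactly. The following pure-Python illustration is dense and exact, whereas the paper constructs approximate Schur complements; the block layout (interior blocks Aii, Aig, Agi and interface block Agg) mirrors the 2 x 2 partitioning in the text:

```python
def solve(M, v):
    # Dense linear solve by Gaussian elimination with partial pivoting.
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def schur_solve(Aii, Aig, Agi, Agg, bi, bg):
    # Block elimination: the interface unknowns xg satisfy
    #   S xg = bg - Agi Aii^{-1} bi,  with  S = Agg - Agi Aii^{-1} Aig,
    # after which the interior unknowns xi are recovered locally.
    ni, ng = len(bi), len(bg)
    Y = [solve(Aii, [Aig[r][c] for r in range(ni)]) for c in range(ng)]  # Aii^{-1} Aig, by column
    S = [[Agg[r][c] - sum(Agi[r][k] * Y[c][k] for k in range(ni))
          for c in range(ng)] for r in range(ng)]
    z = solve(Aii, bi)
    xg = solve(S, [bg[r] - sum(Agi[r][k] * z[k] for k in range(ni)) for r in range(ng)])
    xi = solve(Aii, [bi[r] - sum(Aig[r][c] * xg[c] for c in range(ng)) for r in range(ni)])
    return xi, xg
```

In the paper's setting the interior solves decouple per subdomain and S is only approximated, which is what makes the construction usable as a parallel preconditioner rather than a direct solver.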
Energy Technology Data Exchange (ETDEWEB)
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-04-01
The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton-Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested on a few two-phase flow problems in which phase appearance/disappearance occurs, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. The high-resolution spatial discretization and the second-order fully implicit method also demonstrated their capability to significantly reduce numerical errors.
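A minimal illustration of the Jacobian-free idea, using SciPy's `newton_krylov` on a toy 1-D nonlinear diffusion-reaction system; the equation and grid are assumptions for the sketch, not the paper's two-phase equations:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Nonlinear system F(u) = 0 from discretizing u'' + exp(u) - 10 = 0 on (0, 1)
# with u = 0 at both boundaries. newton_krylov never forms the Jacobian; it
# probes J*v by finite differences, the essence of the JFNK approach.
n = 50
h = 1.0 / (n + 1)

def residual(u):
    d2u = np.empty_like(u)
    d2u[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    d2u[0] = (-2 * u[0] + u[1]) / h**2      # zero boundary value as ghost cell
    d2u[-1] = (u[-2] - 2 * u[-1]) / h**2
    return d2u + np.exp(u) - 10.0           # nonlinear source term

u = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
res_norm = np.linalg.norm(residual(u))
```

The same call structure carries over to much larger systems; only the `residual` function needs to encode the physics.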
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
Generic maximum likely scale selection
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...
Fiedler, Heinrich E.
1991-01-01
Recent works on flow stability and turbulence are reviewed with emphasis on the flow control of free and wall-bounded flows. Axisymmetric jets in counterflow are considered for two characteristic cases: a stable case at low velocity ratios and an unstable case at higher velocity ratios. Among mixing layers, excited layers are covered as well as density-inhomogeneous flows, where countergradient, homogeneous, and cogradient cases are reviewed. The influences of boundary conditions are analyzed, and focus is placed on feedback condition, flow distortion, accelerated flow, and two- and three-dimensional studies. Attention is given to stability investigations and riblets as a means for reducing surface friction in a turbulent flow.
Optimal Control of Polymer Flooding Based on Maximum Principle
Directory of Open Access Journals (Sweden)
Yang Lei
2012-01-01
Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, which involves the performance index as the maximum of the profit, the governing equations as the fluid flow equations of polymer flooding, and the inequality constraint as the polymer concentration limitation. To cope with the optimal control problem (OCP) of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin's weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.
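The adjoint-based gradient method can be sketched on a scalar toy optimal control problem; the dynamics, cost, and step size below are assumptions for illustration, not the polymer-flooding equations:

```python
import numpy as np

# Toy problem: minimize J = ∫0^1 (x^2 + u^2) dt subject to dx/dt = u, x(0) = 1.
# Pontryagin's conditions give the adjoint dλ/dt = -2x with λ(1) = 0, and the
# gradient of J with respect to u(t) is 2u + λ. The known optimum has
# J = tanh(1) ≈ 0.7616.
N, T = 400, 1.0
dt = T / N
u = np.zeros(N)

for _ in range(500):
    # Forward sweep: integrate the state.
    x = np.empty(N + 1); x[0] = 1.0
    for k in range(N):
        x[k + 1] = x[k] + dt * u[k]
    # Backward sweep: integrate the adjoint.
    lam = np.empty(N + 1); lam[-1] = 0.0
    for k in range(N, 0, -1):
        lam[k - 1] = lam[k] + dt * 2 * x[k]
    # Steepest-descent update on the control.
    u -= 0.1 * (2 * u + lam[:-1])

J = dt * np.sum(x[:-1]**2 + u**2)
```

The forward state sweep, backward adjoint sweep, and control update are the same three steps the paper applies to the flooding PDEs.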
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well engineered renewable remote energy system, utilizing the principle of maximum power point tracking, can be more cost-effective, has higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages at larger temperature variations and larger power rated systems are much higher. Other advantages include optimal sizing and system monitor and control
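A perturb-and-observe hill-climbing loop of the kind described can be sketched as follows; the panel power curve is a hypothetical stand-in with a single maximum near 17 V:

```python
# Perturb-and-observe hill climbing: nudge the operating voltage, keep the
# direction while output power rises, reverse it when power falls.
def pv_power(v):
    # Hypothetical panel curve, peaking at v = 5 / (2 * 0.147) ≈ 17.0 V.
    return max(0.0, 5.0 * v - 0.147 * v * v)

v, step = 10.0, 0.5
p_prev = pv_power(v)
for _ in range(200):
    v += step
    p = pv_power(v)
    if p < p_prev:          # overshot the peak: reverse the perturbation
        step = -step
    p_prev = p
```

The tracker ends up oscillating within one step size of the maximum power point, which is the expected steady-state behavior of a fixed-step hill climber.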
International Nuclear Information System (INIS)
Azadeh, A.; Maleki Shoja, B.; Ghanei, S.; Sheikhalishahi, M.
2015-01-01
This research investigates a redundancy-scheduling optimization problem for a multi-state series-parallel system. The system is a flow shop manufacturing system with multi-state machines. Each manufacturing machine may have different performance rates, including perfect performance, decreased performance and complete failure. Moreover, warm standby redundancy is considered for the redundancy allocation problem. Three objectives are considered: (1) minimizing system purchasing cost, (2) minimizing makespan, and (3) maximizing system reliability. A universal generating function is employed to evaluate system performance and overall reliability of the system. Since the problem is in the NP-hard class of combinatorial problems, a genetic algorithm (GA) is used to find optimal/near-optimal solutions. Different test problems are generated to evaluate the effectiveness and efficiency of the proposed approach, which is compared to a simulated annealing optimization method. The results show the proposed approach is capable of finding optimal/near-optimal solutions within a very reasonable time. - Highlights: • A redundancy-scheduling optimization problem for a multi-state series-parallel system. • A flow shop with multi-state machines and warm standby redundancy. • Objectives are to optimize system purchasing cost, makespan and reliability. • Different test problems are generated and evaluated by a unique genetic algorithm. • It locates optimal/near-optimal solutions within a very reasonable time
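The GA machinery can be illustrated on a toy bit-string fitness; the encoding and operators below are generic sketches, not the paper's redundancy/scheduling representation:

```python
import random
random.seed(1)

# Minimal elitist genetic algorithm on the "OneMax" fitness (count of 1-bits),
# standing in for an encoded redundancy/schedule solution.
def fitness(bits):
    return sum(bits)

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

n_bits, pop_size = 40, 30
pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: pop_size // 2]      # truncation selection, parents survive
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(pop_size - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
```

Replacing `fitness` with a makespan/cost/reliability evaluator (and the bit string with the paper's encoding) recovers the structure of the approach described above.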
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.
Hardness and Approximation for Network Flow Interdiction
Chestnut, Stephen R.; Zenklusen, Rico
2015-01-01
In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...
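The objects involved can be illustrated with a plain Edmonds-Karp max-flow routine and a single interdicted arc; the small network below is an assumption for the sketch:

```python
from collections import deque

# Maximum s-t flow by BFS augmenting paths (Edmonds-Karp). Interdiction is
# modeled by zeroing an arc's capacity; the max flow can only decrease.
def max_flow(cap, s, t):
    n = len(cap)
    flow_val = 0
    residual = [row[:] for row in cap]
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] < 0 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] < 0:
            return flow_val
        # Find the bottleneck along the augmenting path, then push flow.
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= aug
            residual[v][u] += aug
        flow_val += aug

cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
before = max_flow(cap, 0, 3)
cap[2][3] = 0                  # the adversary interdicts arc 2 -> 3
after = max_flow(cap, 0, 3)
```

Computing the flow is easy; the hardness result above concerns the adversary's problem of choosing *which* arcs to interdict under a budget.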
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
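The Mean Energy Model has a concrete computational form: among distributions with a prescribed mean energy, entropy is maximized by a Gibbs distribution, whose parameter can be found by bisection. The energies and target mean below are assumptions for the sketch:

```python
import numpy as np

# Maxent under a mean-energy constraint: over states with energies E_i, the
# entropy maximizer with fixed mean energy has the Gibbs form
#   p_i ∝ exp(-beta * E_i),
# and beta is pinned down by the constraint. Mean energy is strictly
# decreasing in beta, so bisection finds it.
E = np.array([0.0, 1.0, 2.0, 3.0])
target_mean = 1.2

def mean_energy(beta):
    w = np.exp(-beta * E)
    return float((E * w).sum() / w.sum())

lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) > target_mean:
        lo = mid            # need a larger beta to lower the mean energy
    else:
        hi = mid
beta = 0.5 * (lo + hi)
p = np.exp(-beta * E)
p /= p.sum()
```

In the game-theoretic reading above, this Gibbs distribution is simultaneously the optimal coding strategy in the Code Length Game.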
Danforth, Jeffrey S
2016-03-01
Behavioral parent training is an evidence-based treatment for problem behavior described as attention-deficit hyperactivity disorder (ADHD), oppositional defiant disorder, and conduct disorder. However, adherence to treatment fidelity and parent performance of the management skills remain an obstacle to optimum outcome. One variable that may limit the effectiveness of the parent training is that demanding behavior management procedures can be deceptively complicated and difficult to perform. Based on outcome research for families of children with co-occurring ADHD and conduct problem behavior, an example of a visual behavior management flow chart is presented. The flow chart may be used to help teach specific behavior management skills to parents. The flow chart depicts a chain of behavior management strategies taught with explanation, modeling, and role-play with parents. The chained steps in the flow chart are elements common to well-known evidence-based behavior management strategies, and perhaps this depiction will serve as a setting event for other behavior analysts to create flow charts for their own parent training. Details of the flow chart steps, as well as examples of specific applications and program modifications, conclude the article.
Maximum neutron flux in thermal reactors
International Nuclear Information System (INIS)
Strugar, P.V.
1968-12-01
The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... Conclusions. MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially...
Liu, Chen; Wang, Qinxue; Yang, Yonghui; Wang, Kelin; Ouyang, Zhu; Li, Yan; Lei, Alin; Yasunari, Tetsuzo
2012-03-30
To diagnose problems that threaten regional sustainability and to devise appropriate treatment measures in China's agro-ecosystems, a study was carried out to quantify the nitrogen (N) flow in China's typical agro-ecosystems and develop potential solutions to the increasing environmental N load. The analysis showed that owing to human activity in the agro-ecosystems of Changjiang River Basin the mean total input of anthropogenic reactive N (i.e. chemical fertiliser, atmospheric deposition and bio-N fixation) increased from 4.41 × 10^9 kg-N in 1980 to 7.61 × 10^9 kg-N in 1990 and then to 1.43 × 10^10 kg-N in 2000, with chemical fertiliser N being the largest contributor to N load. Field investigation further showed that changes in human behaviour and rural urbanisation have caused rural communities to become more dependent on chemical fertilisers. In rural regions, around 4.17 kg-N of per capita annual potential N load as excrement was returned to farmlands and 1.38 kg-N directly discharged into river systems, while in urbanised regions, around 1.00 kg-N of per capita annual potential N load as excrement was returned to farmlands and 5.62 kg-N discharged into river systems in urban areas. The findings of the study suggest that human activities have significantly altered the N cycle in agro-ecosystems of China. With high population density and scarce per capita water resources, non-point source pollution from agro-ecosystems continues to put pressure on aquatic ecosystems. Increasing the rate of organic matter recycling and fertiliser efficiency with limited reliance on chemical fertilisers might yield tremendous environmental benefits. Copyright © 2011 Society of Chemical Industry.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
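The regularized quantity can be computed directly from a joint frequency table; the sketch below estimates the mutual information between responses and labels (the paper embeds an entropy-based estimate of this quantity inside the learning objective):

```python
import numpy as np

# Mutual information I(response; label) from the empirical joint distribution.
def mutual_information(y_true, y_pred):
    labels_t, labels_p = np.unique(y_true), np.unique(y_pred)
    joint = np.zeros((labels_t.size, labels_p.size))
    for t, p in zip(y_true, y_pred):
        joint[np.searchsorted(labels_t, t), np.searchsorted(labels_p, p)] += 1
    joint /= joint.sum()
    pt = joint.sum(axis=1, keepdims=True)   # marginal over true labels
    pp = joint.sum(axis=0, keepdims=True)   # marginal over responses
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (pt @ pp)[nz])))

# A perfect classifier attains I = H(label); an uninformative one attains 0.
y = np.array([0, 0, 1, 1, 1, 0, 1, 0])
mi_perfect = mutual_information(y, y)           # = ln 2 for balanced labels
mi_useless = mutual_information(y, np.zeros_like(y))
```

Maximizing this quantity over classifier parameters is exactly the regularization direction argued for above: responses should be as informative about labels as possible.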
Energy Technology Data Exchange (ETDEWEB)
Krukovsky, P G [Institute of Engineering Thermophysics, National Academy of Sciences of Ukraine, Kiev (Ukraine)
1998-12-31
The method and the software FRIEND, which provide the possibility of solving inverse and inverse design problems on the basis of existing (base) CFD software for the solution of direct problems (in particular, heat-transfer and fluid-flow problems using the software PHOENICS), are described. FRIEND is an independent additional module that widens the operational capacities of the base software unified with this module. This unification does not require any change or addition to the base software. Interfacing of FRIEND and the base software takes place through the input and output files of the base software. A brief description of the computational technique applied for the inverse problem solution, some detailed information on the interfacing of FRIEND and the CFD software, and solution results for test inverse and inverse design problems, obtained using the tandem of the CFD software PHOENICS and FRIEND, are presented. (author) 9 refs.
Energy Technology Data Exchange (ETDEWEB)
Krukovsky, P.G. [Institute of Engineering Thermophysics, National Academy of Sciences of Ukraine, Kiev (Ukraine)
1997-12-31
The method and the software FRIEND, which provide the possibility of solving inverse and inverse design problems on the basis of existing (base) CFD software for the solution of direct problems (in particular, heat-transfer and fluid-flow problems using the software PHOENICS), are described. FRIEND is an independent additional module that widens the operational capacities of the base software unified with this module. This unification does not require any change or addition to the base software. Interfacing of FRIEND and the base software takes place through the input and output files of the base software. A brief description of the computational technique applied for the inverse problem solution, some detailed information on the interfacing of FRIEND and the CFD software, and solution results for test inverse and inverse design problems, obtained using the tandem of the CFD software PHOENICS and FRIEND, are presented. (author) 9 refs.
Weighted Maximum-Clique Transversal Sets of Graphs
Chuan-Min Lee
2011-01-01
A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...
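A small sketch of the objects involved, using brute-force clique enumeration and a greedy transversal on a toy graph; the paper studies exact and structured algorithms, and greedy gives no optimality guarantee in general, though it happens to be optimal here:

```python
from itertools import combinations

# Toy graph with maximum cliques {0,1,2}, {1,2,3}, and {3,4,5}.
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)}
n = 6

def is_clique(vs):
    return all((a, b) in edges or (b, a) in edges for a, b in combinations(vs, 2))

# Brute-force enumeration (fine for tiny graphs only).
cliques = [set(c) for k in range(n, 0, -1)
           for c in combinations(range(n), k) if is_clique(c)]
omega = max(len(c) for c in cliques)            # clique number
max_cliques = [c for c in cliques if len(c) == omega]

# Greedy transversal: repeatedly pick the vertex hitting the most
# still-uncovered maximum cliques.
transversal = set()
uncovered = list(max_cliques)
while uncovered:
    v = max(range(n), key=lambda u: sum(u in c for c in uncovered))
    transversal.add(v)
    uncovered = [c for c in uncovered if v not in c]
```

Here vertex 1 hits the two overlapping triangles and vertex 3 hits the third, so a transversal of size 2 suffices.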
Flow area optimization in point to area or area to point flows
International Nuclear Information System (INIS)
Ghodoossi, Lotfollah; Egrican, Niluefer
2003-01-01
This paper deals with the constructal theory of generation of shape and structure in flow systems connecting one point to a finite size area. The flow direction may be either from the point to the area or the area to the point. The formulation of the problem remains the same if the flow direction is reversed. Two models are used in optimization of the point to area or area to point flow problem: cost minimization and revenue maximization. The cost minimization model enables one to predict the shape of the optimized flow areas, but the geometric sizes of the flow areas are not predictable. That is, as an example, if the area of flow is a rectangle with a fixed area size, optimization of the point to area or area to point flow problem by using the cost minimization model will only predict the height/length ratio of the rectangle, not the height and length themselves. By using the revenue maximization model in optimization of the flow problems, all optimized geometric aspects of the flow areas of interest will be derived as well. The aim of this paper is to optimize the point to area or area to point flow problems in various elemental flow area shapes and various structures of the flow system (various combinations of elemental flow areas) by using the revenue maximization model. The elemental flow area shapes used in this paper are either rectangular or triangular. The forms of the flow area structure, made up of an assembly of optimized elemental flow areas to obtain bigger flow areas, are rectangle-in-rectangle, rectangle-in-triangle, triangle-in-triangle and triangle-in-rectangle. The global maximum revenue, revenue collected per unit flow area and the shape and sizes of each flow area structure have been derived in optimized conditions. The results for each flow area structure have been compared with the results of the other structures to determine the structure that provides better performance. The conclusion is that the rectangle-in-triangle flow area structure
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
Directory of Open Access Journals (Sweden)
Ilyas Khan
The present work is concerned with exact solutions of Stokes' second problem for magnetohydrodynamic (MHD) flow of a Burgers' fluid. The fluid over a flat plate is assumed to be electrically conducting in the presence of a uniform magnetic field applied in the outward transverse direction to the flow. The equations governing the flow are modeled and then solved using the Laplace transform technique. The expressions of the velocity field and tangential stress are developed when the relaxation time satisfies the condition γ = λ²/4 or γ > λ²/4. The obtained closed-form solutions are presented in the form of simple or multiple integrals in terms of Bessel functions and terms with only Bessel functions. The numerical integration is performed and the graphical results are displayed for the involved flow parameters. It is found that the velocity decreases whereas the shear stress increases when the Hartmann number is increased. The solutions corresponding to Stokes' first problem for hydrodynamic Burgers' fluids are obtained as limiting cases of the present solutions. Similar solutions for Stokes' second problem of hydrodynamic Burgers' fluids and those for Newtonian and Oldroyd-B fluids can also be obtained as limiting cases of these solutions.
A. Vandevelde; J.A. Hoogeveen; C.A.J. Hurkens (Cor); J.K. Lenstra (Jan Karel)
2005-01-01
The multiprocessor flow-shop is the generalization of the flow-shop in which each machine is replaced by a set of identical machines. As finding a minimum-length schedule is NP-hard, we set out to find good lower and upper bounds. The lower bounds are based on relaxation of the
Vandevelde, A.; Hoogeveen, J.A.; Hurkens, C.A.J.; Lenstra, J.K.
2005-01-01
The multiprocessor flow-shop is the generalization of the flow-shop in which each machine is replaced by a set of identical machines. As finding a minimum-length schedule is NP-hard, we set out to find good lower and upper bounds. The lower bounds are based on relaxation of the capacities of all
International Nuclear Information System (INIS)
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2015-01-01
Highlights: • Using high-resolution spatial scheme in solving two-phase flow problems. • Fully implicit time integrations scheme. • Jacobian-free Newton–Krylov method. • Analytical solution for two-phase water faucet problem. - Abstract: The majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many nuclear thermal–hydraulics applications, it is desirable to use higher-order numerical schemes to reduce numerical errors. High-resolution spatial discretization schemes provide high order spatial accuracy in smooth regions and capture sharp spatial discontinuity without nonphysical spatial oscillations. In this work, we adapted an existing high-resolution spatial discretization scheme on staggered grids in two-phase flow applications. Fully implicit time integration schemes were also implemented to reduce numerical errors from operator-splitting types of time integration schemes. The resulting nonlinear system has been successfully solved using the Jacobian-free Newton–Krylov (JFNK) method. The high-resolution spatial discretization and high-order fully implicit time integration numerical schemes were tested and numerically verified for several two-phase test problems, including a two-phase advection problem, a two-phase advection with phase appearance/disappearance problem, and the water faucet problem. Numerical results clearly demonstrated the advantages of using such high-resolution spatial and high-order temporal numerical schemes to significantly reduce numerical diffusion and therefore improve accuracy. Our study also demonstrated that the JFNK method is stable and robust in solving two-phase flow problems, even when phase appearance/disappearance exists
Numerical optimization using flow equations
Punk, Matthias
2014-12-01
We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
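The flow-equation idea above can be illustrated with a minimal gradient-flow sketch (this is not the authors' method, which additionally performs a continuous maximum-entropy prior update; the double-well target and all parameters below are illustrative assumptions): a convex start function g is deformed into the target f while the state follows the flow, so the tracked fixed point ends at an extremum of f.

```python
import numpy as np

def homotopy_gradient_flow(grad_f, grad_g, x0, steps=2000, dt=1e-2):
    """Track the minimum of F_t = (1 - t)*g + t*f as t goes 0 -> 1
    by Euler-integrating the gradient-flow equation dx/dt = -grad F_t."""
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        t = (k + 1) / steps                           # homotopy parameter
        x = x - dt * ((1.0 - t) * grad_g(x) + t * grad_f(x))
    return x

# Target: double-well f(x) = (x^2 - 1)^2; easy start g(x) = (x - 0.5)^2.
grad_f = lambda x: 4.0 * x * (x ** 2 - 1.0)
grad_g = lambda x: 2.0 * (x - 0.5)

x_min = homotopy_gradient_flow(grad_f, grad_g, x0=[0.5])
```

Starting near 0.5, the quasi-static deformation guides the state into the right-hand well of f at x = 1 rather than leaving it stranded between the wells.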
1D and 2D Numerical Modeling for Solving Dam-Break Flow Problems Using Finite Volume Method
Directory of Open Access Journals (Sweden)
Szu-Hsien Peng
2012-01-01
Full Text Available The purpose of this study is to model the flow movement in an idealized dam-break configuration. One-dimensional and two-dimensional motion of a shallow flow over a rigid inclined bed is considered. The resulting shallow water equations are solved by finite volumes using the Roe and HLL schemes. At first, the one-dimensional model is considered in the development process. Within the conservative finite volume method, splitting is applied to handle the combination of the hyperbolic and source terms of the shallow water equations, and the 1D scheme is then extended to 2D. The simulations are validated by comparison with flume experiments. Unsteady dam-break flow movement is found to be reasonably well captured by the model. The proposed concept could be further developed to the numerical calculation of non-Newtonian fluid or multilayer fluid flows.
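A generic 1D finite-volume solver with an HLL flux for the frictionless flat-bed dam-break problem can be sketched as follows (this is an illustrative sketch only, not the study's implementation: the grid size, initial depths, and the treatment of the end cells as fixed states are assumptions, and the Roe scheme, source-term splitting, and 2D extension are omitted):

```python
import numpy as np

g = 9.81  # gravitational acceleration

def flux(h, hu):
    """Physical flux of the 1D shallow water equations, U = (h, hu)."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def hll_flux(UL, UR):
    """HLL approximate Riemann solver at one cell interface."""
    hL, huL = UL; hR, huR = UR
    uL, uR = huL / hL, huR / hR
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    sL = min(uL - cL, uR - cR)                 # left wave speed estimate
    sR = max(uL + cL, uR + cR)                 # right wave speed estimate
    FL, FR = flux(hL, huL), flux(hR, huR)
    if sL >= 0:
        return FL
    if sR <= 0:
        return FR
    return (sR * FL - sL * FR + sL * sR * (np.array(UR) - np.array(UL))) / (sR - sL)

def dam_break(nx=200, L=10.0, t_end=0.5, hl=2.0, hr=1.0, cfl=0.9):
    """Explicit first-order FV update; end cells kept at their initial state
    (valid while no wave reaches the boundaries)."""
    dx = L / nx
    h = np.where(np.arange(nx) * dx + 0.5 * dx < L / 2, hl, hr)
    hu = np.zeros(nx)
    t = 0.0
    while t < t_end:
        c = np.abs(hu / h) + np.sqrt(g * h)
        dt = min(cfl * dx / c.max(), t_end - t)          # CFL-limited step
        F = np.array([hll_flux((h[i], hu[i]), (h[i + 1], hu[i + 1]))
                      for i in range(nx - 1)])
        h[1:-1] -= dt / dx * (F[1:, 0] - F[:-1, 0])
        hu[1:-1] -= dt / dx * (F[1:, 1] - F[:-1, 1])
        t += dt
    return h, hu

h, hu = dam_break()
```

The HLL scheme's diffusivity keeps the depth monotonically bounded between the two initial states, with a rightward-moving flow in the expanding star region.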
Abdelfatah, Nasri; Brahim, Gasbaoui
2011-01-01
Reactive power flow is one of the electrical distribution system problems of greatest interest to electrical network researchers: it causes a reduction in active power transmission, decreasing power losses, and an increase in voltage drop. In this research we describe the efficiency of the FLC-GAO approach for solving the optimal power flow (OPF) combinatorial problem. The proposed approach employs two algorithms, a fuzzy logic controller (FLC) algorithm for critical nodal de...
International Nuclear Information System (INIS)
Suzuki, Shunichi; Motoshima, Takayuki; Naemura, Yumi; Kubo, Shin; Kanie, Shunji
2009-01-01
The authors develop a numerical code based on the Local Discontinuous Galerkin Method for transient groundwater flow and reactive solute transport problems, in order to make it possible to perform three-dimensional performance assessments of radioactive waste repositories at the earliest stage possible. The Local Discontinuous Galerkin Method is one of the mixed finite element methods, which are more accurate than standard finite element methods. In this paper, the developed numerical code is applied to several problems for which analytical solutions are available, in order to examine its accuracy and flexibility. The results of the simulations show that the new code gives highly accurate numerical solutions. (author)
On the maximum-entropy method for kinetic equation of radiation, particle and gas
International Nuclear Information System (INIS)
El-Wakil, S.A.; Madkour, M.A.; Degheidy, A.R.; Machali, H.M.
1995-01-01
The maximum-entropy approach is used to calculate some problems in radiative transfer and reactor physics such as the escape probability, the emergent and transmitted intensities for a finite slab as well as the emergent intensity for a semi-infinite medium. Also, it is employed to solve problems involving spherical geometry, such as luminosity (the total energy emitted by a sphere), neutron capture probability and the albedo problem. The technique is also employed in the kinetic theory of gases to calculate the Poiseuille flow and thermal creep of a rarefied gas between two plates. Numerical calculations are achieved and compared with the published data. The comparisons demonstrate that the maximum-entropy results are in good agreement with the exact ones. (orig.)
Effect of flow conditions on flow accelerated corrosion in pipe bends
International Nuclear Information System (INIS)
Mazhar, H.; Ching, C.Y.
2015-01-01
Flow Accelerated Corrosion (FAC) in piping systems is a safety and reliability problem in the nuclear industry. In this study, the pipe wall thinning rates and development of surface roughness in pipe bends are compared for single phase and two phase annular flow conditions. The FAC rates were measured using the dissolution of test sections cast from gypsum in water with a Schmidt number of 1280. The change in location and levels of maximum FAC under single phase and two phase flow conditions are examined. The comparison of the relative roughness indicates a higher effect for the surface roughness in single phase flow than in two phase flow. (author)
International Nuclear Information System (INIS)
1986-02-01
All published proposals for the deep level burial of radioactive waste recognise that the access shafts, tunnels and boreholes must be sealed, and that the sealing of these openings plays an integral role in the overall isolation of the waste. Previous studies have identified the interface between the host ground formation and the various sealing materials as potential defects in the overall quality of the waste isolation. The significance of groundwater flow at and near the interface has been assessed for representative conditions in generic repository materials. A range of design options to minimise the significance of flow in the interface zone have been proposed, and the most practical of these options have been selected for quantitative analysis. It has been found that isolated high impermeability collars are of limited value unless a highly effective method of minimising ground disturbance during excavation can be developed. It has also been found that control of radionuclide migration by sorptive processes provides an attractive option. The effect of various geometrical arrangements of sorptive materials has been investigated. Consideration has also been given to the particular conditions in the near field, to the behaviour of weak plastic clay host formations and to the mechanical interaction between the backfill material and the host formation. (author)
International Nuclear Information System (INIS)
Bryce, W.M.
1977-10-01
NEA/CSNI Standard Problem 3 consists of the modelling of an experiment on the IETI-1 rig, in which there is initially flow upwards through a feeder, heated section and riser. The inlet and outlet are then closed and a breach opened at the bottom so that the flow reverses and the rig depressurises. Calculations of this problem by many countries using several computer codes have been reported and show a wide spread of results. The purpose of the study reported here was the following. First, to show the sensitivity of the calculation of Standard Problem 3. Second, to perform an ab initio best estimate calculation using the RELAP-UK Mark IV code with the standard recommended options, and third, to use the results of the sensitivity study to show where tuning of the RELAP-UK Mark IV recommended model options was required. This study has shown that the calculation of Standard Problem 3 is sensitive to model assumptions and that the use of the loss-of-coolant accident code RELAP-UK Mk IV with the standard recommended model options predicts the experimental results very well over most of the transient. (U.K.)
Directory of Open Access Journals (Sweden)
Rinto Yusriski
2015-09-01
Full Text Available This research discusses an integer batch scheduling problem for a single machine with position-dependent batch processing times due to the simultaneous effect of learning and forgetting. The decision variables are the number of batches, the batch sizes, and the sequence of the resulting batches. The objective is to minimize total actual flow time, defined as the total interval time between the arrival times of parts in all respective batches and their common due date. Two algorithms are proposed to solve the problem. The first is developed using the Integer Composition method, and it produces an optimal solution. Since the first algorithm has a worst-case time complexity of O(n·2^(n-1)), this research proposes the second algorithm. It is a heuristic algorithm based on the Lagrange Relaxation method. Numerical experiments show that the heuristic algorithm gives outstanding results.
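The Integer Composition enumeration mentioned above can be sketched for a drastically simplified version of the problem (a fixed setup plus a linear per-part batch time, no learning or forgetting effects; `setup` and `p` are made-up parameters, not the paper's model), which also makes the 2^(n-1) search space explicit:

```python
from itertools import combinations

def compositions(n):
    """All ordered splits of n identical parts into consecutive batches."""
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = (0,) + cuts + (n,)
            yield [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]

def total_flow_time(batches, p=1.0, setup=2.0):
    """Toy objective: every batch needs a setup plus p per part, and each
    part's flow time is its batch's completion time (all parts arrive at 0)."""
    t = total = 0.0
    for b in batches:
        t += setup + p * b        # completion time of this batch
        total += b * t            # each of its b parts waits until then
    return total

def best_batching(n):
    """Exhaustive search over the 2^(n-1) compositions of n."""
    return min(compositions(n), key=total_flow_time)
```

For n = 4 with these parameters, the search prefers a large batch first ([3, 1], total flow time 23), consistent with front-loaded batch sizes being favored when setups are costly.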
Brickner, Daniel R.; McCombs, Gary B.
2004-01-01
In this article, the authors provide an instructional resource for presenting the indirect method of the statement of cash flows (SCF) in an introductory financial accounting course. The authors focus primarily on presenting a comprehensive example that illustrates the "why" of SCF preparation and show how journal entries and T-accounts can be…
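As a worked illustration of the indirect method described above (the figures below are hypothetical and not taken from the article):

```python
def operating_cash_flow_indirect(net_income, depreciation,
                                 delta_receivables, delta_inventory,
                                 delta_payables):
    """Indirect method: start from net income, add back non-cash expenses,
    then adjust for changes in working capital. Increases in current assets
    use cash (subtract); increases in current liabilities free cash (add)."""
    return (net_income
            + depreciation
            - delta_receivables
            - delta_inventory
            + delta_payables)

# Hypothetical figures: receivables rose 5,000, inventory fell 2,000,
# payables rose 3,000.
ocf = operating_cash_flow_indirect(net_income=50_000, depreciation=8_000,
                                   delta_receivables=5_000,
                                   delta_inventory=-2_000,
                                   delta_payables=3_000)
# 50,000 + 8,000 - 5,000 + 2,000 + 3,000 = 58,000
```

The sign conventions capture the "why": accrual income is translated back into cash by undoing non-cash charges and timing differences.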
B. Koren (Barry); M.R. Lewis; E.H. van Brummelen (Harald); B. van Leer
2001-01-01
textabstractA finite-volume method is presented for the computation of compressible flows of two immiscible fluids at very different densities. The novel ingredient in the method is a two-fluid linearized Godunov scheme, allowing for flux computations in case of different fluids (e.g., water and
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
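The maximum entropy principle reviewed above can be illustrated on Jaynes' classic die problem: among all distributions on faces 1-6 with a prescribed mean, the entropy maximizer has the Gibbs form p_i ∝ exp(λ·i), and the single Lagrange multiplier λ can be found by bisection since the implied mean is monotone in λ. A minimal sketch (not from the abstract; the target mean 4.5 is the standard textbook choice):

```python
import math

def maxent_die(target_mean, lo=-10.0, hi=10.0, tol=1e-12):
    """Maximum-entropy distribution on die faces 1..6 with a fixed mean."""
    faces = range(1, 7)

    def mean(lam):
        w = [math.exp(lam * i) for i in faces]
        return sum(i * wi for i, wi in zip(faces, w)) / sum(w)

    while hi - lo > tol:                 # bisect on the Lagrange multiplier
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)
```

With a mean above the fair value 3.5, the multiplier is positive and the probabilities increase monotonically with the face value, as the principle dictates.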
Density estimation by maximum quantum entropy
International Nuclear Information System (INIS)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
International Nuclear Information System (INIS)
Le Quere, P.; Weisman, C.; Paillere, H.; Vierendeels, J.; Dick, E.; Becker, R.; Braack, M.; Locke, J.
2005-01-01
Heat transfer by natural convection and conduction in enclosures occurs in numerous practical situations including the cooling of nuclear reactors. For large temperature difference, the flow becomes compressible with a strong coupling between the continuity, the momentum and the energy equations through the equation of state, and its properties (viscosity, heat conductivity) also vary with the temperature, making the Boussinesq flow approximation inappropriate and inaccurate. There are very few reference solutions in the literature on non-Boussinesq natural convection flows. We propose here a test case problem which extends the well-known De Vahl Davis differentially heated square cavity problem to the case of large temperature differences for which the Boussinesq approximation is no longer valid. The paper is split in two parts: in this first part, we propose as yet unpublished reference solutions for cases characterized by a non-dimensional temperature difference of 0.6, Ra = 10^6 (constant property and variable property cases) and Ra = 10^7 (variable property case). These reference solutions were produced after a first international workshop organized by Cea and LIMSI in January 2000, in which the above authors volunteered to produce accurate numerical solutions from which the present reference solutions could be established. (authors)
Directory of Open Access Journals (Sweden)
AREF MALEKI-DARONKOLAEI
2013-10-01
Full Text Available This article considers a three-stage assembly flowshop scheduling problem minimizing the weighted sum of mean completion time and makespan, with sequence-dependent setup times at the first stage and blocking times between stages. To tackle this NP-hard problem, two meta-heuristic algorithms are presented. The novelty of our approach is to develop a variable neighborhood search algorithm (VNS) and a well-known simulated annealing (SA) for the problem. Furthermore, to enhance the performance of the SA, its parameters are optimized using the Taguchi method, whereas the VNS has only one parameter, which is set without Taguchi. The computational results show that the proposed VNS is better than SA in mean and standard deviation for all problem sizes, but SA outperforms VNS in CPU time.
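A minimal sketch of the SA component for a permutation scheduling problem (using ordinary flow-shop makespan rather than the three-stage assembly objective with setups and blocking studied above; the instance, cooling schedule, and swap neighborhood are illustrative assumptions):

```python
import math
import random

def makespan(perm, p):
    """Flow-shop makespan; p[j][m] = processing time of job j on machine m."""
    c = [0] * len(p[0])
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, len(c)):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def anneal(p, iters=5000, t0=10.0, alpha=0.999, seed=1):
    """Simulated annealing with a swap neighbourhood and geometric cooling."""
    rng = random.Random(seed)
    perm = list(range(len(p)))
    cur = best = makespan(perm, p)
    best_perm, t = perm[:], t0
    for _ in range(iters):
        i, j = rng.sample(range(len(p)), 2)
        perm[i], perm[j] = perm[j], perm[i]          # propose a swap
        cand = makespan(perm, p)
        if cand <= cur or rng.random() < math.exp(-(cand - cur) / t):
            cur = cand                               # accept (Metropolis rule)
            if cur < best:
                best, best_perm = cur, perm[:]
        else:
            perm[i], perm[j] = perm[j], perm[i]      # reject: undo the swap
        t *= alpha
    return best_perm, best

# Toy 4-job, 2-machine instance (Johnson's rule gives optimum makespan 11)
jobs = [[3, 2], [1, 4], [2, 2], [4, 1]]
seq, best = anneal(jobs)
```

On this tiny instance SA recovers the two-machine optimum; the practical tuning effort (here hidden in `t0`, `alpha`, `iters`) is exactly what the article's Taguchi step addresses.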
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...
Arbogast, Todd
2012-01-01
Motivated by possible generalizations to more complex multiphase multicomponent systems in higher dimensions, we develop an Eulerian-Lagrangian numerical approximation for a system of two conservation laws in one space dimension modeling a simplified two-phase flow problem in a porous medium. The method is based on following tracelines, so it is stable independent of any CFL constraint. The main difficulty is that it is not possible to follow individual tracelines independently. We approximate tracing along the tracelines by using local mass conservation principles and self-consistency. The two-phase flow problem is governed by a system of equations representing mass conservation of each phase, so there are two local mass conservation principles. Our numerical method respects both of these conservation principles over the computational mesh (i.e., locally), and so is a fully conservative traceline method. We present numerical results that demonstrate the ability of the method to handle problems with shocks and rarefactions, and to do so with very coarse spatial grids and time steps larger than the CFL limit. © 2012 Society for Industrial and Applied Mathematics.
CSIR Research Space (South Africa)
Bogaers, Alfred EJ
2016-09-01
Full Text Available ... the respective outward pointing normals along Γs and Γf; if the solid and fluid interfaces are non-matching, Γs ≠ Γf. In this paper, the fluid operator F is solved using OpenFOAM, where Calculix is used for the structural analysis. The interface load and motion... differ only on the basis of how information from multiple time steps is retained. 4 Test Problems The test problems in the sections to follow have been performed using OpenFOAM [1] for the fluid domain and Calculix [22] for the structural domain...
International Nuclear Information System (INIS)
Bilello, J.C.; Liu, J.M.
1978-01-01
Progress in an investigation of the application of microdynamics and lattice mechanics to the problems in plastic flow and fracture is described. The research program consisted of both theoretical formulations and experimental measurements of a number of intrinsic material parameters in bcc metals and alloys including surface energy, phonon-dispersion curves for dislocated solids, dislocation-point defect interaction energy, slip initiation and microplastic flow behavior. The study has resulted in an improved understanding in the relationship among the experimentally determined fracture surface energy, the intrinsic cohesive energy between atomic planes, and the plastic deformation associated with the initial stages of crack propagation. The values of intrinsic surface energy of tungsten, molybdenum, niobium and niobium-molybdenum alloys, deduced from the measurements, serve as a starting point from which fracture toughness of these materials in engineering service may be intelligently discussed
Directory of Open Access Journals (Sweden)
S. Vignesh
2017-04-01
Full Text Available Flow-based erosion-corrosion problems are very common in fluid handling equipment such as propellers, impellers, and pumps in warships and submarines. Though there are many coating materials available to combat erosion-corrosion damage in the above components, iron-based amorphous coatings are considered more effective for combating erosion-corrosion problems. The high velocity oxy-fuel (HVOF) spray process is considered a better process for depositing iron-based amorphous powders. In this investigation, an iron-based amorphous metallic coating was developed on a 316 stainless steel substrate using the HVOF spray technique. Empirical relationships were developed to predict the porosity and microhardness of the iron-based amorphous coating, incorporating HVOF spray parameters such as oxygen flow rate, fuel flow rate, powder feed rate, carrier gas flow rate, and spray distance. Response surface methodology (RSM) was used to identify the optimal HVOF spray parameters to attain a coating with minimum porosity and maximum hardness.
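The RSM step described above amounts to fitting the standard second-order model by least squares and locating its stationary point. A sketch for two factors (the synthetic data below stands in for the HVOF measurements; none of the numbers relate to the actual spray parameters):

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of the second-order RSM model
    y ≈ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def stationary_point(coef):
    """Candidate optimum of the fitted surface: solve grad = 0."""
    _, b1, b2, b11, b22, b12 = coef
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])   # Hessian of the model
    return np.linalg.solve(H, -np.array([b1, b2]))

# Synthetic "experiment": responses from a surface peaking at (1, -0.5)
X = np.array([(a, b) for a in (-1.0, 0.0, 1.0, 2.0)
                     for b in (-1.5, -0.5, 0.5, 1.5)])
y = 5.0 - (X[:, 0] - 1.0) ** 2 - 2.0 * (X[:, 1] + 0.5) ** 2
s = stationary_point(fit_quadratic_surface(X, y))
```

Whether the stationary point is a maximum, minimum, or saddle is then read off from the Hessian's eigenvalues, which is the usual canonical analysis in RSM.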
Hoogeveen, J.A.; Velde, van de S.L.
1998-01-01
We consider a scheduling problem introduced by Ahmadi et al., Batching and scheduling jobs on batch and discrete processors, Operation Research 40 (1992) 750–763, in which each job has to be prepared before it can be processed. The preparation is performed by a batching machine; it can prepare at
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
Energy Technology Data Exchange (ETDEWEB)
Ancalla, Lourdes Pilar Zaragoza
2005-04-15
The reconstruction of the pin-by-pin power density distribution in a heterogeneous fuel element of a nuclear reactor core is a subject that has long been studied in the reactor physics area. Several methods exist for this reconstruction; one of them is the Maximum Entropy method, which, besides being an optimization method that finds the best among all possible solutions, uses Lagrange multipliers to obtain the distribution of the fluxes on the faces of the fuel element. This flux distribution on the faces is then used as a boundary condition in the calculation of a detailed flux distribution inside the fuel element. In this work, the heterogeneous element was first homogenized. The multiplication factor and the nodal average values of the flux and net current were then computed with the NEM2D program. These nodal average values were then used in the pin-by-pin reconstruction of the flux distribution inside the fuel element. The results obtained were acceptable when compared with those obtained using a fine mesh. (author)
Liu, Weibo; Jin, Yan; Price, Mark
2016-10-01
A new heuristic based on the Nawaz-Enscore-Ham algorithm is proposed in this article for solving a permutation flow-shop scheduling problem. A new priority rule is proposed by accounting for the average, mean absolute deviation, skewness and kurtosis, in order to fully describe the distribution style of processing times. A new tie-breaking rule is also introduced for achieving effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate better solution quality of the proposed algorithm compared to existing benchmark heuristics.
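For reference, the classic NEH construction that the proposed heuristic modifies (with its new priority and tie-breaking rules, which are not reproduced here) can be sketched as follows; the 4-job instance is illustrative:

```python
def makespan(seq, p):
    """Flow-shop makespan; p[j][m] = processing time of job j on machine m."""
    c = [0] * len(p[0])
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, len(c)):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    """NEH: sort jobs by decreasing total processing time, then build the
    sequence by inserting each job at its best position."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

jobs = [[3, 2], [1, 4], [2, 2], [4, 1]]
seq, ms = neh(jobs)
```

The article's contribution replaces the plain total-time priority with one built from the mean, mean absolute deviation, skewness, and kurtosis of each job's processing times, and adds a tie-breaking rule for the insertion step.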
Liu, Tianyang; Chan, Hiu Ning; Grimshaw, Roger; Chow, Kwok Wing
2017-11-01
The spatial structure of small disturbances in stratified flows without background shear, usually named the "Taylor-Goldstein equation", is studied by employing the Boussinesq approximation (variation in density ignored except in the buoyancy). Analytical solutions are derived for special wavenumbers when the Brunt-Väisälä frequency is quadratic in hyperbolic secant, by comparison with coupled systems of nonlinear Schrödinger equations intensively studied in the literature. Cases of coupled Schrödinger equations with four, five and six components are utilized as concrete examples. Dispersion curves for arbitrary wavenumbers are obtained numerically. The computations of the group velocity, second harmonic, induced mean flow, and the second derivative of the angular frequency can all be facilitated by these exact linear eigenfunctions of the Taylor-Goldstein equation in terms of hyperbolic functions, leading to a cubic Schrödinger equation for the evolution of a wavepacket. The occurrence of internal rogue waves can be predicted if the dispersion and cubic nonlinearity terms of the Schrödinger equations are of the same sign. Partial financial support has been provided by the Research Grants Council contract HKU 17200815.
Directory of Open Access Journals (Sweden)
Christian F. Janßen
2015-07-01
Full Text Available This contribution is dedicated to demonstrating the high potential and manifold applications of state-of-the-art computational fluid dynamics (CFD) tools for free-surface flows in civil and environmental engineering. All simulations were performed with the academic research code ELBE (efficient lattice Boltzmann environment, http://www.tuhh.de/elbe). The ELBE code follows the supercomputing-on-the-desktop paradigm and is especially designed for local supercomputing, without tedious accesses to supercomputers. ELBE uses graphics processing units (GPUs) to accelerate the computations and can be used in a single GPU-equipped workstation of, e.g., a design engineer. The code has been successfully validated in very different fields, mostly related to naval architecture and mechanical engineering. In this contribution, we give an overview of past and present applications with practical relevance for civil engineers. The presented applications are grouped into three major categories: (i) tsunami simulations, considering wave propagation, wave runup, inundation and debris flows; (ii) dam break simulations; and (iii) numerical wave tanks for the calculation of hydrodynamic loads on fixed and moving bodies. This broad range of applications, in combination with accurate numerical results and very competitive times to solution, demonstrates that modern CFD tools in general, and the ELBE code in particular, can be a helpful design tool for civil and environmental engineers.
Maximum neutron flux at thermal nuclear reactors
International Nuclear Information System (INIS)
Strugar, P.
1968-10-01
Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in a higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements. The weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Thus the problem of determining the maximum neutron flux becomes a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself. [sr]
International Nuclear Information System (INIS)
Kh'yuitt, G.
1980-01-01
An introduction to the problem of two-phase flows is presented. Flow regimes arising in two-phase flows are described, and a classification of these regimes is given. The structures of vertical and horizontal two-phase flows and a method of their identification using regime maps are considered. The limits of application of this method are discussed. The flooding phenomenon and the phenomenon of flow direction change (flow reversal), the interrelation of these phenomena, as well as the transitions from the slug regime to the churn regime and from the churn regime to the annular regime in vertical flows, are described. Problems of phase transitions and equilibrium are discussed. Flow regimes in tubes carrying evaporating liquid are described. [ru]
Rosatti, Giorgio; Zugliani, Daniel
2015-03-01
In a two-phase free-surface flow, the transition from a mobile-bed condition to a fixed-bed one (and vice versa) occurs at a sharp interface across which the relevant system of partial differential equations changes abruptly. This leads to the possibility of conceiving a new type of Riemann Problem (RP), which we have called Composite Riemann Problem (CRP), where not only the initial constant values of the variables but also the system of equations change from left to right of a discontinuity. In this paper, we present a strategy for solving a CRP by reducing it to a standard RP of a single, composite system of equations. This can be obtained by combining the two original systems by means of a suitable weighting function, namely the erodibility variable, and the introduction of an appropriate differential equation for this quantity. In this way, the CRP problem can be analyzed theoretically with standard methods, and the features of the solutions can be clearly identified. In particular, a stationary contact wave is able to correctly describe the sharp transition between mobile- and fixed-bed conditions. A finite volume scheme based on the Multiple Averages Generalized Roe approach (Rosatti and Begnudelli (2013) [22]) was used to numerically solve the fixed-mobile CRP. Several test cases demonstrate the effectiveness, exact well-balancedness and high accuracy of the scheme when applied to problems that fall within the physical range of applicability of the relevant mathematical model.
Energy Technology Data Exchange (ETDEWEB)
Doisneau, François, E-mail: fdoisne@sandia.gov; Arienti, Marco, E-mail: marient@sandia.gov; Oefelein, Joseph C., E-mail: oefelei@sandia.gov
2017-01-15
For sprays, as described by a kinetic disperse phase model strongly coupled to the Navier–Stokes equations, the resolution strategy is constrained by accuracy objectives, robustness needs, and the computing architecture. In order to leverage the good properties of the Eulerian formalism, we introduce a deterministic particle-based numerical method to solve transport in physical space, which is simple to adapt to the many types of closures and moment systems. The method is inspired by the semi-Lagrangian schemes, developed for Gas Dynamics. We show how semi-Lagrangian formulations are relevant for a disperse phase far from equilibrium and where the particle–particle coupling barely influences the transport; i.e., when particle pressure is negligible. The particle behavior is indeed close to free streaming. The new method uses the assumption of parcel transport and avoids to compute fluxes and their limiters, which makes it robust. It is a deterministic resolution method so that it does not require efforts on statistical convergence, noise control, or post-processing. All couplings are done among data under the form of Eulerian fields, which allows one to use efficient algorithms and to anticipate the computational load. This makes the method both accurate and efficient in the context of parallel computing. After a complete verification of the new transport method on various academic test cases, we demonstrate the overall strategy's ability to solve a strongly-coupled liquid jet with fine spatial resolution and we apply it to the case of high-fidelity Large Eddy Simulation of a dense spray flow. A fuel spray is simulated after atomization at Diesel engine combustion chamber conditions. The large, parallel, strongly coupled computation proves the efficiency of the method for dense, polydisperse, reacting spray flows.
On the effect of standard PFEM remeshing on volume conservation in free-surface fluid flow problems
Franci, Alessandro; Cremonesi, Massimiliano
2017-07-01
The aim of this work is to analyze the remeshing procedure used in the particle finite element method (PFEM) and to investigate how this operation may affect the numerical results. The PFEM remeshing algorithm combines the Delaunay triangulation and the Alpha Shape method to guarantee a good quality of the Lagrangian mesh also in large deformation processes. However, this strategy may lead to local variations of the topology that may cause an artificial change of the global volume. The issue of volume conservation is here studied in detail. An accurate description of all the situations that may induce a volume variation during the PFEM regeneration of the mesh is provided. Moreover, the crucial role of the parameter α used in the Alpha Shape method is highlighted and a range of values of α for which the differences between the numerical results are negligible, is found. Furthermore, it is shown that the variation of volume induced by the remeshing reduces by refining the mesh. This check of convergence is of paramount importance for the reliability of the PFEM. The study is carried out for 2D free-surface fluid dynamics problems, however the conclusions can be extended to 3D and to all those problems characterized by significant variations of internal and external boundaries.
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background: Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results: In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion: The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges, which are
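Since the network algorithms above extend the classical tree case, a minimal sketch of the base case, Fitch's small-parsimony algorithm on a rooted binary tree, may help fix ideas. The nested-tuple tree encoding is our own illustrative choice, not the paper's representation.

```python
# Minimal sketch of Fitch's small-parsimony algorithm on a rooted binary
# tree (the tree-only special case of the network extension discussed in
# the abstract). Trees are nested pairs; leaves carry one character state.

def fitch(tree):
    """Return (state_set, parsimony_score) for a rooted binary tree."""
    if isinstance(tree, str):           # leaf: observed character state
        return {tree}, 0
    left, right = tree
    ls, lcost = fitch(left)
    rs, rcost = fitch(right)
    common = ls & rs
    if common:                          # agreement: no substitution needed
        return common, lcost + rcost
    return ls | rs, lcost + rcost + 1   # conflict: count one substitution

# The tree ((A,C),A) needs exactly one substitution for this character.
states, score = fitch((("A", "C"), "A"))
print(score)  # -> 1
```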
Babaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em
2017-09-01
analyze the performance of the method in the presence of eigenvalue crossing and zero eigenvalues; (ii) stochastic Kovasznay flow: we examine the method in the presence of a singular covariance matrix; and (iii) we examine the adaptivity of the method for an incompressible flow over a cylinder where for large stochastic forcing thirteen DO/BO modes are active.
International Nuclear Information System (INIS)
Yamamoto, Takahisa; Mitachi, Koshi
2004-01-01
This paper presents a transient core analysis of a small molten salt reactor (MSR). The emphasis is that the numerical model employed takes into account the interaction among fuel-salt flow, nuclear reaction, and heat transfer. The model consists of two-group diffusion equations for fast and thermal neutron fluxes, balance equations for six groups of delayed neutron precursors, and energy conservation equations for the fuel salt and the graphite moderator. The transient analysis shows that (1) the fission reaction (heat generation) rate increases significantly soon after a step reactivity insertion, e.g., the peak fission reaction rate reaches about 2.7 times the rated power of 350 MW when a reactivity of 0.15% Δk/k0 is inserted at the rated state, and (2) the self-control performance of the small MSR works effectively under a step reactivity insertion of 0.56% Δk/k0, returning the fission reaction rate to the rated state. (author)
Maximum Profit Configurations of Commercial Engines
Directory of Open Access Journals (Sweden)
Yiran Chen
2011-06-01
Full Text Available An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.
On some problems of the maximum entropy ansatz
Indian Academy of Sciences (India)
Pilot calculations involving the ground quantum eigenenergy states of the quartic ... well-defined (finite) values for all its moments, e.g. a Lorentzian. Further, in practice, it is .... to test the degree of accuracy of p (m). Here μТ and μТ refer, ...
Directory of Open Access Journals (Sweden)
Nasri Abdelfatah
2011-01-01
Full Text Available Reactive power flow is one of the electrical distribution system problems of greatest interest to electrical network researchers: it causes reduced active power transmission capability, increased power losses, and increased voltage drop. In this research we describe the efficiency of the FLC-GAO approach for solving the optimal power flow (OPF) combinatorial problem. The proposed approach employs two algorithms: a fuzzy logic controller (FLC) algorithm for critical node detection and a genetic algorithm optimization (GAO) algorithm for optimal capacitor sizing. The GAO method is efficient for combinatorial problem solutions. The proposed approach has been examined and tested on the standard IEEE 57-bus system; the results show power loss minimization as well as voltage profile and stability improvement. The proposed approach's results have been compared to those recently reported in the literature; they are promising and show the effectiveness and robustness of the proposed approach.
Directory of Open Access Journals (Sweden)
Win-Chin Lin
2018-01-01
Full Text Available Two-stage production processes and their applications appear in many production environments. Job processing times are usually assumed to be constant throughout the process. In fact, the learning effect accrued from repetitive work experiences, which leads to the reduction of actual job processing times, does exist in many production environments. However, the issue of learning effects is rarely addressed in solving a two-stage assembly scheduling problem. Motivated by this observation, the author studies a two-stage three-machine assembly flow shop problem with a learning effect based on the sum of the processing times of already processed jobs, minimizing the makespan criterion. Because this problem is proved to be NP-hard, a branch-and-bound method embedded with some developed dominance propositions and a lower bound is employed to search for optimal solutions. A cloud theory-based simulated annealing (CSA) algorithm and an iterated greedy (IG) algorithm with four different local search methods are used to find near-optimal solutions for small and large numbers of jobs. The performances of the adopted algorithms are then compared through computational experiments and nonparametric statistical analyses, including the Kruskal–Wallis test and a multiple comparison procedure.
International Nuclear Information System (INIS)
Kopriva, D.A.
1982-01-01
A numerical scheme has been developed to solve the quasilinear form of the transonic stream function equation. The method is applied to compute steady two-dimensional axisymmetric solar wind-type problems. A single perfect, non-dissipative, homentropic, polytropic gas is assumed. The four equations governing mass and momentum conservation are reduced to a single nonlinear second-order partial differential equation for the stream function. Bernoulli's equation is used to obtain a nonlinear algebraic relation for the density in terms of stream function derivatives. The vorticity includes the effects of azimuthal rotation and Bernoulli's function and is determined from quantities specified on boundaries. The approach is efficient: the number of equations and independent variables has been reduced, and a rapid relaxation technique developed for the transonic full potential equation is used. Second-order accurate central differences are used in elliptic regions. In hyperbolic regions a dissipation term motivated by the rotated differencing scheme of Jameson is added for stability. A successive line overrelaxation technique, also introduced by Jameson, is used to solve the equations. The nonlinear equation for the density is a double-valued function of the stream function derivatives. The velocities are extrapolated from upwind points to determine the proper branch, and Newton's method is used to iteratively compute the density. This allows accurate solutions with few grid points.
Maximum Entropy in Drug Discovery
Directory of Open Access Journals (Sweden)
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches, experimental, computational, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has been used not only as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
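As a concrete toy instance of the principle this review surveys, the sketch below (our own, not from the article) recovers the Gibbs-form maximum-entropy distribution over a discrete set of states subject to a prescribed mean, finding the Lagrange multiplier by simple bisection.

```python
import math

# Illustrative sketch: the maximum-entropy distribution over states
# x = 1..6 with a prescribed mean has the Gibbs form p_i ∝ exp(-lam*x_i);
# we locate lam by bisection, since the mean decreases monotonically in lam.

def gibbs(lam, xs):
    w = [math.exp(-lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

def maxent_mean(xs, target, lo=-50.0, hi=50.0):
    """Largest-entropy distribution on xs whose mean equals target."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        mean = sum(x * p for x, p in zip(xs, gibbs(mid, xs)))
        if mean > target:
            lo = mid          # mean too high: increase lam
        else:
            hi = mid
    return gibbs(0.5 * (lo + hi), xs)

# A prescribed mean of 3.5 on a die recovers the uniform distribution.
p = maxent_mean([1, 2, 3, 4, 5, 6], 3.5)
print(round(p[0], 6))  # -> 0.166667
```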
Directory of Open Access Journals (Sweden)
Ghiyasvand Mehdi
2016-01-01
Full Text Available In this paper, a new problem on a directed network is presented. Let D be a feasible network such that all arc capacities are equal to U. Given a t > 0, the network D with arc capacities U − t is called the t-network. The goal of the problem is to compute the largest t such that the t-network is feasible. First, we present a weakly polynomial time algorithm to solve this problem, which runs in O(log(nU)) maximum flow computations, where n is the number of nodes. Then, an O(m²n) time approach is presented, where m is the number of arcs. Both the weakly and strongly polynomial algorithms are inspired by McCormick and Ervolina (1994).
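The weakly polynomial idea can be sketched as a binary search over t with one max-flow feasibility check per guess. This is a hedged illustration of the general approach, not the authors' algorithm: the max-flow routine is a plain Edmonds–Karp, feasibility is checked by routing supplies through a super-source/super-sink, and the tiny network at the bottom is invented for the demo.

```python
from collections import deque

# Binary-search the largest t for which the network with capacities U - t
# still admits a feasible flow, using one max-flow computation per guess.

def max_flow(n, arcs, s, t):
    cap, adj = {}, [[] for _ in range(n)]
    for u, v, c in arcs:
        if v not in adj[u]: adj[u].append(v)
        if u not in adj[v]: adj[v].append(u)
        cap[u, v] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)
    flow = 0
    while True:
        prev = {s: None}
        q = deque([s])
        while q and t not in prev:          # BFS for a shortest augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in prev and cap[u, v] > 0:
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            return flow
        path, v = [], t                      # trace path, push the bottleneck
        while prev[v] is not None:
            path.append((prev[v], v)); v = prev[v]
        push = min(cap[e] for e in path)
        for u, v in path:
            cap[u, v] -= push
            cap[v, u] += push
        flow += push

def largest_t(n, arcs, supply, U):
    """supply[v] > 0 for sources, < 0 for sinks; all arcs have capacity U."""
    s, t = n, n + 1                          # super-source / super-sink
    need = sum(b for b in supply if b > 0)
    def feasible(tt):
        a = [(u, v, U - tt) for u, v, _ in arcs]
        a += [(s, v, b) for v, b in enumerate(supply) if b > 0]
        a += [(v, t, -b) for v, b in enumerate(supply) if b < 0]
        return max_flow(n + 2, a, s, t) == need
    lo, hi = 0, U                            # assumes integer capacities
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if feasible(mid): lo = mid
        else: hi = mid - 1
    return lo

# Path 0 -> 1 -> 2 shipping 3 units with U = 5 stays feasible down to capacity 3.
print(largest_t(3, [(0, 1, 5), (1, 2, 5)], [3, 0, -3], 5))  # -> 2
```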
Directory of Open Access Journals (Sweden)
Sergey G. Chefranov
2013-11-01
Full Text Available Leonardo da Vinci was perhaps the first to pay attention to the energetic efficiency conferred by the vortices that emerge near the sinuses of Valsalva and govern the normal functioning (opening) of the aortic valve. Nevertheless, the fundamental problem of identifying the mechanisms behind the remarkable energetic efficiency with which the cardiovascular system (CVS) supplies blood to the organism remains largely unsolved, and this is, for example, one of the main obstacles to the creation of an artificial heart and corresponding valve systems. In the present paper, we give results pointing to the possible important role of precisely this hydro-mechanical mechanism in the noted energetic efficiency of the CVS: the formation of a spiral structural organization of the arterial blood flow, observed by MRI and color Doppler measurements in the left ventricle of the heart and in the aorta.
Directory of Open Access Journals (Sweden)
Rashidi Mohammad Mehdi
2015-01-01
Full Text Available The similarity solution of the equations of the revised Cheng–Minkowycz problem for natural convective boundary-layer flow of a nanofluid through a porous medium gives (using an analytical method) a system of non-linear partial differential equations, which are solved by the optimal homotopy analysis method. The effects of various parameters on the fluid flow and heat transfer characteristics have been analyzed. Very good agreement is observed between the obtained results and the numerical ones. The entropy generation has been derived and a comprehensive parametric analysis of it has been performed. Each component of the entropy generation has been analyzed separately, and the contribution of each to the total entropy generation has been determined. It is found that the entropy generation, an important aspect for industrial applications, is affected by various parameters, which should be controlled to minimize it.
International Nuclear Information System (INIS)
Finley, N.C.; Reeves, M.
1982-03-01
This document contains a series of sample problems and solutions for the Sandia Waste-Isolation Flow and Transport (SWIFT) model developed at Sandia National Laboratories for the Risk Methodology for Geologic Disposal of Radioactive Waste Program (A-1192). With this document and the SWIFT User's Manual, the student may familiarize himself with the code, its capabilities and limitations. When the student has completed this curriculum, he or she should be able to prepare data input for SWIFT and have some insights into interpretation of the model output. This report represents one of a series of self-teaching curricula prepared under a technology transfer contract for the US Nuclear Regulatory Commission, Office of Nuclear Material Safety and Safeguards
Salama, Amgad
2012-06-17
The flow of two immiscible fluids in porous media is ubiquitous, particularly in petroleum exploration and extraction. The displacement of one fluid by another immiscible with it represents a very important aspect of what is called enhanced oil recovery. Another example is related to the long-term sequestration of carbon dioxide, CO2, in deep geologic formations. In this technique, supercritical CO2 is introduced into a deep saline aquifer, where it displaces the hosting fluid. Furthermore, very important classes of contaminants that are only slightly soluble in water, and represent a huge concern if introduced to groundwater, can essentially be assumed immiscible. These are called light non-aqueous phase liquids (LNAPL) and dense non-aqueous phase liquids (DNAPL). All these applications necessitate efficient algorithms for the numerical solution of these problems. In this work we introduce the use of shifting matrices for numerically solving the problem of two-phase immiscible flow in the subsurface. We implement the cell-centered finite difference method, which discretizes the governing set of partial differential equations in a conservative manner. Unlike traditional solution methodologies, which are based on performing the discretization on a generic cell and solving for all the cells within a loop, in this technique the cell-center information for all cells is obtained all at once, without loops, using matrix-oriented operations. This technique is significantly faster than the traditional looping algorithms, particularly for larger systems, when coding in languages that require repeated interpretation each time a loop is executed, like MATLAB, Python and the like. We apply this technique to the transport of LNAPL and DNAPL into a rectangular domain.
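The loop-free, matrix-oriented idea can be illustrated on a far simpler problem than the paper's two-phase model. In the sketch below (our own toy, not the authors' solver), a 1D upwind update is written once as an index loop and once with whole-array shifted slices, which is what shifting matrices accomplish in MATLAB/NumPy-style codes.

```python
# Toy illustration of the loop-free idea: an explicit upwind update for
# 1D advection, written with an index loop and with shifted slices.

def upwind_loop(u, cfl):
    new = u[:]
    for i in range(1, len(u)):
        new[i] = u[i] - cfl * (u[i] - u[i - 1])
    return new

def upwind_shift(u, cfl):
    # u[1:] and u[:-1] play the roles of the identity and shift matrices
    shifted = [ui - cfl * (ui - um) for ui, um in zip(u[1:], u[:-1])]
    return u[:1] + shifted

u0 = [1.0, 1.0, 0.0, 0.0, 0.0]
assert upwind_loop(u0, 0.5) == upwind_shift(u0, 0.5)
print(upwind_shift(u0, 0.5))  # -> [1.0, 1.0, 0.5, 0.0, 0.0]
```

In interpreted languages the slice (or shift-matrix) form replaces per-cell interpreter overhead with a single bulk operation, which is the speed-up the abstract refers to.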
Directory of Open Access Journals (Sweden)
Hożejowska Sylwia
2014-03-01
Full Text Available The paper presents an application of the nodeless Trefftz method to calculate the temperature of the heating foil and the insulating glass pane during continuous flow of a refrigerant along a vertical minichannel. Numerical computations refer to an experiment in which the refrigerant (FC-72) enters a rectangular minichannel under controlled pressure and temperature. Initially its temperature is below the boiling point. During the flow it is heated by a heating foil. Thermosensitive liquid crystals make it possible to obtain the two-dimensional temperature field in the foil. Since the nodeless Trefftz method performs very well on such problems, it was chosen as the numerical method to approximate the two-dimensional temperature distribution in the protecting glass and the heating foil. With the temperature of the refrigerant known, it was also possible to evaluate the heat transfer coefficient at the foil–refrigerant interface. For expected improvement of the numerical results, the nodeless Trefftz method was combined with adjustment calculus, which allowed smoothing of the measurements and decreased the measurement errors. As with the measurement errors, the error of the heat transfer coefficient decreased.
Maximum stellar iron core mass
Indian Academy of Sciences (India)
Vol. 60, No. 3, March 2003, pp. 415–422. F W Giacobbe, Chicago Research Center/American Air Liquide ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and a calibration system. [fr]
Maximum discharge rate of liquid-vapor mixtures from vessels
International Nuclear Information System (INIS)
Moody, F.J.
1975-09-01
A discrepancy exists in theoretical predictions of the two-phase equilibrium discharge rate from pipes attached to vessels. Theory which predicts critical flow data in terms of pipe exit pressure and quality severely overpredicts flow rates in terms of vessel fluid properties. This study shows that the discrepancy is explained by the flow pattern. Due to decompression and flashing as fluid accelerates into the pipe entrance, the maximum discharge rate from a vessel is limited by choking of a homogeneous bubbly mixture. The mixture tends toward a slip flow pattern as it travels through the pipe, finally reaching a different choked condition at the pipe exit
Maximum entropy decomposition of quadrupole mass spectra
International Nuclear Information System (INIS)
Toussaint, U. von; Dose, V.; Golan, A.
2004-01-01
We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
Directory of Open Access Journals (Sweden)
Georgii N. Lebedev
2017-01-01
Full Text Available Improvement in the effectiveness of airfield operation largely depends on the quality of problem solving at the boundaries between different technological sections. One such hotspot is the use of the same runway by inbound and outbound aircraft. At certain intensities of outbound and inbound air traffic, a conflict of aircraft interests appears in which it may be quite difficult to sort out priorities even for experienced controllers, so mistakes in decision-making unavoidably appear. In this work, the task of corrective adjustment of the landing and takeoff times of aircraft using the same runway, under the conflict of interests "arrival – departure" at increased operating intensity, is formulated. The optimal solution is chosen with mutual interests taken into account, without complete enumeration and evaluation of all solutions. Accordingly, a genetic algorithm is proposed, offering a simple and effective approach to solving the optimal control problem while keeping flight safety at an acceptably high level. The estimate of additional aviation fuel consumption is used as the criterion for evaluating the optimal choice. The advantages of applying the genetic algorithm to decision-making, in comparison with today's "team" resolution of the "departure – arrival" conflict in the airfield area, are shown.
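A heavily simplified sketch of the genetic-algorithm idea may be useful here. The cost model below (weighted holding delay with a fixed separation between runway uses) and all parameters are our own toy stand-ins for the article's fuel-consumption criterion.

```python
import itertools, random

# Toy GA: evolve runway orderings to minimize a surrogate fuel cost
# (holding delay weighted by per-aircraft burn rate, with a fixed
# separation SEP between consecutive runway movements).

SEP = 2  # minutes between successive runway uses (illustrative)

def cost(order, ready, burn):
    t, total = -SEP, 0
    for i in order:
        t = max(t + SEP, ready[i])            # earliest slot for aircraft i
        total += burn[i] * (t - ready[i])     # fuel burned while holding
    return total

def ga(ready, burn, pop=30, gens=200, seed=0):
    rng = random.Random(seed)
    n = len(ready)
    people = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        people.sort(key=lambda p: cost(p, ready, burn))
        keep = people[: pop // 2]             # elitist selection
        children = []
        for p in keep:
            c = p[:]
            i, j = rng.randrange(n), rng.randrange(n)
            c[i], c[j] = c[j], c[i]           # mutation: swap two slots
            children.append(c)
        people = keep + children
    return min(people, key=lambda p: cost(p, ready, burn))

ready = [0, 1, 1, 4]   # earliest usable slot per aircraft
burn = [5, 1, 8, 2]    # holding fuel-burn rate per aircraft
best = ga(ready, burn)
# For 4 aircraft we can verify against brute force over all 24 orderings.
brute = min(itertools.permutations(range(4)),
            key=lambda p: cost(list(p), ready, burn))
assert cost(best, ready, burn) == cost(list(brute), ready, burn)
```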
Energy Technology Data Exchange (ETDEWEB)
Onarte Yumbla, Pablo Enrique
2008-02-15
The power system optimal power flow (OPF) objective is to obtain a start-up and shut-down schedule of generating units that meets the required demand at minimum production cost, satisfying the units' and the system's operating constraints, by adjusting the power system control variables. Nowadays, the transmission system can be considered as an independent transmission company that provides open access to all participants. Any pricing scheme should compensate transmission companies fairly for providing transmission services and allocate the entire transmission costs among all transmission users. This thesis uses a transmission pricing scheme based on a power flow tracing method to determine the actual contributions of generators to each link flow. Furthermore, the power system must be capable of withstanding the loss of any component (e.g., lines, transformers, generators) without jeopardizing the system's operation, guaranteeing its security; such events are often termed probable or credible contingencies, and this problem is known as optimal power flow with security constraints (OPF-SC). Additionally, constraints on generating units' limits, minimum and maximum up- and down-times, ramp-down and ramp-up rates, voltage profile improvement, coupling constraints between the pre- and post-contingency states, and transient stability constraints have been taken into account. A particle swarm optimizer with reconstruction operators (PSO-RO) for solving the OPF-SC is proposed. To handle the constraints of the problem, such reconstruction operators and an external penalty are adopted. The reconstruction operators ensure that all particles representing a possible solution satisfy the units' operating constraints, so that the optimal solution is sought only within the feasible space, reducing the computing time and improving the quality of the achieved solution.
Dynamic Optimization of a Polymer Flooding Process Based on Implicit Discrete Maximum Principle
Directory of Open Access Journals (Sweden)
Yang Lei
2012-01-01
Full Text Available Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, in which the performance index is the maximum of the profit, the governing equations are the fluid flow equations of polymer flooding, and inequality constraints impose polymer concentration and injection amount limitations. The optimal control model is discretized by a fully implicit finite-difference method. To cope with the discrete optimal control problem (OCP), the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin's discrete maximum principle. A modified gradient method with a new adjoint construction is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.
The worst case complexity of maximum parsimony.
Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal
2014-11-01
One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden failure of pumps. Determination of the maximum water hammer is considered one of the most important technical and economic items that engineers and designers of pumping stations and conveyance pipelines should take care of. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...
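For orientation, the classical Joukowsky formula gives the first-cut estimate of the surge that simulation software like the one above refines: Δp = ρ·a·Δv for an instantaneous velocity change. The numerical values below are illustrative, not from the study.

```python
# Back-of-envelope check of the quantity the study maximizes: the
# Joukowsky estimate of the water-hammer surge from an instantaneous
# valve closure, dp = rho * a * dv (values below are illustrative).

def joukowsky_surge(rho, wave_speed, dv):
    """Peak pressure rise in Pa for a sudden velocity change dv (m/s)."""
    return rho * wave_speed * dv

# Water (1000 kg/m^3), a typical pipeline wave speed of 1200 m/s, and a
# 2 m/s flow stoppage give a surge of a few MPa.
dp = joukowsky_surge(rho=1000.0, wave_speed=1200.0, dv=2.0)
print(dp / 1e6)  # -> 2.4  (MPa)
```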
Directory of Open Access Journals (Sweden)
Yunfeng Shan
2008-01-01
Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single-gene trees of seven yeast species as well as single-gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species in comparison.
The Inhibiting Bisection Problem
Energy Technology Data Exchange (ETDEWEB)
Pinar, Ali; Fogel, Yonatan; Lesieutre, Bernard
2006-12-18
Given a graph where each vertex is assigned a generation or consumption volume, we try to bisect the graph so that each part has a significant generation/consumption mismatch, and the cut size of the bisection is small. Our motivation comes from the vulnerability analysis of distribution systems such as the electric power system. We show that the constrained version of the problem, where we place either the cut size or the mismatch significance as a constraint and optimize the other, is NP-complete, and provide an integer programming formulation. We also propose an alternative relaxed formulation, which can trade off between the two objectives, and show that the alternative formulation of the problem can be solved in polynomial time by a maximum flow solver. Our experiments with benchmark electric power systems validate the effectiveness of our methods.
LCLS Maximum Credible Beam Power
International Nuclear Information System (INIS)
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed
Maximum entropy estimation via Gauss-LP quadratures
Thély, Maxime; Sutter, Tobias; Mohajerin Esfahani, P.; Lygeros, John; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri
2017-01-01
We present an approximation method to a class of parametric integration problems that naturally appear when solving the dual of the maximum entropy estimation problem. Our method builds up on a recent generalization of Gauss quadratures via an infinite-dimensional linear program, and utilizes a
Indian Academy of Sciences (India)
problem is important from an experimental point of view, because absorption is always present. ... equal-a-priori probabilities is expressed mathematically by the invariant measure on the matrix space ... the interval between zero and one.
Applying Graph Theory to Problems in Air Traffic Management
Farrahi, Amir H.; Goldberg, Alan T.; Bagasol, Leonard N.; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1−ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
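The third technique mentioned, answering a scheduling question via graph reachability, reduces to a standard breadth-first search. The toy state graph below is our own illustration, not the paper's arrival-scheduling formulation.

```python
from collections import deque

# Minimal BFS reachability: which states can be reached from `start`
# following the allowed transitions in `adj`?

def reachable(adj, start):
    seen, q = {start}, deque([start])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

# States a..e with allowed transitions; 'e' is unreachable from 'a'.
adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "e": ["a"]}
print(sorted(reachable(adj, "a")))  # -> ['a', 'b', 'c', 'd']
```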
Maximum Margin Clustering of Hyperspectral Data
Niazmardi, S.; Safari, A.; Homayouni, S.
2013-09-01
In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of available training data. To tackle the problems associated with training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms perform two-class classification, which cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. The algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm produces acceptable results for hyperspectral data clustering.
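The alternating-optimization idea behind MMC can be sketched in a few lines: fix the labels and fit a hyperplane, then fix the hyperplane and relabel points by which side they fall on. This is a toy sketch, not the authors' algorithm; a least-squares fit stands in for the SVM subproblem, and the synthetic 2-D data and initialization direction are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated synthetic 2-D clusters (stand-ins for pixel spectra).
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
X1 = np.hstack([X, np.ones((len(X), 1))])     # append a bias column

# Initial labels from a fixed projection direction (a common heuristic).
y = np.sign(X @ np.array([1.0, 0.3]))

# Alternate: fix labels -> fit hyperplane (least-squares surrogate for the
# SVM subproblem); fix hyperplane -> relabel points by the side they fall on.
for _ in range(20):
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    y_new = np.sign(X1 @ w)
    if np.array_equal(y_new, y):              # labels stable: converged
        break
    y = y_new

print(sorted(set(y[:50])), sorted(set(y[50:])))  # each cluster gets one label
```

The real MMC subproblem additionally enforces a margin and a class-balance constraint; the alternation structure is the same.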
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m-2) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m-1 K-1). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm-1 under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
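The energy-balance argument can be reproduced numerically. A sketch, solving a simplified dry-surface balance εσT⁴ + h(T − T_air) = S_abs + εσT_air⁴ by bisection; the flux and air temperature are the upper values quoted above, while the emissivity and the transfer coefficient h are assumed round numbers (latent and ground heat fluxes are neglected):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m-2 K-4
eps = 0.95        # soil emissivity (assumed)
S_abs = 1000.0    # absorbed shortwave flux, W m-2 (upper value from the text)
T_air = 328.15    # screen air temperature, K (55 C)
h = 15.0          # sensible heat transfer coefficient, W m-2 K-1 (assumed)

def residual(T):
    # emitted longwave + sensible heat - (absorbed shortwave + downward longwave)
    return eps * SIGMA * T**4 + h * (T - T_air) - (S_abs + eps * SIGMA * T_air**4)

lo, hi = 300.0, 450.0
for _ in range(60):                 # bisection: residual is increasing in T
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0:
        hi = mid
    else:
        lo = mid

T_surf = 0.5 * (lo + hi)
print(f"equilibrium surface temperature ~ {T_surf - 273.15:.0f} C")
```

With these assumed coefficients the solution lands in the 90°-100°C range discussed in the abstract.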
Cao, Jia; Yan, Zheng; He, Guangyu
2016-06-01
This paper introduces an efficient algorithm, the multi-objective human learning optimization method (MOHLO), to solve the AC/DC multi-objective optimal power flow problem (MOPF). First, the model of AC/DC MOPF including wind farms is constructed, which includes three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO method is derived, which involves an individual learning operator, a social learning operator, a random exploration learning operator and adaptive strategies. Both the proposed MOHLO method and the non-dominated sorting genetic algorithm II (NSGAII) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO method has excellent search efficiency and a strong ability to find optimal solutions. Above all, the MOHLO method can obtain a more complete Pareto front than the NSGAII method. However, the choice of the optimal solution from the Pareto front depends mainly on whether the decision makers take an economic point of view or an energy-saving and emission-reduction point of view.
On minimizing the maximum broadcast decoding delay for instantly decodable network coding
Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Al-Naffouri, Tareq Y.
2014-01-01
In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a
Combining Experiments and Simulations Using the Maximum Entropy Principle
DEFF Research Database (Denmark)
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. We introduce the maximum entropy procedure in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
Post optimization paradigm in maximum 3-satisfiability logic programming
Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd
2017-08-01
Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of finding the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network for hastening Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post optimization techniques are investigated: the Elliot symmetric activation function, the Gaussian activation function, the Wavelet activation function and the Hyperbolic tangent activation function. The performance of these post optimization techniques in accelerating MAX-3SAT logic programming is discussed in terms of the ratio of maximum satisfied clauses, Hamming distance and computation time. Dev-C++ was used as the platform for training, testing and validating the proposed techniques. The results show that the Hyperbolic tangent activation function and the Elliot symmetric activation function can be used for MAX-3SAT logic programming.
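For tiny instances, the MAX-3SAT objective (the maximum number of simultaneously satisfied clauses) can be evaluated exactly by exhaustive search, which is useful as a reference when scoring heuristics like the Hopfield-network approach above. A minimal sketch on a made-up formula:

```python
from itertools import product

def satisfied(clauses, assign):
    """Count clauses with at least one true literal; integer literal k
    denotes variable |k|, negated when k < 0."""
    return sum(
        any((lit > 0) == assign[abs(lit)] for lit in clause)
        for clause in clauses
    )

def max_3sat(clauses, n_vars):
    """Exhaustive search over all 2**n assignments (fine for tiny instances)."""
    best = 0
    for bits in product([False, True], repeat=n_vars):
        assign = dict(enumerate(bits, start=1))
        best = max(best, satisfied(clauses, assign))
    return best

# Hypothetical formula:
# (x1 v x2 v x3) & (-x1 v x2 v -x3) & (x1 v -x2 v x3) & (-x1 v -x2 v -x3)
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
print(max_3sat(clauses, 3))  # -> 4: all clauses simultaneously satisfiable
```

The "ratio of maximum satisfied clauses" reported by a heuristic is its clause count divided by this exact optimum.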
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device with n segments connects across the driver output lines. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.
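The software analogue of this maximum-memorizing behavior is a peak-hold filter: each output sample is the largest value sensed so far. A minimal sketch (not the patented circuit, just the same idea in code):

```python
def peak_hold(samples):
    """Running maximum of a sampled signal, like a peak-hold circuit."""
    peak = float("-inf")
    out = []
    for s in samples:
        peak = max(peak, s)   # the memorized maximum never decreases
        out.append(peak)
    return out

print(peak_hold([0.1, 0.7, 0.4, 0.9, 0.2]))  # -> [0.1, 0.7, 0.7, 0.9, 0.9]
```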
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
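The Planck luminosity L_P = c^5/G quoted above is straightforward to evaluate numerically:

```python
c = 2.998e8    # speed of light, m/s
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

L_P = c**5 / G                          # Planck luminosity, W
print(f"L_P     ~ {L_P:.2e} W")         # ~3.6e52 W
print(f"0.2 L_P ~ {0.2 * L_P:.2e} W")   # largest luminosity seen in critical collapse
```

Note that L_P contains no factor of ħ, so it is purely classical, set by relativity alone.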
Scintillation counter, maximum gamma aspect
International Nuclear Information System (INIS)
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
On semidefinite programming relaxations of maximum k-section
de Klerk, E.; Pasechnik, D.V.; Sotirov, R.; Dobre, C.
2012-01-01
We derive a new semidefinite programming bound for the maximum k-section problem. For k=2 (i.e. for maximum bisection), the new bound is at least as strong as a well-known bound by Poljak and Rendl (SIAM J Optim 5(3):467–487, 1995). For k ≥ 3 the new bound dominates a bound of Karisch and Rendl
Direct maximum parsimony phylogeny reconstruction from genotype data
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2007-01-01
Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of ge...
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Directory of Open Access Journals (Sweden)
Petr Stehlík
2015-01-01
Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: ux' (or Δt ux) = k(ux−1 − 2ux + ux+1) + f(ux), x ∈ Z. We prove weak and strong maximum and minimum principles for corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
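The discrete-time scheme and its weak maximum principle can be illustrated numerically for the Nagumo nonlinearity f(u) = u(1 − u)(u − a). A sketch with an illustrative (small) time step on a periodic lattice; for large time steps the invariance of [0, 1] would fail, as the abstract discusses:

```python
import numpy as np

k, a, dt = 1.0, 0.3, 0.1              # diffusion, threshold, time step (illustrative)

def f(u):                             # bistable Nagumo nonlinearity
    return u * (1 - u) * (u - a)

n = 50
u = np.zeros(n)
u[n // 2:] = 1.0                      # step initial data inside [0, 1]

for _ in range(200):                  # explicit discrete-time scheme
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # lattice Laplacian (periodic)
    u = u + dt * (k * lap + f(u))

# Weak maximum principle: for a sufficiently small time step the solution
# remains in the invariant interval [0, 1] of the initial data.
print(u.min() >= 0.0, u.max() <= 1.0)  # -> True True
```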
Maximum Entropy: Clearing up Mysteries
Directory of Open Access Journals (Sweden)
Marian Grendár
2001-04-01
Full Text Available Abstract: There are several mystifications and a couple of mysteries pertinent to MaxEnt. The mystifications, pitfalls and traps are set up mainly by an unfortunate formulation of Jaynes' die problem, the cause célèbre of MaxEnt. After discussing the mystifications a new formulation of the problem is proposed. Then we turn to the mysteries. An answer to the recurring question 'Just what are we accomplishing when we maximize entropy?' [8], based on MaxProb rationale of MaxEnt [6], is recalled. A brief view on the other mystery: 'What is the relation between MaxEnt and the Bayesian method?' [9], in light of the MaxProb rationale of MaxEnt suggests that there is not and cannot be a conflict between MaxEnt and Bayes Theorem.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
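The conventional MRF reconstruction the authors analyze is dictionary matching: pick the dictionary atom with the largest normalized inner product with the data, which the abstract identifies with the first iteration of the ML algorithm. A toy sketch, using mono-exponential decays as a stand-in for the Bloch-simulated dictionary (all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.01, 2.0, 64)                  # acquisition times, s (illustrative)

# Dictionary of candidate tissue signatures on a hypothetical T1 grid.
T1_grid = np.linspace(0.1, 1.5, 141)
D = np.exp(-t[None, :] / T1_grid[:, None])
D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms

true_T1 = 0.8
y = np.exp(-t / true_T1) + 0.02 * rng.normal(size=t.size)  # noisy measurement

# Conventional MRF step: maximum normalized inner product (matched filter).
est_T1 = T1_grid[np.argmax(np.abs(D @ y))]
print(round(est_T1, 2))
```

The ML framework in the paper then iterates beyond this matched-filter step, enforcing consistency with the undersampled k-space data.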
Efficient heuristics for maximum common substructure search.
Englert, Péter; Kovács, Péter
2015-05-26
Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
Torak, L.J.
1993-01-01
A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady- or unsteady-state, two-dimensional or axisymmetric ground-water flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of
Castineira, D.; Jha, B.; Juanes, R.
2016-12-01
Carbon Capture and Sequestration (CCS) is regarded as a promising technology to mitigate rising CO2 concentrations in the atmosphere from industrial emissions. However, as a result of the inherent uncertainty that is present in geological structures, assessing the stability of geological faults and quantifying the potential for induced seismicity is a fundamental challenge for practical implementation of CCS. Here we present a formal framework for the solution of the inverse problem associated with coupled flow and geomechanics models of CO2 injection and subsurface storage. Our approach builds from the application of Gaussian Processes, MCMC and posterior predictive analysis to evaluate relevant earthquake attributes (earthquake time, location and magnitude) in 3D synthetic models of CO2 storage under geologic, observational and operational uncertainty. In our approach, we first conduct hundreds of simulations of a high-fidelity 3D computational model for CO2 injection into a deep saline aquifer, dominated by an anticline structure and a fault. This ensemble of realizations accounts for uncertainty in the model parameters (including fault geomechanical and rock properties) and observations (earthquake time, location and magnitude). We apply Gaussian processes (GP) to generate a valid surrogate that closely approximates the behavior of the high fidelity (and computationally intensive) model, and apply hyperparameter optimization and cross-validation techniques in the solution of this multidimensional data-fit problem. The net result of this process is the generation of a fast model that can be effectively used for Bayesian analysis. We then implement Markov chain Monte Carlo (MCMC) to determine the posterior distribution of the model uncertain parameters (given some prior distributions for those parameters and given the likelihood defined in this case by the GP model). Our results show that the resulting posterior distributions correctly converge towards the "true
Maximum entropy and Bayesian methods
International Nuclear Information System (INIS)
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
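The noise bias studied above can be reproduced in miniature: the maximum-likelihood estimate of a modulus from noisy components is biased high at low signal-to-noise, because noise can only increase the expected length of a vector. A toy Monte Carlo (not the paper's galaxy-image setup; all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
e_true = np.array([0.3, 0.0])   # true two-component "ellipticity" (toy values)
sigma = 0.3                     # per-component noise: low signal-to-noise regime

# The ML point estimate of the modulus is simply |data|; Monte Carlo its mean.
data = e_true + sigma * rng.normal(size=(200_000, 2))
ml_modulus = np.linalg.norm(data, axis=1)

print(f"mean ML modulus {ml_modulus.mean():.3f} vs true {np.linalg.norm(e_true):.3f}")
# the estimator is biased high; the bias grows as signal-to-noise drops
```

Bias corrections of the kind derived in the paper subtract the predictable part of this offset using the curvature of the likelihood.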
Inverse problems of geophysics
International Nuclear Information System (INIS)
Yanovskaya, T.B.
2003-07-01
This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least square fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given
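The pseudo-inverse solution mentioned above can be computed directly from the singular value decomposition. A minimal sketch for an underdetermined linear inverse problem d = Gm, where the pseudo-inverse selects the minimum-norm model that fits the data exactly (matrix and data are a made-up example):

```python
import numpy as np

# Underdetermined problem: two data, three model parameters.
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
d = np.array([2.0, 3.0])

U, s, Vt = np.linalg.svd(G, full_matrices=False)
m = Vt.T @ np.diag(1.0 / s) @ U.T @ d   # pseudo-inverse (generalized) solution

print(np.round(m, 3))          # minimum-norm model
print(np.allclose(G @ m, d))   # -> True: the data are fit exactly
```

In practice small singular values are truncated or damped before inversion, which is what gives pseudo-inverse solutions their stability properties.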
SCEPTIC, Pressure Drop, Flow Rate, Heat Transfer, Temperature in Reactor Heat Exchanger
International Nuclear Information System (INIS)
Kattchee, N.; Reynolds, W.C.
1975-01-01
1 - Nature of physical problem solved: SCEPTIC is a program for calculating pressure drop, flow rates, heat transfer rates, and temperature in heat exchangers such as fuel elements of typical gas- or liquid-cooled nuclear reactors. The effects of turbulent mixing and heat interchange between flow passages are considered. 2 - Method of solution: The computation procedure amounts to a nodal, or lumped-parameter, type of calculation. The axial mesh size is automatically selected to assure that a prescribed accuracy of results is obtained. 3 - Restrictions on the complexity of the problem: Maximum number of subchannels is 25; maximum number of heated surfaces is 46
Maximum likelihood of phylogenetic networks.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2006-11-01
Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Beside the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf
Maximum entropy principle for transportation
International Nuclear Information System (INIS)
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
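A constrained maximum-entropy trip distribution of the standard (constrained) kind is commonly computed by iterative proportional fitting, balancing a cost-based seed matrix against the trip-end totals. A sketch with made-up totals, costs, and cost-sensitivity parameter (the abstract's dependence formulation replaces these explicit constraints with dependence coefficients):

```python
import numpy as np

origins = np.array([100.0, 200.0])        # trips produced at each origin
dests = np.array([150.0, 80.0, 70.0])     # trips attracted to each destination
cost = np.array([[1.0, 2.0, 3.0],
                 [2.0, 1.0, 1.5]])        # travel-cost matrix (illustrative)
beta = 0.5                                # cost-sensitivity parameter (assumed)

T = np.exp(-beta * cost)                  # maximum-entropy seed matrix
for _ in range(100):                      # iterative proportional fitting
    T *= (origins / T.sum(axis=1))[:, None]   # balance origin (row) totals
    T *= (dests / T.sum(axis=0))[None, :]     # balance destination (column) totals

print(np.round(T.sum(axis=1)))  # -> [100. 200.]
print(np.round(T.sum(axis=0)))  # -> [150.  80.  70.]
```

The fitted balancing factors play the role of Lagrange multipliers of the entropy-maximization problem.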
Hydraulic Limits on Maximum Plant Transpiration
Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.
2011-12-01
Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Unlike previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
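The optimum described above, a leaf water potential balancing driving force against cavitation-limited conductivity, can be located numerically. A sketch of the supply curve E = K(ψ_leaf)(ψ_soil − ψ_leaf) using an assumed Weibull-type vulnerability curve; all parameter values are illustrative, not fitted to any species:

```python
import numpy as np

k_max, psi_50, shape = 2.0, -2.0, 3.0   # conductivity scale, 50%-loss potential, shape
psi_soil = -0.3                          # soil water potential, MPa (assumed)

def conductivity(psi):
    # conductivity declines as water potential becomes more negative
    return k_max * 0.5 ** ((psi / psi_50) ** shape)

psi_leaf = np.linspace(psi_soil, -6.0, 2000)
E = conductivity(psi_leaf) * (psi_soil - psi_leaf)   # transpiration supply

i = int(np.argmax(E))
print(f"maximum transpiration {E[i]:.2f} at psi_leaf = {psi_leaf[i]:.2f} MPa")
```

E vanishes at both ends (no driving force at ψ_leaf = ψ_soil, no conductivity at very negative ψ_leaf), so the maximum sustainable transpiration is the interior peak the code locates.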
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4-2) and cations (Na+, Mg+2, Ca+2, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4-2/Cl- and Mg+2/Na+, and 0.4% for Ca+2/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3-2. Apparent partial molar densities in seawater were
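The quoted salinity precision follows from linear error propagation through the equation of state: ΔS ≈ Δρ / (∂ρ/∂S). A back-of-envelope check; the haline density derivative below is an assumed round number for near-surface seawater, not the study's value:

```python
# Assumed haline density derivative: density rises by roughly
# 7.5e-4 g/mL per 1 g/kg of salinity near the surface.
drho_dS = 7.5e-4  # g/mL per (g/kg), assumption

for rho_precision in (1e-6, 2.3e-6):  # density precisions from the study
    dS = rho_precision / drho_dS
    print(f"density +/- {rho_precision:g} g/mL -> salinity +/- {dS:.4f} g/kg")
```

This lands at the few-thousandths g/kg level quoted in the abstract; the exact figure depends on the equation of state and on the composition corrections the study describes.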
Bayesian interpretation of Generalized empirical likelihood by maximum entropy
Rochet, Paul
2011-01-01
We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...
Current opinion about maximum entropy methods in Moessbauer spectroscopy
International Nuclear Information System (INIS)
Szymanski, K
2009-01-01
Current opinion about Maximum Entropy Methods in Moessbauer Spectroscopy is presented. The most important advantage offered by the method is correct data processing under circumstances of incomplete information. A disadvantage is the sophisticated algorithm and the need to adapt it to each specific problem.
The maximum number of minimal codewords in long codes
DEFF Research Database (Denmark)
Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.
2013-01-01
Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981 by...
Determination of the maximum-depth to potential field sources by a maximum structural index method
Fedi, M.; Florio, G.
2013-01-01
A simple and fast determination of the limiting depth to the sources can significantly aid data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
National Research Council Canada - National Science Library
Shapiro, Howard M
2003-01-01
Contents (fragment): Conflict: Resolution; 1.3 Problem Number One: Finding The Cell(s); Flow Cytometry: Quick on the Trigger; The Main Event; The Pulse Quickens, the Plot Thickens; 1.4 Flow Cytometry: ...
Modeling and Solving the Liner Shipping Service Selection Problem
DEFF Research Database (Denmark)
Karsten, Christian Vad; Balakrishnan, Anant
We address a tactical planning problem, the Liner Shipping Service Selection Problem (LSSSP), facing container shipping companies. Given estimated demand between various ports, the LSSSP entails selecting the best subset of non-simple cyclic sailing routes from a given pool of candidate routes...... to accurately model transshipment costs and incorporate routing policies such as maximum transit time, maritime cabotage rules, and operational alliances. Our hop-indexed arc flow model is smaller and easier to solve than path flow models. We outline a preprocessing procedure that exploits both the routing...... requirements and the hop limits to reduce problem size, and describe techniques to accelerate the solution procedure. We present computational results for realistic problem instances from the benchmark suite LINER-LIB....
Periaux, J.
1979-01-01
The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of substituting for the nonlinear equation a minimization problem in an H^{-1}-type Sobolev functional space.
A maximum likelihood framework for protein design
Directory of Open Access Journals (Sweden)
Philippe Hervé
2006-06-01
Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.
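The Toeplitz/Levinson machinery mentioned in the abstract is the standard Levinson-Durbin recursion for the prediction-error filter; the sketch below is a generic textbook version, not the authors' seismological code. The reflection coefficient `k` computed at each step is the quantity whose magnitude staying below 1 keeps the extrapolation stable:

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for AR/prediction coefficients.
    r: autocorrelation sequence r[0..order]; returns (coeffs, prediction_error)."""
    a = []
    err = r[0]
    for m in range(1, order + 1):
        # reflection coefficient; |k| < 1 for a stable prediction-error filter
        k = (r[m] - sum(a[i] * r[m - 1 - i] for i in range(len(a)))) / err
        # Levinson update: fold the time-reversed coefficients back in
        a = [ai - k * aj for ai, aj in zip(a, reversed(a))] + [k]
        err *= (1.0 - k * k)
    return a, err

# Autocorrelation of an AR(1) process x_t = 0.5 x_{t-1} + e_t: r[lag] = 0.5**lag
a, err = levinson_durbin([1.0, 0.5, 0.25], 2)
print(a)  # first coefficient recovers 0.5, second is ~0
```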
Maximum entropy deconvolution of low count nuclear medicine images
International Nuclear Information System (INIS)
McGrath, D.M.
1998-12-01
Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
Maximum mass of magnetic white dwarfs
International Nuclear Information System (INIS)
Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez
2015-01-01
We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10^13 G. This fact establishes an upper bound for the magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses, because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound of B ∼ 10^13 G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)
Mammographic image restoration using maximum entropy deconvolution
International Nuclear Information System (INIS)
Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R
2004-01-01
An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization
Electrical Discharge Platinum Machining Optimization Using Stefan Problem Solutions
Directory of Open Access Journals (Sweden)
I. B. Stavitskiy
2015-01-01
Full Text Available The article presents theoretical results on the workability of platinum by electrical discharge machining (EDM), based on the solution of the thermal problem with a moving phase-change boundary, i.e. the Stefan problem. The problem solution enables defining the surface melt penetration of the material under a given heat flow as a function of the time of its action and the physical properties of the processed material. To determine rational EDM operating conditions for platinum, the article suggests relating its workability to the machinability of materials for which rational EDM operating conditions are currently defined. It is shown that at the low heat flow densities corresponding to finishing EDM operating conditions, the processing conditions used for steel 45 are appropriate for platinum machining; for EDM at higher heat flow densities (e.g. 50 GW/m^2), copper processing conditions are used for this purpose; at the high heat flow densities corresponding to heavy roughing EDM it is reasonable to use tungsten processing conditions. The article also shows how the minimum width of the current pulses at which platinum starts melting, and hence at which the EDM process becomes possible, depends on the heat flow density. It is shown that the processing of platinum is expedient at a pulse width corresponding to values called the effective pulse width. Exceeding these values does not lead to a substantial increase in removal of material per pulse, but considerably reduces the maximum repetition rate and therefore the EDM capacity. The paper shows the effective pulse width versus the heat flow density. It also presents the dependences of the maximum platinum surface melt penetration and the corresponding pulse width on the heat flow density. Results obtained using solutions of the Stefan heat problem can be used to optimize EDM operating conditions for platinum machining.
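The moving-boundary behaviour the article exploits is captured by the classical one-phase Stefan (Neumann) solution, in which the melt depth grows as delta(t) = 2*lam*sqrt(alpha*t) and lam solves a transcendental equation in the Stefan number. This is a generic sketch of that textbook solution, not the article's platinum-specific model:

```python
import math

def stefan_lambda(stefan_number):
    """Solve lam * exp(lam^2) * erf(lam) = Ste / sqrt(pi) by bisection
    (classical one-phase Neumann solution of the Stefan problem)."""
    f = lambda lam: (lam * math.exp(lam * lam) * math.erf(lam)
                     - stefan_number / math.sqrt(math.pi))
    lo, hi = 1e-9, 5.0          # f is negative at lo, positive at hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def melt_depth(t, alpha, stefan_number):
    """Melt-front position delta(t) = 2*lam*sqrt(alpha*t), alpha = diffusivity."""
    return 2.0 * stefan_lambda(stefan_number) * math.sqrt(alpha * t)

lam = stefan_lambda(0.5)  # illustrative Stefan number, not a platinum value
print(round(lam, 3))
```

The square-root growth of the melt front is what makes pulses longer than the "effective pulse width" unproductive: doubling the pulse duration increases melt depth by only a factor of sqrt(2).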
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are found by differentiating the expression for power and locating its maximum. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
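The maximize-P(V) = V*I(V) procedure described above can be sketched with a single-diode panel model; the parameter values below (short-circuit current, saturation current, thermal voltage) are illustrative assumptions, and the maximum is located numerically rather than by symbolic differentiation:

```python
import math

def current(v, i_sc=5.0, i_0=1e-9, v_t=1.2):
    """Single-diode model: I(V) = Isc - I0*(exp(V/Vt) - 1). Parameters assumed."""
    return i_sc - i_0 * (math.exp(v / v_t) - 1.0)

def maximum_power_point(v_max=30.0, steps=30000):
    """Find the voltage where P(V) = V*I(V) peaks, i.e. where dP/dV = 0,
    by scanning a fine voltage grid."""
    best_v, best_p = 0.0, 0.0
    for i in range(steps + 1):
        v = v_max * i / steps
        p = v * current(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

v_mp, p_mp = maximum_power_point()
print(round(v_mp, 2), round(p_mp, 1))
```

For these assumed parameters the maximum-power voltage sits a little below the open-circuit voltage Vt*ln(Isc/I0 + 1), as expected from the shape of the I-V curve.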
On an Objective Basis for the Maximum Entropy Principle
Directory of Open Access Journals (Sweden)
David J. Miller
2015-01-01
Full Text Available In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has stronger objective basis, rooted in results from information theory, than their alternative proposed solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.
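The underlying estimation task (probabilities consistent with measured constraint information) has a well-known closed form: the ME solution is an exponential family p_i ∝ exp(λ f_i), with λ chosen so the constraint holds. A minimal sketch for a die constrained to a given mean, the classic Jaynes example rather than Neapolitan and Jiang's specific setup:

```python
import math

def maxent_die(target_mean, faces=6):
    """Maximum-entropy distribution over faces 1..n with a fixed mean:
    p_i proportional to exp(lam * i); solve for lam by bisection."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, faces + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, faces + 1), w)) / z
    lo, hi = -10.0, 10.0        # mean(lam) is increasing in lam
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)  # Jaynes' classic constraint: mean of 4.5 instead of 3.5
print([round(pi, 3) for pi in p])
```

The result is the flattest (highest-entropy) distribution that still reproduces the measured mean, which is the objective basis the letter debates.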
Network Model for The Problem of Integer Balancing of a Fourdimensional Matrix
Directory of Open Access Journals (Sweden)
A. V. Smirnov
2016-01-01
Full Text Available The problem of integer balancing of a four-dimensional matrix is studied. The elements of the inner part (all four indices are greater than zero) of the given real matrix are summed in each direction and in each two- and three-dimensional section of the matrix; the total sum is also found. These sums are placed into the elements where one or more indices are equal to zero (according to the summing directions). The problem is to find an integer matrix of the same structure, which can be produced from the initial one by replacing each element with the largest previous or the smallest following integer. At the same time, the element with four zero indices should be produced with standard rounding-off rules. The article also studies the problem of finding the maximum multiple flow in a network of any natural multiplicity. There are arcs of three types: ordinary arcs, multiple arcs and multi-arcs. Each multiple and multi-arc is a union of linked arcs, which are adjusted with each other. The network construction rules are described. The definitions of a divisible network and some associated subjects are stated. The basic principles are defined for reducing the integer balancing problem of an -dimensional matrix ( to the problem of finding the maximum flow in a divisible multiple network of multiplicity . The rules for reducing the four-dimensional balancing problem to the maximum flow problem in a network of multiplicity 5 are stated. The algorithm of finding the maximum flow, which meets the solvability conditions for the integer balancing problem, is formulated for such a network.
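The maximum-flow subproblem that the balancing problem reduces to can be illustrated on an ordinary network with the standard Edmonds-Karp algorithm; the paper's multiple-arc and multi-arc machinery is not reproduced here:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    capacity: dict-of-dicts {u: {v: cap}}; returns the maximum s-t flow value."""
    # residual capacities, including zero-capacity reverse arcs
    res = {u: dict(nb) for u, nb in capacity.items()}
    for u, nb in capacity.items():
        for v in nb:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:        # BFS for a shortest augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                 # no augmenting path left: done
            return flow
        path, v = [], t                     # recover the path, find bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)
        for u, v in path:                   # update residual capacities
            res[u][v] -= push
            res[v][u] += push
        flow += push

g = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}, 't': {}}
print(max_flow(g, 's', 't'))  # -> 4
```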
DEFF Research Database (Denmark)
Nielsen, Erland Hejn
2003-01-01
of the estimation of the probability of critically delayed delivery beyond a specified threshold value given a certain production batch size and try to establish a relation to certain parameters that can be linked to the degree of regularity of the arrival stream of parts to the job/flow-shop. This last aspect...... relates remotely to the Lean Thinking philosophy that praises the smooth and uninterrupted production flow to be beneficial to the overall operation of productive plants in general, and we will link our findings to this discussion as well....
DEFF Research Database (Denmark)
Nielsen, Erland Hejn
2003-01-01
In this paper we will discuss aspects of the computation of tail-probabilities by simulation in the context of a generic job/flow-shop model consisting of structural elements such as bottle-necks, re-entrance as well as a mixture of these two fundamental types of production complexity and all thi...... relates remotely to the Lean Thinking philosophy that praises the smooth and uninterrupted production flow to be beneficial to the overall operation of productive plants in general, and we will link our findings to this discussion as well....
International Nuclear Information System (INIS)
Cartalade, Alain
2002-01-01
This research thesis concerns the modelling of aquifer flows under the CEA/Cadarache site. The author reports the implementation of a numerical simulation tool adapted to large scale flows in fractured media, and its application to the Cadarache nuclear site. After a description of the site geological and hydrogeological characteristics, the author presents the conceptual model on which the modelling is based, presents the inverse model which allows a better definition of parameters, reports the validation of the inverse approach by means of synthetic and semi-synthetic cases. Then, he reports experiments and simulation of the Cadarache site
DEFF Research Database (Denmark)
Farrell, A P; Steffensen, J F
1987-01-01
The maximum aerobic swimming speed of Chinook salmon (Oncorhynchus tshawytscha) was measured before and after ligation of the coronary artery. Coronary artery ligation prevented blood flow to the compact layer of the ventricular myocardium, which represents 30% of the ventricular mass, and produced...... a statistically significant 35.5% reduction in maximum swimming speed. We conclude that the coronary circulation is important for maximum aerobic swimming and implicit in this conclusion is that maximum cardiac performance is probably necessary for maximum aerobic swimming performance....
International Nuclear Information System (INIS)
Maklakov, D.V.
1995-01-01
A numerical-analytic method of calculating a subcritical flow over an obstruction is proposed. The method is based on identifying the asymptotic behavior of the wave train in the unknown functions. It makes it possible to calculate both steep and long waves. The effectiveness of the method is demonstrated on the problem of flow over a vortex. The concept of the limiting flow regime, i.e. a regime with the maximum value of the perturbation parameter for which steady flow still persists, is introduced. Various types of limiting regimes obtained in the calculations are analyzed.
What controls the maximum magnitude of injection-induced earthquakes?
Eaton, D. W. S.
2017-12-01
Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
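McGarr's deterministic limit is the product of shear modulus and net injected volume, M0_max = G*dV, which converts to a moment magnitude via the standard Hanks-Kanamori relation. A sketch with illustrative numbers (the values of G and dV below are assumptions, not figures from the abstract):

```python
import math

def mcgarr_max_magnitude(shear_modulus_pa, injected_volume_m3):
    """Upper-bound moment magnitude for injection-induced seismicity
    (McGarr 2014): M0 <= G * dV, then Mw = (2/3) * (log10(M0) - 9.1)."""
    m0 = shear_modulus_pa * injected_volume_m3  # seismic moment bound, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative: G = 30 GPa and 1e5 m^3 of net injected fluid
mw = mcgarr_max_magnitude(3.0e10, 1.0e5)
print(round(mw, 2))  # -> 4.25
```

This makes the risk-management implication concrete: under this model, capping the net injected volume directly caps the maximum magnitude, which the other two approaches do not guarantee.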
Jorgensen, Donald G.; Signor, Donald C.; Imes, Jeffrey L.
1989-01-01
Intracell flow is important in modeling cells that contain both sources and sinks. Special attention is needed if recharge through the water table is a source. One method of modeling multiple sources and sinks is to determine the net recharge per cell. For example, for a model cell containing both a sink and recharge through the water table, the amount of recharge should be reduced by the ratio of the area of influence of the sink within the cell to the area of the cell. The reduction is the intercepted portion of the recharge. In a multilayer model this amount is further reduced by a proportion factor, which is a function of the depth of the flow lines from the water table boundary to the internal sink. A gaining section of a stream is a typical sink. The aquifer contribution to a gaining stream can be conceptualized as having two parts; the first part is the intercepted lateral flow from the water table and the second is the flow across the streambed due to differences in head between the water level in the stream and the aquifer below. The amount intercepted is a function of the geometry of the cell, but the amount due to difference in head across the stream bed is largely independent of cell geometry. A discharging well can intercept recharge through the water table within a model cell. The net recharge to the cell would be reduced in proportion to the area of influence of the well within the cell. The area of influence generally changes with time. Thus the amount of intercepted recharge and net recharge may not be constant with time. During periods when the well is not discharging there will be no intercepted recharge even though the area of influence from previous pumping may still exist. The reduction of net recharge per cell due to internal interception of flow will result in a model-calculated mass balance less than the prototype. Additionally the “effective transmissivity” along the intercell flow paths may be altered when flow paths are occupied by
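The bookkeeping described above, reducing a cell's recharge by the sink's intercepted share and then by a depth-dependent proportion factor in multilayer models, is simple arithmetic; this sketch encodes that reasoning with made-up numbers:

```python
def net_recharge(recharge, cell_area, influence_area, proportion=1.0):
    """Recharge assigned to a model cell after removing the part intercepted
    by an internal sink: reduce by influence_area / cell_area, scaled by the
    multilayer proportion factor (1.0 for a single-layer model)."""
    intercepted = recharge * (influence_area / cell_area) * proportion
    return recharge - intercepted

# A cell where a well's area of influence covers 25% of the cell
print(net_recharge(100.0, 1.0e6, 2.5e5))  # -> 75.0
```

Because the area of influence changes with pumping history, `influence_area` (and hence the net recharge) would be recomputed per stress period rather than held constant.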
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Flow conditions of fresh mortar and concrete in different pipes
International Nuclear Information System (INIS)
Jacobsen, Stefan; Haugan, Lars; Hammer, Tor Arne; Kalogiannidis, Evangelos
2009-01-01
The variation in fresh concrete flow rate over the pipe cross section was investigated on differently coloured and highly flowable concrete mixes flowing through pipes of different materials (rubber, steel, acryl). First, uncoloured (gray) concrete was poured through the pipe and the pipe blocked. Similar but coloured (black) concrete was then poured into the pipe filled with gray concrete, flowing after the gray concrete for a while before being blocked and hardened. The advance of the colouring along the pipe wall (showing boundary flow rate) was observed on the moulded concrete surface appearing after removing the pipe from the hardened concrete. The shapes of the interfaces between uncoloured and coloured concrete (showing variation of flow rate over the pipe cross section) were observed on sawn surfaces of concrete half cylinders cut along the length axes of the concrete-filled pipe. Flow profiles over the pipe cross section were clearly seen with maximum flow rates near the centre of the pipe and low flow rate at the pipe wall (typically rubber pipe with reference concrete without silica fume and/or stabilizers). More plug-shaped profiles, with long slip layers and less variation of flow rate over the cross section, were also seen (typically in smooth acrylic pipes). Flow rate, amount of concrete sticking to the wall after flow and SEM-images of pipe surface roughness were observed, illustrating the problem of testing full scale pumping.
Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle
Ge Cheng; Zhenyu Zhang; Moses Ntanda Kyebambe; Nasser Kimbugwe
2016-01-01
Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that...
A Maximum Principle for SDEs of Mean-Field Type
Energy Technology Data Exchange (ETDEWEB)
Andersson, Daniel, E-mail: danieand@math.kth.se; Djehiche, Boualem, E-mail: boualem@math.kth.se [Royal Institute of Technology, Department of Mathematics (Sweden)
2011-06-15
We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.
A Maximum Principle for SDEs of Mean-Field Type
International Nuclear Information System (INIS)
Andersson, Daniel; Djehiche, Boualem
2011-01-01
We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-03-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
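The contrast drawn above between maximum-likelihood (ground-state) decoding and finite-temperature maximum-entropy decoding can be sketched by computing exact Boltzmann marginals of a tiny Ising cost function; this toy replaces the annealer's sampling with brute-force enumeration:

```python
import math
from itertools import product

def boltzmann_bit_marginals(energy, n, beta):
    """Exact maximum-entropy decoding for an n-spin cost function:
    marginal P(s_i = +1) under the Boltzmann distribution exp(-beta * E)."""
    z = 0.0
    marg = [0.0] * n
    for s in product((-1, 1), repeat=n):
        w = math.exp(-beta * energy(s))
        z += w
        for i, si in enumerate(s):
            if si == 1:
                marg[i] += w
    return [m / z for m in marg]

# Toy decoding cost: a ferromagnetic pair coupling plus a small bias field
energy = lambda s: -s[0] * s[1] - 0.5 * s[0]
p = boltzmann_bit_marginals(energy, 2, beta=1.0)
print([round(pi, 3) for pi in p])
```

Maximum-likelihood decoding would return only the ground state (+1, +1); the maximum-entropy marginals also favour +1 for each bit but additionally quantify the confidence contributed by the excited states.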
Direct maximum parsimony phylogeny reconstruction from genotype data.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2007-12-05
Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstructing maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data, so phylogenetic applications for autosomal data must rely on other methods to first computationally infer haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
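The quantity being minimised, the parsimony score, can be illustrated on the easier subproblem of scoring a fixed tree over already-phased (haplotype) sequences, using Fitch's small-parsimony algorithm. This is a hedged toy sketch for intuition only: the paper solves the much harder problem of finding the tree directly from unphased genotypes, which this code does not attempt.

```python
def fitch_score(tree, seqs):
    """Minimum number of mutations needed to explain the leaf sequences on a
    fixed rooted binary tree (Fitch's small-parsimony algorithm).
    `tree` is a nested tuple of leaf names; `seqs` maps each name to a string
    of equal length (e.g. binary haplotype characters)."""
    length = len(next(iter(seqs.values())))
    total = 0
    for pos in range(length):
        def states(node):
            # Returns (candidate state set, mutation count) for the subtree.
            if isinstance(node, str):                  # leaf
                return {seqs[node][pos]}, 0
            (ls, lc), (rs, rc) = states(node[0]), states(node[1])
            common = ls & rs
            if common:                                 # children can agree
                return common, lc + rc
            return ls | rs, lc + rc + 1                # one mutation forced
        total += states(tree)[1]
    return total
```

For example, on the tree ((A,B),(C,D)) with haplotypes A=00, B=01, C=10, D=11, site 1 forces one mutation (0 vs 1 across the root) and site 2 forces two, for a parsimony score of 3. The minimum over all trees of this score is the lower bound on mutation count that the abstract's downstream applications rely on.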
Revealing the Maximum Strength in Nanotwinned Copper
DEFF Research Database (Denmark)
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Modelling maximum canopy conductance and transpiration in ...
African Journals Online (AJOL)
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
The maximum entropy method of moments and Bayesian probability theory
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often this distribution can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed, along with some of its problems and the conditions under which it fails. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
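The maximum entropy method of moments that the abstract reviews has the well-known exponential-family form p_i ∝ exp(Σ_k λ_k x_i^k), with the Lagrange multipliers λ_k chosen to match the given moments. A minimal sketch with a single (first-moment) constraint on a finite support is below; since the constrained mean is monotone in λ, the multiplier can be found by simple bisection. The function name and the bisection bracket are assumptions of this example, not from the paper.

```python
import math

def maxent_mean(support, target_mean, tol=1e-10):
    """Maximum-entropy distribution on a finite support subject to a mean
    constraint: p_i proportional to exp(lam * x_i), with the Lagrange
    multiplier lam chosen by bisection so that sum_i p_i x_i = target_mean."""
    def mean(lam):
        w = [math.exp(lam * x) for x in support]
        Z = sum(w)
        return sum(wi * x for wi, x in zip(w, support)) / Z

    lo, hi = -50.0, 50.0          # assumed bracket; mean(lam) is increasing
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in support]
    Z = sum(w)
    return [wi / Z for wi in w]
```

When the constrained mean equals the unconstrained one (e.g. target 2.5 on support {0,...,5}), the multiplier vanishes and the maximum-entropy solution reduces to the uniform distribution, as expected. The Bayesian treatment the abstract advocates would go further and put posterior distributions, and hence error bars, on these multipliers rather than point estimates.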