WorldWideScience

Sample records for maximum flow problem

  1. Inverse feasibility problems of the inverse maximum flow problems

    Indian Academy of Sciences (India)

    Pp. 199–209. © Indian Academy of Sciences. Inverse feasibility problems of the inverse maximum flow problems. Adrian Deaconu and Eleonor Ciurea, Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Brasov, Iuliu Maniu st. 50, Brasov, Romania.

  2. A polynomial time algorithm for solving the maximum flow problem in directed networks

    International Nuclear Information System (INIS)

    Tlas, M.

    2015-01-01

    An efficient polynomial time algorithm for solving maximum flow problems has been proposed in this paper. The algorithm is based on the binary representation of capacities; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m²r) time, where r is the smallest integer greater than or equal to log B, and B is the largest arc capacity of the network. A numerical example illustrating the proposed algorithm is given. (author)
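
    The abstract above describes a scaling-type approach built on the binary representation of capacities. As a rough illustration of that general idea (a minimal sketch of classical capacity scaling, not the author's algorithm), the code below finds augmenting paths by BFS on the residual network, restricted to arcs whose residual capacity is at least a threshold that is halved between phases.

    ```python
    from collections import defaultdict, deque

    def max_flow_capacity_scaling(edges, s, t):
        """Capacity-scaling max flow sketch: augment along residual paths whose
        capacity is at least delta, halving delta after each phase."""
        cap = defaultdict(int)          # residual capacities
        adj = defaultdict(set)
        for u, v, c in edges:
            cap[(u, v)] += c
            adj[u].add(v)
            adj[v].add(u)               # reverse (residual) arc, capacity 0 initially

        def find_path(delta):
            # BFS restricted to residual arcs with capacity >= delta
            parent = {s: None}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in parent and cap[(u, v)] >= delta:
                        parent[v] = u
                        queue.append(v)
            if t not in parent:
                return None
            path, v = [], t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            return path

        flow, delta = 0, 1
        while delta * 2 <= max(c for _, _, c in edges):
            delta *= 2                  # largest power of two not exceeding max capacity
        while delta >= 1:
            while True:
                path = find_path(delta)
                if path is None:
                    break
                bottleneck = min(cap[e] for e in path)
                for u, v in path:       # push flow and update residual arcs
                    cap[(u, v)] -= bottleneck
                    cap[(v, u)] += bottleneck
                flow += bottleneck
            delta //= 2
        return flow

    # Small hypothetical network: max flow from node 0 to node 3 is 5.
    print(max_flow_capacity_scaling([(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)], 0, 3))
    ```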

  3. Cooperative Strategies for Maximum-Flow Problem in Uncertain Decentralized Systems Using Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Hadi Heidari Gharehbolagh

    2016-01-01

    Full Text Available This study investigates a multiowner maximum-flow network problem which suffers from risky events. Uncertain conditions affect proper estimation, and ignoring them may mislead decision makers through overestimation. A key question is how the self-governing owners in the network can cooperate with each other to maintain a reliable flow. The question is answered by providing a mathematical programming model based on applying the triangular reliability function in decentralized networks. The proposed method concentrates on multiowner networks whose arcs suffer from risky time, cost, and capacity parameters. Some cooperative game methods such as τ-value, Shapley, and core center are presented to fairly distribute the extra profit of cooperation. A numerical example including sensitivity analysis and the results of comparisons are presented. The proposed method brings more realism to decision-making for risky systems, and hence leads to significant gains through realistic cost estimation compared with ignoring such effects.
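
    Since the study distributes the extra profit of cooperation with allocation rules such as the Shapley value, a minimal illustration of that rule may help. The sketch below uses an entirely hypothetical two-owner characteristic function (not data from the paper) and computes exact Shapley values by averaging marginal contributions over all join orders, which is practical only for a handful of owners.

    ```python
    from itertools import permutations
    from math import factorial

    def shapley_values(players, value):
        # Exact Shapley values: average each player's marginal contribution
        # over every ordering in which the coalition can form.
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = set()
            for p in order:
                before = value(frozenset(coalition))
                coalition.add(p)
                phi[p] += value(frozenset(coalition)) - before
        n_orders = factorial(len(players))
        return {p: total / n_orders for p, total in phi.items()}

    # Hypothetical characteristic function: profit each coalition of owners secures.
    v = {frozenset(): 0, frozenset({"A"}): 4, frozenset({"B"}): 6, frozenset({"A", "B"}): 14}
    print(shapley_values(["A", "B"], lambda S: v[S]))   # {'A': 6.0, 'B': 8.0}
    ```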

  4. Modelling information flow along the human connectome using maximum flow.

    Science.gov (United States)

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept of maximum flow provide insight into how network structure shapes information flow, in contrast to graph theory, and suggest future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
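
    As a concrete illustration of the framework's core computation — treating connection strengths as capacities and computing maximum flow between two regions over all paths — the sketch below uses NetworkX on a small, entirely hypothetical weighted graph (the node names and weights are placeholders, not connectome data).

    ```python
    import networkx as nx

    # Hypothetical connectivity graph; 'capacity' plays the role of connection
    # strength, i.e. an upper bound on information flow per unit time.
    G = nx.DiGraph()
    for u, v, w in [("A", "B", 3.0), ("B", "D", 1.5), ("A", "C", 2.0), ("C", "D", 2.5), ("B", "C", 1.0)]:
        G.add_edge(u, v, capacity=w)
        G.add_edge(v, u, capacity=w)   # undirected connection modelled as two arcs

    flow_value, flow_dict = nx.maximum_flow(G, "A", "D")
    print(flow_value)                  # aggregates flow over all paths, not only the shortest
    ```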

  5. Analytical solution for the problem of maximum exit velocity under Coulomb friction in gravity flow discharge chutes

    Energy Technology Data Exchange (ETDEWEB)

    Salinic, Slavisa [University of Kragujevac, Faculty of Mechanical Engineering, Kraljevo (RS)

    2010-10-15

    In this paper, an analytical solution for the problem of finding profiles of gravity flow discharge chutes required to achieve maximum exit velocity under Coulomb friction is obtained by application of variational calculus. The model of a particle which moves down a rough curve in a uniform gravitational field is used to obtain a solution of the problem for various boundary conditions. The projection sign of the normal reaction force of the rough curve onto the normal to the curve and the restriction requiring that the tangential acceleration be non-negative are introduced as the additional constraints in the form of inequalities. These inequalities are transformed into equalities by introducing new state variables. Although this is fundamentally a constrained variational problem, by further introducing a new functional with an expanded set of unknown functions, it is transformed into an unconstrained problem where broken extremals appear. The obtained equations of the chute profiles contain a certain number of unknown constants which are determined from a corresponding system of nonlinear algebraic equations. The obtained results are compared with the known results from the literature. (orig.)

  6. MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY

    Directory of Open Access Journals (Sweden)

    Grigory B. Sizykh

    2017-01-01

    Full Text Available The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the velocity magnitude is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the shape of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered flow region the pressure is a function of density. For an ideal and perfect gas (the role of diffusion is negligible and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density if the entropy is constant in the entire considered flow region. An example is shown of a stationary subsonic irrotational flow in which the entropy has different values on different streamlines and the pressure is not a function of density. Applying the maximum principle for subsonic flow to such a flow would be unjustified. This example shows the relevance of the question of where the points of maximum velocity are located if the entropy is not constant. To clarify the regularities governing the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed for the 3-D case. A new proof of the maximum principle for subsonic flow is proposed. This proof does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow is true for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.

  7. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...

  8. A local search heuristic for the Multi-Commodity k-splittable Maximum Flow Problem

    DEFF Research Database (Denmark)

    Gamst, Mette

    2014-01-01

    , a local search heuristic for solving the problem is proposed. The heuristic is an iterative shortest path procedure on a reduced graph combined with a local search procedure to modify certain path flows and prioritize the different commodities. The heuristic is tested on benchmark instances from...

  9. Wavelength selection in injection-driven Hele-Shaw flows: A maximum amplitude criterion

    Science.gov (United States)

    Dias, Eduardo; Miranda, Jose

    2013-11-01

    As in most interfacial flow problems, the standard theoretical procedure to establish wavelength selection in the viscous fingering instability is to maximize the linear growth rate. However, there are important discrepancies between previous theoretical predictions and existing experimental data. In this work we perform a linear stability analysis of the radial Hele-Shaw flow system that takes into account the combined action of viscous normal stresses and wetting effects. Most importantly, we introduce an alternative selection criterion for which the selected wavelength is determined by the maximum of the interfacial perturbation amplitude. The effectiveness of such a criterion is substantiated by the significantly improved agreement between theory and experiments. We thank CNPq (Brazilian Sponsor) for financial support.

  10. Two- and three-index formulations of the minimum cost multicommodity k-splittable flow problem

    DEFF Research Database (Denmark)

    Gamst, Mette; Jensen, Peter Neergaard; Pisinger, David

    2010-01-01

    The multicommodity flow problem (MCFP) considers the efficient routing of commodities from their origins to their destinations subject to capacity restrictions and edge costs. Baier et al. [G. Baier, E. Köhler, M. Skutella, On the k-splittable flow problem, in: 10th Annual European Symposium...... of commodities has to be satisfied at the lowest possible cost. The problem has applications in transportation problems where a number of commodities must be routed, using a limited number of distinct transportation units for each commodity. Based on a three-index formulation by Truffot et al. [J. Truffot, C...... on Algorithms, 2002, 101–113] introduced the maximum flow multicommodity k-splittable flow problem (MCkFP) where each commodity may use at most k paths between its origin and its destination. This paper studies the NP-hard minimum cost multicommodity k-splittable flow problem (MCMCkFP) in which a given flow...

  11. Handelman's hierarchy for the maximum stable set problem

    NARCIS (Netherlands)

    Laurent, M.; Sun, Z.

    2014-01-01

    The maximum stable set problem is a well-known NP-hard problem in combinatorial optimization, which can be formulated as the maximization of a quadratic square-free polynomial over the (Boolean) hypercube. We investigate a hierarchy of linear programming relaxations for this problem, based on a

  12. A Maximum Entropy Method for a Robust Portfolio Problem

    Directory of Open Access Journals (Sweden)

    Yingying Xu

    2014-06-01

    Full Text Available We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.

  13. Strongly coupled single-phase flow problems: Effects of density variation, hydrodynamic dispersion, and first order decay

    Energy Technology Data Exchange (ETDEWEB)

    Oldenburg, C.M.; Pruess, K. [Lawrence Berkeley Laboratory, Berkeley, CA (United States)

    1995-03-01

    We have developed TOUGH2 modules for strongly coupled flow and transport that include full hydrodynamic dispersion. T2DM models two-dimensional flow and transport in systems with variable salinity, while T2DMR includes radionuclide transport with first-order decay of a parent-daughter chain of radionuclide components in variable salinity systems. T2DM has been applied to a variety of coupled flow problems including the pure solutal convection problem of Elder and the mixed free and forced convection salt-dome flow problem. In the Elder and salt-dome flow problems, density changes of up to 20% caused by brine concentration variations lead to strong coupling between the velocity and brine concentration fields. T2DM efficiently calculates flow and transport for these problems. We have applied T2DMR to the dispersive transport and decay of radionuclide tracers in flow fields with permeability heterogeneities and recirculating flows. Coupling in these problems occurs by velocity-dependent hydrodynamic dispersion. Our results show that the maximum daughter species concentration may occur fully within a recirculating or low-velocity region. In all of the problems, we observe very efficient handling of the strongly coupled flow and transport processes.

  14. An Efficient Algorithm for the Maximum Distance Problem

    Directory of Open Access Journals (Sweden)

    Gabrielle Assunta Grün

    2001-12-01

    Full Text Available Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central in many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events. They begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely we consider the problem of finding the maximum "distance" between two vertices in a chain; this problem arises in real-world applications such as process control and crew scheduling. We describe an O(n) time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
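
    The chain case described above admits a very simple realization: after an O(n) pass that records a prefix count of strict '<' edges, the maximum "distance" (number of < edges) between any two vertices on the chain is a difference of two prefix values. The sketch below is a minimal illustration of that idea, not the authors' metagraph implementation.

    ```python
    def preprocess(chain_relations):
        # chain_relations[i] is the label ('<' or '<=') of the edge between
        # vertex i and vertex i + 1 on the chain; O(n) prefix count of strict edges.
        prefix = [0]
        for rel in chain_relations:
            prefix.append(prefix[-1] + (1 if rel == '<' else 0))
        return prefix

    def max_distance(prefix, i, j):
        # Maximum number of '<' edges between vertices i and j (i <= j), answered in O(1).
        return prefix[j] - prefix[i]

    prefix = preprocess(['<', '<=', '<', '<', '<='])
    print(max_distance(prefix, 1, 4))   # 2 strict edges between vertices 1 and 4
    ```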

  15. Topology optimization of flow problems

    DEFF Research Database (Denmark)

    Gersborg, Allan Roulund

    2007-01-01

    This thesis investigates how to apply topology optimization using the material distribution technique to steady-state viscous incompressible flow problems. The target design applications are fluid devices that are optimized with respect to minimizing the energy loss, characteristic properties...... transport in 2D Stokes flow. Using Stokes flow limits the range of applications; nonetheless, the thesis gives a proof-of-concept for the application of the method within fluid dynamic problems and it remains of interest for the design of microfluidic devices. Furthermore, the thesis contributes...... at the Technical University of Denmark. Large topology optimization problems with 2D and 3D Stokes flow modeling are solved with direct and iterative strategies employing the parallelized Sun Performance Library and the OpenMP parallelization technique, respectively....

  16. Low reproducibility of maximum urinary flow rate determined by portable flowmetry

    NARCIS (Netherlands)

    Sonke, G. S.; Kiemeney, L. A.; Verbeek, A. L.; Kortmann, B. B.; Debruyne, F. M.; de la Rosette, J. J.

    1999-01-01

    To evaluate the reproducibility in maximum urinary flow rate (Qmax) in men with lower urinary tract symptoms (LUTSs) and to determine the number of flows needed to obtain a specified reliability in mean Qmax, 212 patients with LUTSs (mean age, 62 years) referred to the University Hospital Nijmegen,

  17. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human life in financial, environmental and security terms. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov Chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method is a statistical inference approach that estimates parameters from the posterior distribution based on Bayes’ theorem. The Metropolis-Hastings algorithm is used to cope with the high-dimensional state space faced by the Monte Carlo method. This approach also accounts for more uncertainty in parameter estimation, which yields a better prediction of maximum river flow in Sabah.
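
    For readers unfamiliar with the approach, the sketch below shows a bare-bones random-walk Metropolis-Hastings sampler for GEV parameters. It assumes flat priors and SciPy's genextreme parameterization (whose shape parameter c equals −ξ); it is a generic illustration on synthetic data, not the specific sampler or data used in the study.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    def log_posterior(theta, data):
        # Flat priors assumed for this sketch; theta = (mu, log_sigma, xi).
        mu, log_sigma, xi = theta
        return np.sum(genextreme.logpdf(data, c=-xi, loc=mu, scale=np.exp(log_sigma)))

    def metropolis_hastings(data, n_iter=5000, step=0.05, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.array([np.mean(data), np.log(np.std(data)), 0.1])  # crude start
        logp = log_posterior(theta, data)
        samples = []
        for _ in range(n_iter):
            proposal = theta + step * rng.standard_normal(3)          # random-walk proposal
            logp_prop = log_posterior(proposal, data)
            if np.log(rng.uniform()) < logp_prop - logp:              # accept/reject step
                theta, logp = proposal, logp_prop
            samples.append(theta.copy())
        return np.array(samples)

    # Synthetic "annual maxima" just to exercise the sampler.
    annual_maxima = genextreme.rvs(c=-0.1, loc=100.0, scale=20.0, size=50, random_state=1)
    posterior = metropolis_hastings(annual_maxima)
    print(posterior[-1000:].mean(axis=0))   # rough posterior means of (mu, log_sigma, xi)
    ```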

  18. Topology optimization of Channel flow problems

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Sigmund, Ole; Haber, R. B.

    2005-01-01

    function which measures either some local aspect of the velocity field or a global quantity, such as the rate of energy dissipation. We use the finite element method to model the flow, and we solve the optimization problem with a gradient-based math-programming algorithm that is driven by analytical......This paper describes a topology design method for simple two-dimensional flow problems. We consider steady, incompressible laminar viscous flows at low to moderate Reynolds numbers. This makes the flow problem non-linear and hence a non-trivial extension of the work of [Borrvall&Petersson 2002......]. Further, the inclusion of inertia effects significantly alters the physics, enabling solutions of new classes of optimization problems, such as velocity--driven switches, that are not addressed by the earlier method. Specifically, we determine optimal layouts of channel flows that extremize a cost...

  19. Optimal control problems with delay, the maximum principle and necessary conditions

    NARCIS (Netherlands)

    Frankena, J.F.

    1975-01-01

    In this paper we consider a rather general optimal control problem involving ordinary differential equations with delayed arguments and a set of equality and inequality restrictions on state- and control variables. For this problem a maximum principle is given in pointwise form, using variational

  20. Modeling of the Maximum Entropy Problem as an Optimal Control Problem and its Application to Pdf Estimation of Electricity Price

    Directory of Open Access Journals (Sweden)

    M. E. Haji Abadi

    2013-09-01

    Full Text Available In this paper, the continuous optimal control theory is used to model and solve the maximum entropy problem for a continuous random variable. The maximum entropy principle provides a method to obtain least-biased probability density function (Pdf) estimation. In this paper, to find a closed form solution for the maximum entropy problem with any number of moment constraints, the entropy is considered as a functional measure and the moment constraints are considered as the state equations. Therefore, the Pdf estimation problem can be reformulated as the optimal control problem. Finally, the proposed method is applied to estimate the Pdf of the hourly electricity prices of the New England and Ontario electricity markets. The obtained results show the efficiency of the proposed method.

  1. A maximum modulus theorem for the Oseen problem

    Czech Academy of Sciences Publication Activity Database

    Kračmar, S.; Medková, Dagmar; Nečasová, Šárka; Varnhorn, W.

    2013-01-01

    Roč. 192, č. 6 (2013), s. 1059-1076 ISSN 0373-3114 R&D Projects: GA ČR(CZ) GAP201/11/1304; GA MŠk LC06052 Institutional research plan: CEZ:AV0Z10190503 Keywords : Oseen problem * maximum modulus theorem * Oseen potentials Subject RIV: BA - General Mathematics Impact factor: 0.909, year: 2013 http://link.springer.com/article/10.1007%2Fs10231-012-0258-x

  2. Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.

    Science.gov (United States)

    Ćwik, Michał; Józefczyk, Jerzy

    2018-01-01

    An uncertain version of the permutation flow-shop with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as such a calculation is an NP-hard problem in itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with other, previously developed heuristic algorithms based on the evolutionary and middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the time of computations. The Wilcoxon paired-rank statistical test confirmed this conclusion.

  3. Optimal control of algae growth by controlling CO2 and nutrition flow using Pontryagin Maximum Principle

    Science.gov (United States)

    Mardlijah; Jamil, Ahmad; Hanafi, Lukman; Sanjaya, Suharmadi

    2017-09-01

    There are many benefits of algae. One of them is use as a renewable and sustainable energy source in the future. Greater growth of algae will increase biodiesel production, and the growth of algae is influenced by glucose, nutrients and the photosynthesis process. In this paper, the optimal control problem of the growth of algae is discussed. The objective function is to maximize the concentration of dry algae while the controls are the flow of carbon dioxide and the nutrition. The solution is obtained by applying the Pontryagin Maximum Principle, and the results show that the concentration of algae increases by more than 15%.

  4. Maximum production rate optimization for sulphuric acid decomposition process in tubular plug-flow reactor

    International Nuclear Information System (INIS)

    Wang, Chao; Chen, Lingen; Xia, Shaojun; Sun, Fengrui

    2016-01-01

    A sulphuric acid decomposition process in a tubular plug-flow reactor with fixed inlet flow rate and completely controllable exterior wall temperature profile and reactants pressure profile is studied in this paper by using finite-time thermodynamics. The maximum production rate of the aimed product SO2 and the optimal exterior wall temperature profile and reactants pressure profile are obtained by using a nonlinear programming method. Then the optimal reactor with the maximum production rate is compared with the reference reactor with a linear exterior wall temperature profile and the optimal reactor with minimum entropy generation rate. The result shows that the production rate of SO2 of the optimal reactor with the maximum production rate increases by more than 7%. The optimization of the temperature profile has little influence on the production rate, while the optimization of the reactants pressure profile can significantly increase the production rate. The results obtained may provide some guidelines for the design of real tubular reactors. - Highlights: • Sulphuric acid decomposition process in a tubular plug-flow reactor is studied. • Fixed inlet flow rate and controllable temperature and pressure profiles are set. • Maximum production rate of the aimed product SO2 is obtained. • Corresponding optimal temperature and pressure profiles are derived. • Production rate of SO2 of the optimal reactor increases by 7%.

  5. Spectral maximum entropy hydrodynamics of fermionic radiation: a three-moment system for one-dimensional flows

    International Nuclear Information System (INIS)

    Banach, Zbigniew; Larecki, Wieslaw

    2013-01-01

    The spectral formulation of the nine-moment radiation hydrodynamics resulting from using the Boltzmann entropy maximization procedure is considered. The analysis is restricted to the one-dimensional flows of a gas of massless fermions. The objective of the paper is to demonstrate that, for such flows, the spectral nine-moment maximum entropy hydrodynamics of fermionic radiation is not a purely formal theory. We first determine the domains of admissible values of the spectral moments and of the Lagrange multipliers corresponding to them. We then prove the existence of a solution to the constrained entropy optimization problem. Due to the strict concavity of the entropy functional defined on the space of distribution functions, there exists a one-to-one correspondence between the Lagrange multipliers and the moments. The maximum entropy closure of moment equations results in the symmetric conservative system of first-order partial differential equations for the Lagrange multipliers. However, this system can be transformed into the equivalent system of conservation equations for the moments. These two systems are consistent with the additional conservation equation interpreted as the balance of entropy. Exploiting the above facts, we arrive at the differential relations satisfied by the entropy function and the additional function required to close the system of moment equations. We refer to this additional function as the moment closure function. In general, the moment closure and entropy–entropy flux functions cannot be explicitly calculated in terms of the moments determining the state of a gas. Therefore, we develop a perturbation method of calculating these functions. Some additional analytical (and also numerical) results are obtained, assuming that the maximum entropy distribution function tends to the Maxwell–Boltzmann limit. (paper)

  6. Approximation algorithms for the parallel flow shop problem

    NARCIS (Netherlands)

    X. Zhang (Xiandong); S.L. van de Velde (Steef)

    2012-01-01

    textabstractWe consider the NP-hard problem of scheduling n jobs in m two-stage parallel flow shops so as to minimize the makespan. This problem decomposes into two subproblems: assigning the jobs to parallel flow shops; and scheduling the jobs assigned to the same flow shop by use of Johnson's

  7. Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems

    International Nuclear Information System (INIS)

    Helin, T; Burger, M

    2015-01-01

    A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. Current inverse problem literature often approaches the problem by considering suitable point estimators for the task. Typically the choice is made between the maximum a posteriori (MAP) or the conditional mean (CM) estimate. The benefits of either choice are not well-understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme regarding how to connect the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in the Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation makes it possible to study many properties of the infinite-dimensional MAP estimate that could not be studied before. In a recent work by the authors (Burger and Lucka 2014 Maximum a posteriori estimates in linear inverse problems with logconcave priors are proper bayes estimators preprint) the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical prior. (paper)

  8. 3D Topology optimization of Stokes flow problems

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Dammann, Bernd

    of energy efficient devices for 2D Stokes flow. Creeping flow problems are described by the Stokes equations which model very viscous fluids at macro scales or ordinary fluids at very small scales. The latter gives the motivation for topology optimization problems based on the Stokes equations being a model......The present talk is concerned with the application of topology optimization to creeping flow problems in 3D. This research is driven by the fact that topology optimization has proven very successful as a tool in academic and industrial design problems. Success stories are reported from such diverse...

  9. Problems in fluid flow

    International Nuclear Information System (INIS)

    Brasch, D.J.

    1986-01-01

    Chemical and mineral engineering students require texts which give guidance to problem solving to complement their main theoretical texts. This book has a broad coverage of the fluid flow problems which these students may encounter. The fundamental concepts and the application of the behaviour of liquids and gases in unit operation are dealt with. The book is intended to give numerical practice; development of theory is undertaken only when elaboration of treatments available in theoretical texts is absolutely necessary

  10. Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model

    Science.gov (United States)

    Yang, Yuefang; Gan, Chunhui; Shen, Tingting

    2017-05-01

    In this study of the configuration of railway tankers for a chemical logistics park, the minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park's loading and unloading area and the transportation demand for the dangerous goods are taken as the constraint conditions of the model; then the transport arc capacities, the transport arc flows and the transport arc edge weights are determined in the transportation network diagram; finally, the calculations are carried out in software. The calculation results show that the tanker configuration problem can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical value for tanker management in the railway transportation of dangerous goods in the chemical logistics park.
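
    As a toy illustration of the modelling ingredients named above — arc capacities, arc flows and arc costs in a transport network — the sketch below sets up a tiny minimum-cost flow instance in NetworkX. All node names, capacities and costs are invented for the example and have nothing to do with the study's data.

    ```python
    import networkx as nx

    # Hypothetical network: a depot supplies 10 tanker-loads that must reach the park.
    G = nx.DiGraph()
    G.add_node("depot", demand=-10)                 # source of 10 units
    G.add_node("park", demand=10)                   # sink requiring 10 units
    G.add_edge("depot", "hub_A", capacity=6, weight=4)
    G.add_edge("depot", "hub_B", capacity=7, weight=5)
    G.add_edge("hub_A", "park", capacity=6, weight=3)
    G.add_edge("hub_B", "park", capacity=7, weight=2)

    flow = nx.min_cost_flow(G)                      # cheapest feasible routing
    print(flow, nx.cost_of_flow(G, flow))
    ```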

  11. Lattice Field Theory with the Sign Problem and the Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Masahiro Imachi

    2007-02-01

    Full Text Available Although numerical simulation in lattice field theory is one of the most effective tools to study non-perturbative properties of field theories, it faces serious obstacles coming from the sign problem in some theories such as finite density QCD and lattice field theory with the θ term. We reconsider this problem from the point of view of the maximum entropy method.

  12. Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems

    International Nuclear Information System (INIS)

    Stipanović, Dušan M.; Tomlin, Claire J.; Leitmann, George

    2012-01-01

    In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.

  13. Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems

    Energy Technology Data Exchange (ETDEWEB)

    Stipanovic, Dusan M., E-mail: dusan@illinois.edu [University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, Department of Industrial and Enterprise Systems Engineering (United States); Tomlin, Claire J., E-mail: tomlin@eecs.berkeley.edu [University of California at Berkeley, Department of Electrical Engineering and Computer Science (United States); Leitmann, George, E-mail: gleit@berkeley.edu [University of California at Berkeley, College of Engineering (United States)

    2012-12-15

    In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.

  14. Should measurement of maximum urinary flow rate and residual urine volume be a part of a "minimal care" assessment programme in female incontinence?

    DEFF Research Database (Denmark)

    Sander, Pia; Mouritsen, L; Andersen, J Thorup

    2002-01-01

    OBJECTIVE: The aim of this study was to evaluate the value of routine measurements of urinary flow rate and residual urine volume as a part of a "minimal care" assessment programme for women with urinary incontinence in detecting clinically significant bladder emptying problems. MATERIAL AND METHODS.... Twenty-six per cent had a maximum flow rate less than 15 ml/s, but only 4% at a voided volume > or =200 ml. Residual urine of more than 149 ml was found in 6%. Two women had chronic retention with overflow incontinence. Both had typical symptoms with continuous leakage, stranguria and chronic cystitis...

  15. Monodimensional estimation of maximum Reynolds shear stress in the downstream flow field of bileaflet valves.

    Science.gov (United States)

    Grigioni, Mauro; Daniele, Carla; D'Avenio, Giuseppe; Barbaro, Vincenzo

    2002-05-01

    Turbulent flow generated by prosthetic devices at the bloodstream level may cause mechanical stress on blood particles. Measurement of the Reynolds stress tensor and/or some of its components is a mandatory step to evaluate the mechanical load on blood components exerted by fluid stresses, as well as possible consequent blood damage (hemolysis or platelet activation). Because of the three-dimensional nature of turbulence, in general, a three-component anemometer should be used to measure all components of the Reynolds stress tensor, but this is difficult, especially in vivo. The present study aimed to derive the maximum Reynolds shear stress (RSS) in three commercially available prosthetic heart valves (PHVs) of wide diffusion, starting with monodimensional data provided in vivo by echo Doppler. Accurate measurement of PHV flow field was made using laser Doppler anemometry; this provided the principal turbulence quantities (mean velocity, root-mean-square value of velocity fluctuations, average value of cross-product of velocity fluctuations in orthogonal directions) needed to quantify the maximum turbulence-related shear stress. The recorded data enabled determination of the relationship, the Reynolds stresses ratio (RSR) between maximum RSS and Reynolds normal stress in the main flow direction. The RSR was found to be dependent upon the local structure of the flow field. The reported RSR profiles, which permit a simple calculation of maximum RSS, may prove valuable during the post-implantation phase, when an assessment of valve function is made echocardiographically. Hence, the risk of damage to blood constituents associated with bileaflet valve implantation may be accurately quantified in vivo.

  16. Maximum flow approach to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv from protein-protein interaction network.

    Science.gov (United States)

    Melak, Tilahun; Gakkhar, Sunita

    2015-12-01

    In spite of the implementation of several strategies, tuberculosis (TB) is overwhelmingly a serious global public health problem causing millions of infections and deaths every year. This is mainly due to the emergence of drug-resistant varieties of TB. The current treatment strategies for drug-resistant TB are of longer duration, more expensive and have side effects. This highlights the importance of identification and prioritization of targets for new drugs. This study has been carried out to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv based on their flow to resistance genes. The weighted proteome interaction network of the pathogen was constructed using a dataset from the STRING database. Only a subset of the dataset with interactions that have a combined score value ≥770 was considered. A maximum flow approach has been used to prioritize potential drug targets. The potential drug targets were obtained through comparative genome and network centrality analysis. The curated set of resistance genes was retrieved from the literature. A detailed literature review and additional assessment of the method were also carried out for validation. A list of 537 proteins which are essential to the pathogen and non-homologous with human was obtained from the comparative genome analysis. Through network centrality measures, 131 of them were found within the close neighborhood of the centre of gravity of the proteome network. These proteins were further prioritized based on their maximum flow value to resistance genes and they are proposed as reliable drug targets of the pathogen. Proteins which interact with the host were also identified in order to understand the infection mechanism. Potential drug targets of Mycobacterium tuberculosis H37Rv were successfully prioritized based on their flow to resistance genes of existing drugs which is believed to increase the druggability of the targets since inhibition of a protein that has a maximum flow to

  17. Efficient bounding schemes for the two-center hybrid flow shop scheduling problem with removal times.

    Science.gov (United States)

    Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly

    2014-01-01

    We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-Hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures.

  18. A note on Fenchel cuts for the single-node flow problem

    DEFF Research Database (Denmark)

    Klose, Andreas

    The single-node flow problem, which is also known as the single-sink fixed-charge transportation problem, consists in finding a minimum cost flow from a number of nodes to a single sink. The flow cost comprise an amount proportional to the quantity shipped as well as a fixed charge. In this note......, some structural properties of Fenchel cutting planes for this problem are described. Such cuts might then be applied for solving, e.g., fixed-charge transportation problems and more general fixed-charge network flow problems....

  19. Dynamic Flow Management Problems in Air Transportation

    Science.gov (United States)

    Patterson, Sarah Stock

    1997-01-01

    In 1995, over six hundred thousand licensed pilots flew nearly thirty-five million flights into over eighteen thousand U.S. airports, logging more than 519 billion passenger miles. Since demand for air travel has increased by more than 50% in the last decade while capacity has stagnated, congestion is a problem of undeniable practical significance. In this thesis, we will develop optimization techniques that reduce the impact of congestion on the national airspace. We start by determining the optimal release times for flights into the airspace and the optimal speed adjustment while airborne taking into account the capacitated airspace. This is called the Air Traffic Flow Management Problem (TFMP). We address the complexity, showing that it is NP-hard. We build an integer programming formulation that is quite strong as some of the proposed inequalities are facet defining for the convex hull of solutions. For practical problems, the solutions of the LP relaxation of the TFMP are very often integral. In essence, we reduce the problem to efficiently solving large scale linear programming problems. Thus, the computation times are reasonably small for large scale, practical problems involving thousands of flights. Next, we address the problem of determining how to reroute aircraft in the airspace system when faced with dynamically changing weather conditions. This is called the Air Traffic Flow Management Rerouting Problem (TFMRP) We present an integrated mathematical programming approach for the TFMRP, which utilizes several methodologies, in order to minimize delay costs. In order to address the high dimensionality, we present an aggregate model, in which we formulate the TFMRP as a multicommodity, integer, dynamic network flow problem with certain side constraints. Using Lagrangian relaxation, we generate aggregate flows that are decomposed into a collection of flight paths using a randomized rounding heuristic. This collection of paths is used in a packing integer

  20. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1.125.000 elements in 2D and 128.000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  1. Advances in multiphase flow and related problems

    International Nuclear Information System (INIS)

    Papanicolaou, G.

    1986-01-01

    Proceedings of a workshop in multiphase flow held at Leesburg, Va. in June 1986 representing a cross-disciplinary approach to theoretical as well as computational problems in multiphase flow. Topics include composites, phase transitions, fluid-particle systems, and bubbly liquids

  2. Generalized Riemann problem for reactive flows

    International Nuclear Information System (INIS)

    Ben-Artzi, M.

    1989-01-01

    A generalized Riemann problem is introduced for the equations of reactive non-viscous compressible flow in one space dimension. Initial data are assumed to be linearly distributed on both sides of a jump discontinuity. The resolution of the singularity is studied and the first-order variation (in time) of flow variables is given in exact form. copyright 1989 Academic Press, Inc

  3. Estimating Probable Maximum Precipitation by Considering Combined Effect of Typhoon and Southwesterly Air Flow

    Directory of Open Access Journals (Sweden)

    Cheng-Chin Liu

    2016-01-01

    Full Text Available Typhoon Morakot hit southern Taiwan in 2009, bringing 48-hr of heavy rainfall [close to the Probable Maximum Precipitation (PMP)] to the Tsengwen Reservoir catchment. This extreme rainfall event resulted from the combined (co-movement) effect of two climate systems (i.e., typhoon and southwesterly air flow). Based on the traditional PMP estimation method (i.e., the storm transposition method, STM), two PMP estimation approaches, i.e., the Amplification Index (AI) and Independent System (IS) approaches, which consider the combined effect are proposed in this work. The AI approach assumes that the southwesterly air flow precipitation in a typhoon event could reach its maximum value. The IS approach assumes that the typhoon and southwesterly air flow are independent weather systems. Based on these assumptions, calculation procedures for the two approaches were constructed for a case study on the Tsengwen Reservoir catchment. The results show that the PMP estimates for 6- to 60-hr durations using the two approaches are approximately 30% larger than the PMP estimates using the traditional STM without considering the combined effect. This work is a pioneer PMP estimation method that considers the combined effect of a typhoon and southwesterly air flow. Further studies on this issue are essential and encouraged.

  4. Maximum Principles and Boundary Value Problems for First-Order Neutral Functional Differential Equations

    Directory of Open Access Journals (Sweden)

    Domoshnitsky Alexander

    2009-01-01

    Full Text Available We obtain maximum principles for a first-order neutral functional differential equation whose coefficients are linear continuous operators, certain of which are positive, acting in the space of continuous functions and the space of essentially bounded functions defined on the given interval. New tests on positivity of the Cauchy function and its derivative are proposed. Results on existence and uniqueness of solutions for various boundary value problems are obtained on the basis of the maximum principles.

  5. Flows in networks under fuzzy conditions

    CERN Document Server

    Bozhenyuk, Alexander Vitalievich; Kacprzyk, Janusz; Rozenberg, Igor Naymovich

    2017-01-01

    This book offers a comprehensive introduction to fuzzy methods for solving flow tasks in both transportation and networks. It analyzes the problems of minimum cost and maximum flow finding with fuzzy nonzero lower flow bounds, and describes solutions to minimum cost flow finding in a network with fuzzy arc capacities and transmission costs. After a concise introduction to flow theory and tasks, the book analyzes two important problems. The first is related to determining the maximum volume for cargo transportation in the presence of uncertain network parameters, such as environmental changes, measurement errors and repair work on the roads. These parameters are represented here as fuzzy triangular, trapezoidal numbers and intervals. The second problem concerns static and dynamic flow finding in networks under fuzzy conditions, and an effective method that takes into account the network’s transit parameters is presented here. All in all, the book provides readers with a practical reference guide to state-of-...

  6. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  7. Heuristics for no-wait flow shop scheduling problem

    Directory of Open Access Journals (Sweden)

    Kewal Krishan Nailwal

    2016-09-01

    Full Text Available No-wait flow shop scheduling refers to the continuous flow of jobs through different machines. A job, once started, must be processed through the machines continuously, without waiting. This situation occurs when there is no intermediate storage between the processing of jobs on two consecutive machines. The no-wait flow shop scheduling problem with the objective of minimizing makespan is NP-hard; heuristic algorithms are therefore the key to solving the problem optimally, or to approaching the optimal solution, in a simple manner. The paper describes two heuristics, a constructive one and an improvement heuristic obtained by modifying the constructive one, for sequencing n jobs through m machines in a flow shop under the no-wait constraint with the objective of minimizing makespan. The efficiency of the proposed heuristic algorithms is tested on 120 Taillard benchmark problems from the literature against the NEH heuristic under no-wait and the MNEH heuristic for the no-wait flow shop problem. The improvement heuristic outperforms all heuristics on the Taillard instances, improving the results of NEH by 27.85%, of MNEH by 22.56% and of the proposed constructive heuristic algorithm by 24.68%. To explain the computational process of the proposed algorithm, numerical illustrations are also given in the paper. Statistical tests of significance are performed in order to draw the conclusions.

  8. Clinical evaluation of a simple uroflowmeter for categorization of maximum urinary flow rate

    Directory of Open Access Journals (Sweden)

    Simon Pridgeon

    2007-01-01

    Full Text Available Objective: To evaluate the accuracy and diagnostic usefulness of a disposable flowmeter consisting of a plastic funnel with a spout divided into three chambers. Materials and Methods: Men with lower urinary tract symptoms (LUTS) voided sequentially into a standard flowmeter and the funnel device, recording maximum flow rate (Qmax) and voided volume (Vvoid). The device was precalibrated such that filling of the bottom, middle and top chambers categorized maximum input flows as <10, 10-15 and >15 ml/s respectively. Subjects who agreed to use the funnel device at home obtained readings of flow category and Vvoid twice daily for seven days. Results: A single office reading in 46 men using the device showed good agreement with standard measurement of Qmax for Vvoid > 150 ml (Kappa = 0.68). All 14 men whose void reached the top chamber had standard Qmax > 15 ml/s (PPV = 100%, NPV = 72%), whilst eight of 12 men whose void remained in the bottom chamber had standard Qmax < 10 ml/s (PPV = 70%, NPV = 94%). During multiple home use by 14 men the device showed moderate repeatability (Kappa = 0.58) and correctly categorized Qmax in comparison to standard measurement for 12 (87%) men. Conclusions: This study suggests that the device has sufficient accuracy and reliability for initial flow rate assessment in men with LUTS. The device can provide a single measurement or alternatively multiple home measurements to categorize men with Qmax < 15 ml/s.

  9. Scalable Newton-Krylov solver for very large power flow problems

    NARCIS (Netherlands)

    Idema, R.; Lahaye, D.J.P.; Vuik, C.; Van der Sluis, L.

    2010-01-01

    The power flow problem is generally solved by the Newton-Raphson method with a sparse direct solver for the linear system of equations in each iteration. While this works fine for small power flow problems, we will show that for very large problems the direct solver is very slow and we present

  10. Measurement of the temperature of density maximum of water solutions using a convective flow technique

    OpenAIRE

    Cawley, M.F.; McGlynn, D.; Mooney, P.A.

    2006-01-01

    A technique is described which yields an accurate measurement of the temperature of density maximum of fluids which exhibit such anomalous behaviour. The method relies on the detection of changes in convective flow in a rectangular cavity containing the test fluid. The normal single-cell convection which occurs in the presence of a horizontal temperature gradient changes to a double cell configuration in the vicinity of the density maximum, and this transition manifests itself in changes in th...

  11. Flow area optimization in point to area or area to point flows

    International Nuclear Information System (INIS)

    Ghodoossi, Lotfollah; Egrican, Niluefer

    2003-01-01

    This paper deals with the constructal theory of generation of shape and structure in flow systems connecting one point to a finite size area. The flow direction may be either from the point to the area or the area to the point. The formulation of the problem remains the same if the flow direction is reversed. Two models are used in optimization of the point to area or area to point flow problem: cost minimization and revenue maximization. The cost minimization model enables one to predict the shape of the optimized flow areas, but the geometric sizes of the flow areas are not predictable. That is, as an example, if the area of flow is a rectangle with a fixed area size, optimization of the point to area or area to point flow problem by using the cost minimization model will only predict the height/length ratio of the rectangle not the height and length itself. By using the revenue maximization model in optimization of the flow problems, all optimized geometric aspects of the interested flow areas will be derived as well. The aim of this paper is to optimize the point to area or area to point flow problems in various elemental flow area shapes and various structures of the flow system (various combinations of elemental flow areas) by using the revenue maximization model. The elemental flow area shapes used in this paper are either rectangular or triangular. The forms of the flow area structure, made up of an assembly of optimized elemental flow areas to obtain bigger flow areas, are rectangle-in-rectangle, rectangle-in-triangle, triangle-in-triangle and triangle-in-rectangle. The global maximum revenue, revenue collected per unit flow area and the shape and sizes of each flow area structure have been derived in optimized conditions. The results for each flow area structure have been compared with the results of the other structures to determine the structure that provides better performance. The conclusion is that the rectangle-in-triangle flow area structure

  12. Characteristics-based modelling of flow problems

    International Nuclear Information System (INIS)

    Saarinen, M.

    1994-02-01

    The method of characteristics is an exact way to proceed to the solution of hyperbolic partial differential equations. The numerical solutions, however, are obtained in the fixed computational grid where interpolations of values between the mesh points cause numerical errors. The Piecewise Linear Interpolation Method, PLIM, the utilization of which is based on the method of characteristics, has been developed to overcome these deficiencies. The thesis concentrates on the computer simulation of the two-phase flow. The main topics studied are: (1) the PLIM method has been applied to study the validity of the numerical scheme through solving various flow problems to achieve knowledge for the further development of the method, (2) the mathematical and physical validity and applicability of the two-phase flow equations based on the SFAV (Separation of the two-phase Flow According to Velocities) approach has been studied, and (3) The SFAV approach has been further developed for particular cases such as stratified horizontal two-phase flow. (63 refs., 4 figs.)

  13. Using a genetic algorithm to solve fluid-flow problems

    International Nuclear Information System (INIS)

    Pryor, R.J.

    1990-01-01

    Genetic algorithms are based on the mechanics of the natural selection and natural genetics processes. These algorithms are finding increasing application to a wide variety of engineering optimization and machine learning problems. In this paper, the authors demonstrate the use of a genetic algorithm to solve fluid flow problems. Specifically, the authors use the algorithm to solve the one-dimensional flow equations for a pipe

  14. Inverse feasibility problems of the inverse maximum flow problems

    Indian Academy of Sciences (India)

    Author Affiliations. Adrian Deaconu1 Eleonor Ciurea1. Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Braşov, Braşov, Iuliu Maniu st. 50, Romania ...

  15. Inverse feasibility problems of the inverse maximum flow problems

    Indian Academy of Sciences (India)

    2016-08-26

    Author Affiliations. Adrian Deaconu1 Eleonor Ciurea1. Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Braşov, Braşov, Iuliu Maniu st. 50, Romania ...

  16. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  17. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  18. Chronological trends in maximum and minimum water flows of the Teesta River, Bangladesh, and its implications

    Directory of Open Access Journals (Sweden)

    Md. Sanaul H. Mondal

    2017-03-01

    Full Text Available Bangladesh shares a common border with India in the west, north and east and with Myanmar in the southeast. These borders cut across 57 rivers that discharge through Bangladesh into the Bay of Bengal in the south. The upstream courses of these rivers traverse India, China, Nepal and Bhutan. Transboundary flows are an important source of water resources in Bangladesh. Among the 57 transboundary rivers, the Teesta is the fourth major river in Bangladesh after the Ganges, the Brahmaputra and the Meghna, of which Bangladesh occupies about 2071 km2. The Teesta River floodplain in Bangladesh accounts for 14% of the total cropped area of the country and supports 9.15 million people. The objective of this study was to investigate trends in both maximum and minimum water flow at the Kaunia and Dalia stations on the Teesta River, and the coping strategies developed by communities to adjust to uncertain flood situations. The flow characteristics of the Teesta were analysed by calculating monthly maximum and minimum water levels and discharges from 1985 to 2006. Discharge of the Teesta over the last 22 years has been decreasing. Extreme low-flow conditions were likely to occur more frequently after the implementation of the Gozoldoba Barrage by India. A very sharp decrease in peak flows was also observed, albeit with unexpectedly high discharges in 1988, 1989, 1991, 1997, 1999 and 2004, some occurring between April and October. The onrush of water causes frequent flash floods, whereas decreasing flow leaves the areas dependent on the Teesta vulnerable to droughts. Both of these extreme situations had a negative impact on the lives and livelihoods of people dependent on the Teesta. Over the years, people have developed several risk mitigation strategies to adjust to both natural and anthropogenic flood situations. This article proposes the concept of 'MAXIN' (maximum and minimum flows) for river water justice for riparian land.

  19. Reynolds analogy for the Rayleigh problem at various flow modes.

    Science.gov (United States)

    Abramov, A A; Butkovskii, A V

    2016-07-01

    The Reynolds analogy and the extended Reynolds analogy for the Rayleigh problem are considered. For a viscous incompressible fluid we derive the Reynolds analogy as a function of the Prandtl number and the Eckert number. We show that for any positive Eckert number, the Reynolds analogy as a function of the Prandtl number has a maximum. For a monatomic gas in the transitional flow regime, using the direct simulation Monte Carlo method, we investigate the extended Reynolds analogy, i.e., the relation between the shear stress and the energy flux transferred to the boundary surface, at different velocities and temperatures. We find that the extended Reynolds analogy for a rarefied monatomic gas flow with the temperature of the undisturbed gas equal to the surface temperature depends weakly on time and is close to 0.5. We show that at any fixed dimensionless time the extended Reynolds analogy depends on the plate velocity and temperature and the undisturbed gas temperature mainly via the Eckert number. For Eckert numbers of the order of unity or less we generalize the extended Reynolds analogy. The generalized Reynolds analogy depends mainly on dimensionless time for all considered Eckert numbers of the order of unity or less.

  20. Probabilistic properties of the date of maximum river flow, an approach based on circular statistics in lowland, highland and mountainous catchment

    Science.gov (United States)

    Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz

    2018-04-01

    Probabilistic properties of the dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions: a lowland, a highland and a mountainous catchment. Circular measures of location and dispersion were used on the long-term samples of dates of maxima. A mixture of von Mises distributions was assumed as the theoretical distribution function of the date of the winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion and the parameters were estimated by means of the Maximum Likelihood method. The goodness of fit was assessed using both the correlation between quantiles and versions of the Kuiper's and Watson's tests. Results show that the number of components varied between catchments and was different for seasonal and annual maxima. Differences between catchments in circular characteristics were explained using climatic factors such as precipitation and temperature. Further studies may include grouping catchments based on similarity between circular distribution functions and on the linkage between dates of maximum precipitation and maximum flow.
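
    A minimal sketch of the basic circular-statistics step used in such studies: dates of annual maximum flow are mapped to angles on a circle and summarized by the circular mean date and the mean resultant length. The day-of-year sample below is invented, and fitting the von Mises mixture is not reproduced here.

```python
import numpy as np

days_of_max = np.array([78, 85, 92, 70, 101, 88, 95, 80])   # hypothetical sample
angles = 2.0 * np.pi * days_of_max / 365.25                  # date -> angle

C, S = np.cos(angles).mean(), np.sin(angles).mean()
R = np.hypot(C, S)                          # mean resultant length (dispersion: 1 - R)
mean_angle = np.arctan2(S, C) % (2.0 * np.pi)
mean_day = mean_angle * 365.25 / (2.0 * np.pi)

print(f"circular mean date: day {mean_day:.1f}, R = {R:.2f}")
```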

  1. Isospectral Flows for the Inhomogeneous String Density Problem

    Science.gov (United States)

    Górski, Andrzej Z.; Szmigielski, Jacek

    2018-02-01

    We derive isospectral flows of the mass density in the string boundary value problem corresponding to general boundary conditions. In particular, we show that certain class of rational flows produces in a suitable limit all flows generated by polynomials in negative powers of the spectral parameter. We illustrate the theory with concrete examples of isospectral flows of discrete mass densities which we prove to be Hamiltonian and for which we provide explicit solutions of equations of motion in terms of Stieltjes continued fractions and Hankel determinants.

  2. A Novel Linear Programming Formulation of Maximum Lifetime Routing Problem in Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Cetin, Bilge Kartal; Prasad, Neeli R.; Prasad, Ramjee

    2011-01-01

    In wireless sensor networks, one of the key challenges is to achieve minimum energy consumption in order to maximize network lifetime. In fact, lifetime depends on many parameters: the topology of the sensor network, the data aggregation regime in the network, the channel access schemes, the routing protocols, and the energy model for transmission. In this paper, we tackle the routing challenge for maximum lifetime of the sensor network. We introduce a novel linear programming approach to the maximum lifetime routing problem. To the best of our knowledge, this is the first mathematical programming
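
    A hedged toy instance in the same spirit (not the paper's formulation): a small LP that maximizes the network lifetime T for two sensors routing data to a sink. The decision variables are the total delivered amounts f_1s, f_12, f_2s and T; the energy costs, data rates and budgets are invented numbers.

```python
import numpy as np
from scipy.optimize import linprog

# x = [f_1s, f_12, f_2s, T]
c = np.array([0.0, 0.0, 0.0, -1.0])            # maximize T (linprog minimizes)
A_eq = np.array([[1.0, 1.0, 0.0, -1.0],        # f_1s + f_12 = r1 * T   (r1 = 1)
                 [0.0, -1.0, 1.0, -1.0]])      # f_2s = f_12 + r2 * T   (r2 = 1)
b_eq = np.zeros(2)
A_ub = np.array([[2.0, 1.0, 0.0, 0.0],         # node 1 energy: 2*f_1s + 1*f_12 <= 100
                 [0.0, 0.0, 1.0, 0.0]])        # node 2 energy: 1*f_2s <= 100
b_ub = np.array([100.0, 100.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x, -res.fun)    # lifetime ~66.7, with traffic split over both routes
```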

  3. A finite element method for flow problems in blast loading

    International Nuclear Information System (INIS)

    Forestier, A.; Lepareux, M.

    1984-06-01

    This paper presents a numerical method which describes fast dynamic problems in flow transient situations, as occur in nuclear plants. A finite element formulation has been chosen; it is described by a preprocessor in the CASTEM system: the GIBI code. For these typical flow problems, an A.L.E. formulation of the physical equations is used. Some applications are presented: the well-known shock tube problem, the same problem in the 2D case, and a final application to hydrogen detonation.

  4. An extension of the maximum principle to multidimensional systems and its application in nuclear engineering problems

    International Nuclear Information System (INIS)

    Gilai, D.

    1976-01-01

    The Maximum Principle deals with optimization problems of systems which are governed by ordinary differential equations and which include constraints on the state and control variables. The development of nuclear engineering confronted the designers of reactors, shielding and other nuclear devices with many requests for optimization and savings, and it was straightforward to use the Maximum Principle for solving optimization problems in nuclear engineering; in fact, it has been widely used both in structural concept design and in dynamic control of nuclear systems. The main disadvantage of the Maximum Principle is that it is suitable only for systems which may be described by ordinary differential equations, e.g. one-dimensional systems. In the present work, starting from the variational approach, the original Maximum Principle is extended to multidimensional systems, and the principle which has been derived is of a more general form and is applicable to any system which can be defined by linear partial differential equations of any order. To check the applicability of the extended principle, two examples are solved: the first in nuclear shield design, where the goal is to construct a shield around a neutron-emitting source, using given materials, so that the total dose outside the shielding boundaries is minimized; the second in material distribution design in the core of a power reactor, so that the power peak is minimized. For the second problem, an iterative method was developed. (B.G.)

  5. Relationship between Maximum Principle and Dynamic Programming for Stochastic Recursive Optimal Control Problems and Applications

    Directory of Open Access Journals (Sweden)

    Jingtao Shi

    2013-01-01

    Full Text Available This paper is concerned with the relationship between maximum principle and dynamic programming for stochastic recursive optimal control problems. Under certain differentiability conditions, relations among the adjoint processes, the generalized Hamiltonian function, and the value function are given. A linear quadratic recursive utility portfolio optimization problem in the financial engineering is discussed as an explicitly illustrated example of the main result.

  6. Hardness and Approximation for Network Flow Interdiction

    OpenAIRE

    Chestnut, Stephen R.; Zenklusen, Rico

    2015-01-01

    In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...

  7. Problems of mixed convection flow regime map in a vertical cylinder

    International Nuclear Information System (INIS)

    Kang, Gyeong Uk; Chung, Bum Jin

    2012-01-01

    One of the technical issues raised by the development of the VHTR is mixed convection, the heat transfer regime that occurs when the driving forces of both forced and natural convection are of comparable orders of magnitude. In vertical internal flows, the buoyancy force acts upward only, but forced flows can move either upward or downward. Thus, there are two types of mixed convection flows, depending on the direction of the forced flow. When the directions of the forced flow and buoyancy are the same, the flow is a buoyancy-aided flow; when they are opposite, the flow is a buoyancy-opposed flow. In laminar flows, buoyancy-aided flow shows enhanced heat transfer compared to pure forced convection, and buoyancy-opposed flow shows impaired heat transfer due to the flow velocity being affected by the buoyancy forces. In turbulent flows, however, buoyancy-opposed flow shows enhanced heat transfer due to increased turbulence production, while buoyancy-aided flow shows impaired heat transfer at low buoyancy forces; as the buoyancy increases, the heat transfer recovers, and at further increases of the buoyancy forces the heat transfer is enhanced. It is of primary interest to classify which convection regime is dominant. The method most used to distinguish between forced, mixed and natural convection has been to refer to the classical flow regime map suggested by Metais and Eckert. During the course of fundamental literature studies on this topic, it was found that there are some problems with the flow regime map for a vertical cylinder. This paper discusses problems identified through reviewing the papers on which the classical flow regime map is based. We have tried to reproduce the flow regime map independently using the data obtained from the literature and compared it with the classical flow regime map; finally, the problems on this topic are discussed.

  8. Discrete maximum principle for FE solutions of the diffusion-reaction problem on prismatic meshes

    Czech Academy of Sciences Publication Activity Database

    Hannukainen, A.; Korotov, S.; Vejchodský, Tomáš

    2009-01-01

    Roč. 226, č. 2 (2009), s. 275-287 ISSN 0377-0427 R&D Projects: GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : diffusion-reaction problem * maximum principle * prismatic finite elements Subject RIV: BA - General Mathematics Impact factor: 1.292, year: 2009

  9. Comparison of the Spatiotemporal Variability of Temperature, Precipitation, and Maximum Daily Spring Flows in Two Watersheds in Quebec Characterized by Different Land Use

    Directory of Open Access Journals (Sweden)

    Ali A. Assani

    2016-01-01

    Full Text Available We compared the spatiotemporal variability of temperatures and precipitation with that of the magnitude and timing of maximum daily spring flows in the geographically adjacent L’Assomption River (agricultural) and Matawin River (forested) watersheds during the period from 1932 to 2013. With regard to spatial variability, fall, winter, and spring temperatures as well as total precipitation are higher in the agricultural watershed than in the forested one. The magnitude of maximum daily spring flows is also higher in the first watershed as compared with the second, owing to substantial runoff, given that the amount of snow that gives rise to these flows is not significantly different in the two watersheds. These flows occur early in the season in the agricultural watershed because of the relatively high temperatures. With regard to temporal variability, minimum temperatures increased over time in both watersheds. Maximum temperatures in the fall increased only in the agricultural watershed. The amount of spring rain increased over time in both watersheds, whereas total precipitation increased significantly in the agricultural watershed only. However, the amount of snow decreased in the forested watershed. The magnitude of maximum daily spring flows increased over time in the forested watershed.

  10. On the maximum-entropy method for kinetic equation of radiation, particle and gas

    International Nuclear Information System (INIS)

    El-Wakil, S.A.; Madkour, M.A.; Degheidy, A.R.; Machali, H.M.

    1995-01-01

    The maximum-entropy approach is used to treat some problems in radiative transfer and reactor physics, such as the escape probability and the emergent and transmitted intensities for a finite slab, as well as the emergent intensity for a semi-infinite medium. It is also employed to solve problems involving spherical geometry, such as luminosity (the total energy emitted by a sphere), neutron capture probability and the albedo problem. The technique is further employed in the kinetic theory of gases to calculate the Poiseuille flow and thermal creep of a rarefied gas between two plates. Numerical calculations are performed and compared with the published data. The comparisons demonstrate that the maximum-entropy results are in good agreement with the exact ones. (orig.)

  11. Network Model for The Problem of Integer Balancing of a Fourdimensional Matrix

    Directory of Open Access Journals (Sweden)

    A. V. Smirnov

    2016-01-01

    Full Text Available The problem of integer balancing of a four-dimensional matrix is studied. The elements of the inner part (all four indices are greater than zero) of the given real matrix are summed in each direction and in each two- and three-dimensional section of the matrix; the total sum is also found. These sums are placed into the elements where one or more indices are equal to zero (according to the summing directions). The problem is to find an integer matrix of the same structure, which can be produced from the initial one by replacing each element with the largest previous or the smallest following integer. At the same time, the element with four zero indices should be produced by standard rounding rules. In the article, the problem of finding the maximum multiple flow in a network of any natural multiplicity is also studied. There are arcs of three types: ordinary arcs, multiple arcs and multi-arcs. Each multiple arc and multi-arc is a union of linked arcs, which are adjusted with each other. The network construction rules are described. The definitions of a divisible network and some associated notions are stated. The basic principles are defined for reducing the integer balancing problem of a multidimensional matrix to the problem of finding the maximum flow in a divisible multiple network of the corresponding multiplicity. The rules for reducing the four-dimensional balancing problem to the maximum flow problem in a network of multiplicity 5 are stated. An algorithm for finding a maximum flow that meets the solvability conditions of the integer balancing problem is formulated for such a network.
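
    To illustrate only the underlying maximum-flow subproblem (the paper's divisible multiple network of multiplicity k is not reproduced here), a minimal sketch on a small ordinary network with invented capacities:

```python
import networkx as nx

G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)

# maximum flow from source "s" to sink "t"
flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print(flow_value)     # 5 for these capacities
print(flow_dict)
```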

  12. An electromagnetism-like method for the maximum set splitting problem

    Directory of Open Access Journals (Sweden)

    Kratica Jozef

    2013-01-01

    Full Text Available In this paper, an electromagnetism-like approach (EM) for solving the maximum set splitting problem (MSSP) is applied. A hybrid approach, consisting of movement based on attraction-repulsion mechanisms combined with the proposed scaling technique, directs EM to promising search regions. A fast implementation of the local search procedure additionally improves the efficiency of the overall EM system. The performance of the proposed EM approach is evaluated on two classes of instances from the literature: minimum hitting set and Steiner triple systems. The results show that, except in one case, EM reaches optimal solutions for minimum hitting set instances with up to 500 elements and 50000 subsets. It also reaches all optimal/best-known solutions for Steiner triple systems.

  13. Numerical solution of pipe flow problems for generalized Newtonian fluids

    International Nuclear Information System (INIS)

    Samuelsson, K.

    1993-01-01

    In this work we study the stationary laminar flow of incompressible generalized Newtonian fluids in a pipe with constant arbitrary cross-section. The resulting nonlinear boundary value problems can be written in a variational formulation and solved using finite elements and the augmented Lagrangian method. The solution of the boundary value problem is obtained by finding a saddle point of the augmented Lagrangian. In the algorithm the nonlinear part of the equations is treated locally and the solution is obtained by iteration between this nonlinear problem and a global linear problem. For the solution of the linear problem we use the SSOR preconditioned conjugate gradient method. The approximating problem is solved on a sequence of adaptively refined grids. A scheme for adjusting the value of the crucial penalization parameter of the augmented Lagrangian is proposed. Applications to pipe flow and a problem from the theory of capacities are given. (author) (34 refs.)

  14. Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling

    Science.gov (United States)

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the presented discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220

  15. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    Science.gov (United States)

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the presented discrete bat algorithm for the optimal permutation flow shop scheduling problem.
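
    A hedged sketch of the NEH constructive heuristic mentioned in the two records above, applied to a small permutation flow shop with the standard makespan recursion. The processing-time matrix is invented, and the bat-algorithm wrapper is not reproduced.

```python
def makespan(seq, p):
    # p[job][machine]: processing times; classic flow-shop completion recursion
    m = len(p[0])
    C = [0.0] * m
    for j in seq:
        C[0] += p[j][0]
        for k in range(1, m):
            C[k] = max(C[k], C[k - 1]) + p[j][k]
    return C[-1]

def neh(p):
    n = len(p)
    order = sorted(range(n), key=lambda j: -sum(p[j]))    # by total work, descending
    seq = [order[0]]
    for j in order[1:]:
        # try inserting job j at every position, keep the best partial sequence
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

p = [[3, 4, 6], [5, 2, 3], [2, 6, 4], [4, 3, 5]]   # 4 jobs x 3 machines (made up)
print(neh(p))
```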

  16. A new mathematical model for single machine batch scheduling problem for minimizing maximum lateness with deteriorating jobs

    Directory of Open Access Journals (Sweden)

    Ahmad Zeraatkar Moghaddam

    2012-01-01

    Full Text Available This paper presents a mathematical model for the problem of minimizing the maximum lateness on a single machine when deteriorating jobs are delivered to each customer in various-size batches. In reality, this issue may arise within a supply chain in which delivering goods to customers entails cost; under such circumstances, keeping completed jobs to deliver in batches may reduce delivery costs. In the batch scheduling literature, minimizing the maximum lateness is known to be NP-hard; therefore the present problem, which additionally aims at minimizing the delivery costs, remains NP-hard. In order to solve the proposed model, a simulated annealing metaheuristic is used, whose parameters are calibrated by the Taguchi approach, and the results are compared to the global optimal values generated by the Lingo 10 software. Furthermore, in order to check the efficiency of the proposed method on larger problem instances, a lower bound is generated. The results are also analyzed based on the effective factors of the problem. A computational study validates the efficiency and accuracy of the presented model.

  17. Marriage in Honey Bees Optimization Algorithm for Flow-shop Problems

    Directory of Open Access Journals (Sweden)

    Pedro PALOMINOS

    2012-01-01

    Full Text Available The objective of this work is to make a comparative study of the Marriage in Honey Bees Optimization (MBO) metaheuristic for flow-shop scheduling problems. This paper is focused on the design possibilities of the mating flight space shared by queens and drones. The proposed algorithm uses a 2-dimensional torus as an explicit mating space instead of the simulated annealing one in the original MBO. After testing different alternatives with benchmark datasets, the results show that the modeled and implemented metaheuristic is effective in solving flow-shop type problems, providing a new approach to solve other NP-hard problems.

  18. Flow Control in Wells Turbines for Harnessing Maximum Wave Power

    Science.gov (United States)

    Garrido, Aitor J.; Garrido, Izaskun; Otaola, Erlantz; Maseda, Javier

    2018-01-01

    Oceans, and particularly waves, offer a huge potential for energy harnessing all over the world. Nevertheless, the performance of current energy converters does not yet allow us to use the wave energy efficiently. However, new control techniques can improve the efficiency of energy converters. In this sense, the plant sensors play a key role within the control scheme, as necessary tools for parameter measuring and monitoring that are then used as control input variables to the feedback loop. Therefore, the aim of this work is to manage the rotational speed control loop in order to optimize the output power. With the help of outward looking sensors, a Maximum Power Point Tracking (MPPT) technique is employed to maximize the system efficiency. Then, the control decisions are based on the pressure drop measured by pressure sensors located along the turbine. A complete wave-to-wire model is developed so as to validate the performance of the proposed control method. For this purpose, a novel sensor-based flow controller is implemented based on the different measured signals. Thus, the performance of the proposed controller has been analyzed and compared with a case of uncontrolled plant. The simulations demonstrate that the flow control-based MPPT strategy is able to increase the output power, and they confirm both the viability and goodness. PMID:29439408

  19. Flow Control in Wells Turbines for Harnessing Maximum Wave Power.

    Science.gov (United States)

    Lekube, Jon; Garrido, Aitor J; Garrido, Izaskun; Otaola, Erlantz; Maseda, Javier

    2018-02-10

    Oceans, and particularly waves, offer a huge potential for energy harnessing all over the world. Nevertheless, the performance of current energy converters does not yet allow us to use the wave energy efficiently. However, new control techniques can improve the efficiency of energy converters. In this sense, the plant sensors play a key role within the control scheme, as necessary tools for parameter measuring and monitoring that are then used as control input variables to the feedback loop. Therefore, the aim of this work is to manage the rotational speed control loop in order to optimize the output power. With the help of outward looking sensors, a Maximum Power Point Tracking (MPPT) technique is employed to maximize the system efficiency. Then, the control decisions are based on the pressure drop measured by pressure sensors located along the turbine. A complete wave-to-wire model is developed so as to validate the performance of the proposed control method. For this purpose, a novel sensor-based flow controller is implemented based on the different measured signals. Thus, the performance of the proposed controller has been analyzed and compared with a case of uncontrolled plant. The simulations demonstrate that the flow control-based MPPT strategy is able to increase the output power, and they confirm both the viability and goodness.

  20. On the use Pontryagin's maximum principle in the reactor profiling problem

    International Nuclear Information System (INIS)

    Silko, P.P.

    1976-01-01

    The optimal given-power-profile approximation problem in nuclear reactors is posed as one of the physical profiling problems in terms of the theory of optimal processes. It is necessary to distribute the concentration of the profiling substance in a nuclear reactor in such a way that the power profile obtained in the core is as near as possible to the given profile. It is suggested that the original system of differential equations describing the behaviour of neutrons in a reactor, together with some applied requirements, may be written in the form of ordinary first-order differential equations. The integral quadratic criterion evaluating the deviation of the power profile obtained in the reactor from the given one is used as the objective function. The initial state is given, and the control aim is defined as the necessity of transferring the control object from the initial state to a given set of finite states known as the target set. The class of permissible controls consists of measurable functions in the given range. In solving the formulated problem, Pontryagin's maximum principle is used. As an example, the power profile flattening problem is considered, for which a program in Fortran-4 for the 'Minsk-32' computer has been written. The optimal reactor parameters calculated by this program at various boundary values of the control are presented. It is noted that the type of optimal reactor configuration depends on the boundary values of the control.

  1. Solving Minimum Cost Multi-Commodity Network Flow Problem ...

    African Journals Online (AJOL)

    ADOWIE PERE

    2018-03-23

  2. Comparing branch-and-price algorithms for the Multi-Commodity k-splittable Maximum Flow Problem

    DEFF Research Database (Denmark)

    Gamst, Mette; Petersen, Bjørn

    2012-01-01

    -Protocol Label Switching. The problem has previously been solved to optimality through branch-and-price. In this paper we propose two exact solution methods both based on an alternative decomposition. The two methods differ in their branching strategy. The first method, which branches on forbidden edge sequences...

  3. Wood flow problems in the Swedish forestry

    Energy Technology Data Exchange (ETDEWEB)

    Carlsson, Dick [Forestry Research Inst. of Sweden, Uppsala (Sweden); Roennqvist, M. [Linkoeping Univ. (Sweden). Dept. of Mathematics

    1998-12-31

    In this paper we give an overview of the wood flow in Sweden, including a description of its organization and planning. Based on that, we describe a number of applications or problem areas in the wood-flow chain that are currently considered by the Swedish forest companies to be important and to have potential for improving overall operations. We have focused on applications involving short-term or operative planning. We do not give any final results, as much of the development is currently ongoing or is still in a planning phase. Instead we describe what kind of models and decision support systems could be applied in order to improve co-operation within, and integration of, the wood-flow chain. 13 refs, 20 figs, 1 tab

  4. Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems

    International Nuclear Information System (INIS)

    BAER, THOMAS A.; SACKINGER, PHILIP A.; SUBIA, SAMUEL R.

    1999-01-01

    Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a 'pseudo-solid' mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Other issues discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to equally distribute computational work on an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations are demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three-dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speedups for fixed problem size, a class of problems of immediate practical importance.

  5. Effect of flow conditions on flow accelerated corrosion in pipe bends

    International Nuclear Information System (INIS)

    Mazhar, H.; Ching, C.Y.

    2015-01-01

    Flow Accelerated Corrosion (FAC) in piping systems is a safety and reliability problem in the nuclear industry. In this study, the pipe wall thinning rates and the development of surface roughness in pipe bends are compared for single phase and two phase annular flow conditions. The FAC rates were measured using the dissolution of test sections cast from gypsum in water at a Schmidt number of 1280. The change in location and level of maximum FAC under single phase and two phase flow conditions is examined. The comparison of the relative roughness indicates a stronger effect of surface roughness in single phase flow than in two phase flow. (author)

  6. Literature Review on the Hybrid Flow Shop Scheduling Problem with Unrelated Parallel Machines

    Directory of Open Access Journals (Sweden)

    Eliana Marcela Peña Tibaduiza

    2017-01-01

    Full Text Available Context: The hybrid flow shop problem with unrelated parallel machines has been studied less in academia than the hybrid flow shop with identical processors. For this reason, there are few reports about applications of this problem in industry. Method: A literature review of the state of the art on the flow-shop scheduling problem was conducted by collecting and analyzing academic papers from several scientific databases. To this end, a search query was constructed using keywords defining the problem and checking the inclusion of unrelated parallel machines in the problem definition; as a result, 50 papers were finally selected for this study. Results: A classification of the problem according to the characteristics of the production system is presented, together with the solution methods, constraints and objective functions commonly used. Conclusions: An increasing trend is observed in studies of flow shops with multiple stages, but few are based on industrial case studies.

  7. Numerical optimization using flow equations

    Science.gov (United States)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
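
    A minimal sketch of the general idea only (not the authors' scheme): extrema of an objective appear as fixed points of a flow equation, and a homotopy parameter s blends a simple convex start into the target function while the gradient flow is integrated with explicit Euler steps. All functions and step sizes below are illustrative choices.

```python
import numpy as np

def grad_target(x):
    # gradient of a made-up non-convex objective f(x) = x^4 - 3 x^2 + x
    return 4 * x**3 - 6 * x + 1

def grad_start(x):
    # gradient of the simple convex start g(x) = x^2
    return 2 * x

x, dt = 2.0, 0.01
for s in np.linspace(0.0, 1.0, 200):       # homotopy from g toward f
    for _ in range(50):                     # relax toward a fixed point at this s
        x -= dt * ((1 - s) * grad_start(x) + s * grad_target(x))

print(x)   # an extremum of the target reached by following the flow
```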

  8. AEROSOL NUCLEATION AND GROWTH DURING LAMINAR TUBE FLOW: MAXIMUM SATURATIONS AND NUCLEATION RATES. (R827354C008)

    Science.gov (United States)

    An approximate method of estimating the maximum saturation, the nucleation rate, and the total number nucleated per second during the laminar flow of a hot vapour–gas mixture along a tube with cold walls is described. The basis of the approach is that the temperature an...

  9. Dual plane problems for creeping flow of power-law incompressible medium

    Directory of Open Access Journals (Sweden)

    Dmitriy S. Petukhov

    2016-09-01

    Full Text Available In this paper, we consider the class of solutions for a creeping plane flow of an incompressible medium with power-law rheology which are written in the form of the product of an arbitrary power of the radial coordinate and an arbitrary function of the angular coordinate of the polar coordinate system covering the plane. This class of solutions represents the asymptotics of fields in the vicinity of singular points in the domain occupied by the examined medium. We have ascertained the duality of two problems for a plane with a wedge-shaped notch, at whose boundaries, in one of the problems, the components of the surface force vector vanish, while in the other the vanishing components are those of the velocity vector. We have investigated the asymptotics and eigensolutions of the dual nonlinear eigenvalue problems in relation to the rheological exponent and the opening angle of the notch for the branch associated with the eigenvalue of the Hutchinson–Rice–Rosengren problem, known from the problem of stress distribution over a notched plane for a power-law medium. In the context of the dual problem we have determined the velocity distribution in the flow of a power-law medium at the vertex of a rigid wedge. We have also found another two eigenvalues, one of which was determined by V. V. Sokolovsky for the problem of power-law fluid flow in a convergent channel.

  10. Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2004-01-01

    )-data under investigation. The flow physic properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis are compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator...... has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce...... for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates are 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and estimating low velocity levels was confirmed...
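
    A hedged sketch of the cross-correlation step that such estimators build on: the lag at the correlation peak between two consecutive (synthetic) ultrasound lines is converted to an axial velocity. The sampling rate, pulse repetition frequency and sound speed are illustrative values, not those of the paper.

```python
import numpy as np

fs, prf, c = 40e6, 5e3, 1540.0           # sample rate [Hz], PRF [Hz], sound speed [m/s]
true_shift = 3                           # sample shift between emissions (synthetic)

rng = np.random.default_rng(0)
line1 = rng.standard_normal(256)
line2 = np.roll(line1, true_shift) + 0.05 * rng.standard_normal(256)

xc = np.correlate(line2, line1, mode="full")
lag = np.argmax(xc) - (len(line1) - 1)   # lag (in samples) at the correlation peak

# axial displacement per emission -> velocity (factor 2 for pulse-echo travel)
velocity = lag / fs * c / 2.0 * prf
print(lag, velocity)
```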

  11. Optimal Control of Polymer Flooding Based on Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

    Full Text Available Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, in which the performance index is the maximum of the profit, the governing equations are the fluid flow equations of polymer flooding, and the inequality constraint is the polymer concentration limitation. To cope with the optimal control problem (OCP) of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin’s weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.

  12. A Hybrid Metaheuristic Approach for Minimizing the Total Flow Time in A Flow Shop Sequence Dependent Group Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Antonio Costa

    2014-07-01

    Full Text Available Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure integrating features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of completion times of all jobs. A well-known benchmark of test cases, entailing problems with two, three, and six machines, is employed both for tuning the relevant parameters of the developed procedure and for assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.

  13. Collapsing of multigroup cross sections in optimization problems solved by means of the maximum principle of Pontryagin

    International Nuclear Information System (INIS)

    Anton, V.

    1979-05-01

    A new formulation of multigroup cross-section collapsing, based on the conservation of the point or zone value of the Hamiltonian, is presented. This approach is appropriate for optimization problems solved by means of the maximum principle of Pontryagin. (author)

  14. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum is taken not over all matrices, but only over matrices of rank one. Examples are given.

  15. Contribution of Fuzzy Minimal Cost Flow Problem by Possibility Programming

    OpenAIRE

    S. Fanati Rashidi; A. A. Noora

    2010-01-01

    Using the concept of possibility proposed by Zadeh, Luhandjula ([4,8]) and Buckley ([1]) proposed possibility programming. The formulation of Buckley results in nonlinear programming problems. Negi [6] reformulated the approach of Buckley by the use of trapezoidal fuzzy numbers and reduced the problem to a fuzzy linear programming problem. Shih and Lee ([7]) used the Negi approach to solve a minimum cost flow problem with fuzzy costs and upper and lower bounds. ...
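
    A hedged illustration of the crisp subproblem only: a minimum cost flow on a small network, with the fuzzy costs and bounds of the possibility-programming model replaced by invented crisp numbers.

```python
import networkx as nx

G = nx.DiGraph()
G.add_node("s", demand=-4)             # supply of 4 units
G.add_node("t", demand=4)              # demand of 4 units
G.add_edge("s", "a", weight=2, capacity=3)
G.add_edge("s", "b", weight=4, capacity=3)
G.add_edge("a", "t", weight=1, capacity=2)
G.add_edge("a", "b", weight=1, capacity=2)
G.add_edge("b", "t", weight=2, capacity=4)

flow = nx.min_cost_flow(G)             # satisfies demands at minimum total cost
print(nx.cost_of_flow(G, flow), flow)
```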

  16. A point implicit time integration technique for slow transient flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Kadioglu, Samet Y., E-mail: kadioglu@yildiz.edu.tr [Department of Mathematical Engineering, Yildiz Technical University, 34210 Davutpasa-Esenler, Istanbul (Turkey); Berry, Ray A., E-mail: ray.berry@inl.gov [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States); Martineau, Richard C. [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States)

    2015-05-15

    Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (that can be located at cell centers, cell edges, or cell nodes) implicitly and the rest of the information related to same or other variables are handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function(s) evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very

  17. A point implicit time integration technique for slow transient flow problems

    International Nuclear Information System (INIS)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-01-01

    Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (that can be located at cell centers, cell edges, or cell nodes) implicitly and the rest of the information related to same or other variables are handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function(s) evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very
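
    A hedged scalar analogue of the point implicit update described above (not the paper's scheme): the stiff term in du/dt = -lam*u + f(t) is treated implicitly, but the update stays local and algebraic, so no global implicit iteration is needed and very large time steps remain stable.

```python
import math

lam = 1.0e4                     # stiff coefficient (illustrative)
f = lambda t: math.sin(t)       # slow forcing

u, t, dt = 0.0, 0.0, 0.5        # dt far above the explicit stability limit ~2/lam
for _ in range(20):
    t += dt
    u = (u + dt * f(t)) / (1.0 + dt * lam)   # point implicit step, purely local

print(t, u, f(t) / lam)         # u tracks the quasi-steady value f(t)/lam
```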

  18. Flow-shop scheduling problem under uncertainties: Review and trends

    Directory of Open Access Journals (Sweden)

    Eliana María González-Neira

    2017-03-01

    Full Text Available Among the different tasks in production logistics, job scheduling is one of the most important at the operational decision-making level to enable organizations to achieve competitiveness. Scheduling consists in the allocation of limited resources to activities over time in order to achieve one or more optimization objectives. Flow-shop (FS) scheduling problems encompass the sequencing processes in environments in which the activities or operations are performed in a serial flow. This type of configuration includes assembly lines and the chemical, electronic, food, and metallurgical industries, among others. Scheduling has been mostly investigated for the deterministic cases, in which all parameters are known in advance and do not vary over time. Nevertheless, in real-world situations, events are frequently subject to uncertainties that can affect the decision-making process. Thus, it is important to study scheduling and sequencing activities under uncertainties since they can cause infeasibilities and disturbances. The purpose of this paper is to provide a general overview of the FS scheduling problem under uncertainties and its role in production logistics and to draw up opportunities for further research. To this end, 100 papers about FS and flexible flow-shop scheduling problems published from 2001 to October 2016 were analyzed and classified. Trends in the reviewed literature are presented and finally some research opportunities in the field are proposed.

  19. Flow-shop scheduling problem under uncertainties: Review and trends

    OpenAIRE

    Eliana María González-Neira; Jairo R. Montoya-Torres; David Barrera

    2017-01-01

    Among the different tasks in production logistics, job scheduling is one of the most important at the operational decision-making level to enable organizations to achieve competitiveness. Scheduling consists in the allocation of limited resources to activities over time in order to achieve one or more optimization objectives. Flow-shop (FS) scheduling problems encompass the sequencing processes in environments in which the activities or operations are performed in a serial flow. This type of co...

  20. Exact partial solution to the compressible flow problems of jet formation and penetration in plane, steady flow

    International Nuclear Information System (INIS)

    Karpp, R.R.

    1984-01-01

    The partial solution of the problem of the symmetric impact of two compressible fluid streams is derived. The plane two-dimensional flow is assumed to be steady, and the inviscid compressible fluid is of the Chaplygin (tangent gas) type. The equations governing this flow are transformed to the hodograph plane where an exact, closed-form solution for the stream function is obtained. The distribution of fluid properties along the plane of symmetry and the shape of free surface streamlines are determined by transformation back to the physical plane. The problem of a compressible fluid jet penetrating an infinite target of similar material is also solved by considering a limiting case of this solution. Differences between compressible and incompressible flows of the type considered are illustrated

  1. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  2. Strong Maximum Principle for Multi-Term Time-Fractional Diffusion Equations and its Application to an Inverse Source Problem

    OpenAIRE

    Liu, Yikan

    2015-01-01

    In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...

  3. Research on a network maximum flow algorithm based on the cascade level graph

    Institute of Scientific and Technical Information of China (English)

    潘荷新; 伊崇信; 李满

    2011-01-01

    An algorithm is presented that indirectly determines the maximum flow of a network by constructing a cascade level graph. For a given network N=(G,s,t,C) with n vertices and e arcs, the algorithm quickly computes the maximum network flow value through N, together with the corresponding network flow, in O(n²) time.
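
    The record above describes a level-graph-based maximum flow method. For orientation only (this is the standard Dinic approach, not the authors' O(n²) cascade-level-graph algorithm), a minimal Python sketch that likewise alternates between building a BFS level graph on the residual network and pushing blocking flow along it:

```python
from collections import deque

def max_flow(n, arcs, s, t):
    """Dinic-style max flow: alternate BFS level-graph construction with DFS blocking flows.
    arcs: iterable of (u, v, capacity) with nodes numbered 0..n-1."""
    to, cap, head = [], [], [[] for _ in range(n)]
    for u, v, c in arcs:                         # store each arc with a paired reverse arc
        head[u].append(len(to)); to.append(v); cap.append(c)
        head[v].append(len(to)); to.append(u); cap.append(0)

    def bfs():                                   # build the level graph on the residual network
        level = [-1] * n
        level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for e in head[u]:
                if cap[e] > 0 and level[to[e]] < 0:
                    level[to[e]] = level[u] + 1
                    q.append(to[e])
        return level if level[t] >= 0 else None

    def dfs(u, pushed, level, it):               # push flow along level-increasing paths only
        if u == t:
            return pushed
        while it[u] < len(head[u]):
            e = head[u][it[u]]
            v = to[e]
            if cap[e] > 0 and level[v] == level[u] + 1:
                d = dfs(v, min(pushed, cap[e]), level, it)
                if d > 0:
                    cap[e] -= d
                    cap[e ^ 1] += d              # paired reverse arc
                    return d
            it[u] += 1
        return 0

    flow = 0
    while True:
        level = bfs()
        if level is None:
            return flow
        it = [0] * n
        while True:
            f = dfs(s, float("inf"), level, it)
            if f == 0:
                break
            flow += f

# Tiny invented example: maximum flow from node 0 to node 3.
print(max_flow(4, [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)], 0, 3))  # -> 5
```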

  4. Tracking the maximum efficiency point for the FC system based on extremum seeking scheme to control the air flow

    International Nuclear Information System (INIS)

    Bizon, Nicu

    2014-01-01

    Highlights: • The Maximum Efficiency Point (MEP) is tracked based on the air flow rate. • The proposed Extremum Seeking (ES) control assures high performance. • About 10 kW/s search speed and 99.99% stationary accuracy can be obtained. • The energy efficiency increases by 3–12%, according to the power losses. • The control strategy is robust, based on the proposed self-optimizing ES scheme. - Abstract: An advanced control of the air compressor for the Proton Exchange Membrane Fuel Cell (PEMFC) system is proposed in this paper based on an Extremum Seeking (ES) control scheme. The FC net power depends mainly on the air and hydrogen flow rates and pressures, and on heat and water management. This paper proposes to compute the optimal value of the air flow rate based on the advanced ES control scheme in order to maximize the FC net power. In this way, the Maximum Efficiency Point (MEP) is tracked in real time, with about 10 kW/s search speed and a stationary accuracy of 0.99. Thus, energy efficiency will be close to the maximum value that can be obtained for a given PEMFC stack and compressor group under dynamic load. It is shown that MEP tracking allows the FC net power to be increased by 3–12%, depending on the percentage of the FC power supplied to the compressor and the level of the load power. Simulations show that the performances mentioned above are effective.
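
    As a hedged illustration of the perturbation-based extremum seeking idea (a generic textbook-style loop, not Bizon's actual PEMFC control scheme), the sketch below climbs a toy net-power curve by dithering the air flow rate, high-pass filtering the response, and integrating the demodulated signal; the net_power function and every gain are invented for illustration.

```python
import math

def net_power(air_flow):
    """Toy, made-up net-power curve with a single maximum at air_flow = 5.0."""
    return 10.0 - (air_flow - 5.0) ** 2

def extremum_seeking(steps=6000, dt=0.01, a=0.2, omega=20.0, k=1.0, hp=2.0):
    """Perturbation-based ES: dither the input, high-pass and demodulate the output,
    then integrate the correlation so the estimate climbs toward the maximum."""
    u_hat = 2.0                        # initial estimate of the optimal air flow rate
    y_lp = net_power(u_hat)            # low-pass state used as a simple washout filter
    for i in range(steps):
        t = i * dt
        y = net_power(u_hat + a * math.sin(omega * t))       # measured dithered response
        y_lp += dt * hp * (y - y_lp)                          # (y - y_lp) is the high-passed signal
        u_hat += dt * k * (y - y_lp) * math.sin(omega * t)    # demodulation + integration
    return u_hat

print(round(extremum_seeking(), 2))    # approaches the toy optimum near 5.0
```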

  5. Topology optimization of unsteady flow problems using the lattice Boltzmann method

    DEFF Research Database (Denmark)

    Nørgaard, Sebastian Arlund; Sigmund, Ole; Lazarov, Boyan Stefanov

    2016-01-01

    This article demonstrates and discusses topology optimization for unsteady incompressible fluid flows. The fluid flows are simulated using the lattice Boltzmann method, and a partial bounceback model is implemented to model the transition between fluid and solid phases in the optimization problems...

  6. Progress with multigrid schemes for hypersonic flow problems

    International Nuclear Information System (INIS)

    Radespiel, R.; Swanson, R.C.

    1995-01-01

    Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm employs upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6 and Mach numbers up to 25. 32 refs., 31 figs., 1 tab

  7. Topology optimization of 3D Stokes flow problems

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Sigmund, Ole; Bendsøe, Martin P.

    fluid mechanics. In future practice a muTAS could be used by doctors, engineers etc. as a hand held device with short reaction time that provides on-site analysis of a flowing substance such as blood, polluted water or similar. Borrvall and Petersson [2] paved the road for using the topology...... particular at micro scales since they are easily manufacturable and maintenance free. Here we consider topology optimization of 3D Stokes flow problems which is a reasonable fluid model to use at small scales. The presentation elaborates on effects caused by 3D fluid modelling on the design. Numerical...

  8. Molecular Sticker Model Stimulation on Silicon for a Maximum Clique Problem

    Directory of Open Access Journals (Sweden)

    Jianguo Ning

    2015-06-01

    Full Text Available Molecular computers (also called DNA computers, as an alternative to traditional electronic computers, are smaller in size but more energy efficient, and have massive parallel processing capacity. However, DNA computers may not outperform electronic computers owing to their higher error rates and some limitations of the biological laboratory. The stickers model, as a typical DNA-based computer, is computationally complete and universal, and can be viewed as a bit-vertically operating machine. This makes it attractive for silicon implementation. Inspired by the information processing method on the stickers computer, we propose a novel parallel computing model called DEM (DNA Electronic Computing Model on System-on-a-Programmable-Chip (SOPC architecture. Except for the significant difference in the computing medium—transistor chips rather than bio-molecules—the DEM works similarly to DNA computers in immense parallel information processing. Additionally, a plasma display panel (PDP is used to show the change of solutions, and helps us directly see the distribution of assignments. The feasibility of the DEM is tested by applying it to compute a maximum clique problem (MCP with eight vertices. Owing to the limited computing sources on SOPC architecture, the DEM could solve moderate-size problems in polynomial time.
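
    For reference, the underlying maximum clique problem (independent of the DNA/SOPC implementation described above) can be solved for tiny graphs by exhaustive search; a minimal sketch for an assumed 8-vertex instance, with the edge list invented for illustration:

```python
from itertools import combinations

def max_clique(n, edges):
    """Exhaustive search for a maximum clique; fine for tiny graphs such as n = 8."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for size in range(n, 0, -1):                    # try the largest sizes first
        for nodes in combinations(range(n), size):
            if all(v in adj[u] for u, v in combinations(nodes, 2)):
                return list(nodes)                  # first clique of this size is maximum
    return []

# Hypothetical 8-vertex instance (edge list invented for illustration).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (4, 6)]
print(max_clique(8, edges))   # e.g. [0, 1, 2] -- a clique of size 3
```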

  9. Finite element methods for incompressible flow problems

    CERN Document Server

    John, Volker

    2016-01-01

    This book explores finite element methods for incompressible flow problems: Stokes equations, stationary Navier-Stokes equations, and time-dependent Navier-Stokes equations. It focuses on numerical analysis, but also discusses the practical use of these methods and includes numerical illustrations. It also provides a comprehensive overview of analytical results for turbulence models. The proofs are presented step by step, allowing readers to more easily understand the analytical techniques.

  10. Field-aligned flows of H+ and He+ in the mid-latitude topside ionosphere at solar maximum

    International Nuclear Information System (INIS)

    Bailey, G.J.; Sellek, R.

    1992-01-01

    A time-dependent mathematical model of the Earth's ionosphere and plasmasphere has been used to investigate the field-aligned flows of H+ and He+ in the topside ionosphere at L = 3 during solar maximum. When the flux-tube content is low there are upward flows of H+ and He+ during daytime in both the winter and summer topside ionospheres. During winter night-time the directions of flow are, in general, downwards for He+, because of the night-time decrease in He+ scale height, and upwards for H+, because of the replenishment needs of the flux tube. In the winter topside ionosphere, during the later stages of flux-tube replenishment, H+ generally flows downwards during both day and night as a result of the greater plasma pressure in the summer hemisphere, whilst He+ flows upwards during the day and downwards at night. In the summer topside ionosphere H+ flows upward to replace the H+ lost from the plasmasphere to the winter topside ionosphere, whilst the winter helium bulge leads to flows of He+ that are in the direction winter hemisphere to summer hemisphere. When the flux-tube content is low, counterstreaming of H+ and He+, with H+ flowing upwards and He+ downwards, occurs for most of the day above about 5000 km altitude in the summer hemisphere. There are occurrences of this type of counterstreaming in both the summer and winter hemispheres during the night. When the flux-tube content is high, counterstreaming of H+ and He+ occurs less frequently and over smaller regions of the flux tube. There are regions in both hemispheres where H+ flows downwards whilst He+ flows upwards. (Author)

  11. A review of scheduling problem and resolution methods in flexible flow shop

    Directory of Open Access Journals (Sweden)

    Tian-Soon Lee

    2019-01-01

    Full Text Available The Flexible flow shop (FFS) is defined as a multi-stage flow shop with multiple parallel machines. The FFS scheduling problem is a complex combinatorial problem which has been intensively studied in many real-world industries. This review paper gives a comprehensive review of the FFS scheduling problem and guides the reader through the different environmental assumptions, system constraints and objective functions relevant for future research work. The published papers are classified into two categories. The first is the FFS system characteristics and constraints, including the problem differences and limitations defined by different studies. Second, the scheduling performance evaluations are elaborated and categorized into time-related, job-related, and multi-objective measures. In addition, the resolution approaches that have been used to solve FFS scheduling problems are discussed. This paper gives a comprehensive guide for the reader with respect to future research work on the FFS scheduling problem.

  12. A trust region interior point algorithm for optimal power flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Wang Min [Hefei University of Technology (China). Dept. of Electrical Engineering and Automation; Liu Shengsong [Jiangsu Electric Power Dispatching and Telecommunication Company (China). Dept. of Automation

    2005-05-01

    This paper presents a new algorithm that uses the trust region interior point method to solve nonlinear optimal power flow (OPF) problems. The OPF problem is solved by a primal/dual interior point method with multiple centrality corrections as a sequence of linearized trust region sub-problems. It is the trust region that controls the linear step size and ensures the validity of the linear model. The convergence of the algorithm is improved through the modification of the trust region sub-problem. Numerical results of standard IEEE systems and two realistic networks ranging in size from 14 to 662 buses are presented. The computational results show that the proposed algorithm is very effective for optimal power flow applications and compares favorably with the successive linear programming (SLP) method. Comparison with the predictor/corrector primal/dual interior point (PCPDIP) method is also made to demonstrate the superiority of the multiple centrality corrections technique. (author)

  13. High order methods for incompressible fluid flow: Application to moving boundary problems

    Energy Technology Data Exchange (ETDEWEB)

    Bjoentegaard, Tormod

    2008-04-15

    Fluid flows with moving boundaries are encountered in a large number of real-life situations, two such types being fluid-structure interaction and free-surface flows. Fluid-structure phenomena are, for instance, apparent in many hydrodynamic applications (wave effects on offshore structures, sloshing, and fluid-induced vibrations) and in aeroelasticity (flutter and dynamic response). Free-surface flows can be considered as a special case of a fluid-fluid interaction where one of the fluids is practically inviscid, such as air. This type of flow arises in many disciplines such as marine hydrodynamics, chemical engineering, material processing, and geophysics. The driving forces for free-surface flows may be of large scale, such as gravity or inertial forces, or forces due to surface tension, which operate on a much smaller scale. Free-surface flows with surface tension as a driving mechanism include the flow of bubbles and droplets and the evolution of capillary waves. In this work we consider incompressible fluid flow, which is governed by the incompressible Navier-Stokes equations. There are several challenges when simulating moving boundary problems numerically, and these include - Spatial discretization - Temporal discretization - Imposition of boundary conditions - Solution strategy for the linear equations. These are some of the issues which will be addressed in this introduction. We will first formulate the problem in the arbitrary Lagrangian-Eulerian framework and introduce the weak formulation of the problem. Next, we discuss the spatial and temporal discretization before we move to the imposition of surface tension boundary conditions. In the final section we discuss the solution of the resulting linear system of equations. (Author). refs., figs., tabs

  14. SCEPTIC, Pressure Drop, Flow Rate, Heat Transfer, Temperature in Reactor Heat Exchanger

    International Nuclear Information System (INIS)

    Kattchee, N.; Reynolds, W.C.

    1975-01-01

    1 - Nature of physical problem solved: SCEPTIC is a program for calculating pressure drops, flow rates, heat transfer rates, and temperatures in heat exchangers such as fuel elements of typical gas- or liquid-cooled nuclear reactors. The effects of turbulent interchange and heat interchange between flow passages are considered. 2 - Method of solution: The computation procedure amounts to a nodal or lumped-parameter type of calculation. The axial mesh size is automatically selected to assure that a prescribed accuracy of results is obtained. 3 - Restrictions on the complexity of the problem: Maximum number of subchannels is 25, maximum number of heated surfaces is 46

  15. Adaptive boundary conditions for exterior flow problems

    CERN Document Server

    Boenisch, V; Wittwer, S

    2003-01-01

    We consider the problem of solving numerically the stationary incompressible Navier-Stokes equations in an exterior domain in two dimensions. This corresponds to studying the stationary fluid flow past a body. The necessity to truncate for numerical purposes the infinite exterior domain to a finite domain leads to the problem of finding appropriate boundary conditions on the surface of the truncated domain. We solve this problem by providing a vector field describing the leading asymptotic behavior of the solution. This vector field is given in the form of an explicit expression depending on a real parameter. We show that this parameter can be determined from the total drag exerted on the body. Using this fact we set up a self-consistent numerical scheme that determines the parameter, and hence the boundary conditions and the drag, as part of the solution process. We compare the values of the drag obtained with our adaptive scheme with the results from using traditional constant boundary conditions. Computati...

  16. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhao, Changhong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zamzam, Admed S. [University of Minnesota; Sidiropoulos, Nicholas D. [University of Minnesota; Taylor, Josh A. [University of Toronto

    2018-01-12

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.

  17. Contribution of Fuzzy Minimal Cost Flow Problem by Possibility Programming

    Directory of Open Access Journals (Sweden)

    S. Fanati Rashidi

    2010-06-01

    Full Text Available Using the concept of possibility proposed by Zadeh, Luhandjula [4, 8] and Buckley [1] have proposed possibility programming. The formulation of Buckley results in nonlinear programming problems. Negi [6] re-formulated the approach of Buckley by using trapezoidal fuzzy numbers and reduced the problem to a fuzzy linear programming problem. Shih and Lee [7] used the Negi approach to solve a minimum cost flow problem with fuzzy costs and fuzzy upper and lower bounds. In this paper we consider the general form of this problem, where all of the parameters and variables are fuzzy, and a model for solving it is proposed.

  18. Analytical methods for heat transfer and fluid flow problems

    CERN Document Server

    Weigand, Bernhard

    2015-01-01

    This book describes useful analytical methods by applying them to real-world problems rather than solving the usual over-simplified classroom problems. The book demonstrates the applicability of analytical methods even for complex problems and guides the reader to a more intuitive understanding of approaches and solutions. Although the solution of Partial Differential Equations by numerical methods is the standard practice in industries, analytical methods are still important for the critical assessment of results derived from advanced computer simulations and the improvement of the underlying numerical techniques. Literature devoted to analytical methods, however, often focuses on theoretical and mathematical aspects and is therefore useless to most engineers. Analytical Methods for Heat Transfer and Fluid Flow Problems addresses engineers and engineering students. The second edition has been updated, the chapters on non-linear problems and on axial heat conduction problems were extended. And worked out exam...

  19. Determination of free boundary problem of flow through porous media

    International Nuclear Information System (INIS)

    Tavares Junior, H.M.; Souza, A.J. de

    1989-01-01

    This paper deals with a free boundary problem of flow through porous media, which is solved by a simplicial method combined with mesh refinement. A variational method on a fixed domain is utilized. (author)

  20. Maximum Solutions of Normalized Ricci Flow on 4-Manifolds

    Science.gov (United States)

    Fang, Fuquan; Zhang, Yuguang; Zhang, Zhenlei

    2008-10-01

    We consider the maximum solution $g(t)$, $t \in [0, +\infty)$, to the normalized Ricci flow. Among other things, we prove that if $(M, \omega)$ is a smooth compact symplectic 4-manifold such that $b_2^+(M) > 1$, and $g(t)$, $t \in [0, \infty)$, is a solution to (1.3) on $M$ whose Ricci curvature satisfies $|\mathrm{Ric}(g(t))| \leq 3$ and additionally $\chi(M) = 3\tau(M) > 0$, then there exist an $m \in \mathbb{N}$ and a sequence of points $\{x_{j,k} \in M\}$, $j = 1, \ldots, m$, such that, by passing to a subsequence, $(M, g(t_k + t), x_{1,k}, \ldots, x_{m,k}) \stackrel{d_{GH}}{\longrightarrow} (\coprod_{j=1}^{m} N_j, g_{\infty}, x_{1,\infty}, \ldots, x_{m,\infty})$, $t \in [0, \infty)$, in the $m$-pointed Gromov-Hausdorff sense for any sequence $t_k \to \infty$, where $(N_j, g_{\infty})$, $j = 1, \ldots, m$, are complete complex hyperbolic orbifolds of complex dimension 2 with at most finitely many isolated orbifold points. Moreover, the convergence is $C^{\infty}$ in the non-singular part of $\coprod_{j=1}^{m} N_j$ and $\mathrm{Vol}_{g_0}(M) = \sum_{j=1}^{m} \mathrm{Vol}_{g_{\infty}}(N_j)$, where $\chi(M)$ (resp. $\tau(M)$) is the Euler characteristic (resp. signature) of $M$.

  1. Problems of unsteady temperature measurements in a pulsating flow of gas

    International Nuclear Information System (INIS)

    Olczyk, A

    2008-01-01

    Unsteady flow temperature is one of the most difficult and complex flow parameters to measure. Main problems concern insufficient dynamic properties of applied sensors and an interpretation of recorded signals, composed of static and dynamic temperatures. An attempt is made to solve these two problems in the case of measurements conducted in a pulsating flow of gas in the 0–200 Hz range of frequencies, which corresponds to real conditions found in exhaust pipes of modern diesel engines. As far as sensor dynamics is concerned, an analysis of requirements related to the thermometer was made, showing that there was no possibility of assuring such a high frequency band within existing solutions. Therefore, a method of double-channel correction of sensor dynamics was proposed and experimentally tested. The results correspond well with the calculations made by means of the proposed model of sensor dynamics. In the case of interpretation of the measured temperature signal, a method for distinguishing its two components was proposed. This decomposition considerably helps with a correct interpretation of unsteady flow phenomena in pipes

  2. Electrical Discharge Platinum Machining Optimization Using Stefan Problem Solutions

    Directory of Open Access Journals (Sweden)

    I. B. Stavitskiy

    2015-01-01

    Full Text Available The article presents the results of a theoretical study of the machinability of platinum by electrical discharge machining (EDM), based on the solution of the thermal problem with a moving phase-change boundary, i.e. the Stefan problem. The solution of this problem makes it possible to determine the melt penetration depth of the material surface under a given heat flow, as a function of the duration of its action and the physical properties of the processed material. To determine rational EDM operating conditions for platinum, the article suggests relating its workability to the machinability of materials for which rational EDM operating conditions are currently defined. It is shown that at low heat flow densities, corresponding to finishing EDM operating conditions, the processing conditions used for steel 45 are appropriate for platinum machining; for EDM at higher heat flow densities (e.g. 50 GW/m2), copper processing conditions are used for this purpose; and at the high heat flow densities corresponding to heavy roughing EDM it is reasonable to use tungsten processing conditions. The article also shows how the minimum width of the current pulses at which platinum starts melting, and at which the EDM process therefore becomes possible, depends on the heat flow density. It is shown that the processing of platinum is expedient at pulse widths corresponding to values called the effective pulse width. Exceeding these values does not lead to a substantial increase in material removal per pulse, but considerably reduces the maximum repetition rate and, therefore, the EDM capacity. The paper presents the effective pulse width versus the heat flow density, as well as the dependences of the maximum platinum surface melt penetration and the corresponding pulse width on the heat flow density. Results obtained using solutions of the Stefan heat problem can be used to optimize EDM operating conditions for platinum machining.
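
    To illustrate why melt penetration depends on both the heat flow and its duration, the sketch below evaluates the classical one-phase Stefan (Neumann) similarity solution, in which the melt front grows as s(t) = 2λ√(αt). This assumes a constant surface temperature, a simplification of the constant-heat-flux EDM formulation in the paper, and all numbers are invented, order-of-magnitude values rather than the paper's platinum data.

```python
import math

def stefan_lambda(stefan_number):
    """Solve sqrt(pi) * lam * exp(lam**2) * erf(lam) = St for lam by bisection."""
    f = lambda lam: math.sqrt(math.pi) * lam * math.exp(lam * lam) * math.erf(lam) - stefan_number
    lo, hi = 1e-9, 5.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid          # root lies below mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def melt_depth(t, alpha, stefan_number):
    """Melt front position s(t) = 2 * lam * sqrt(alpha * t)."""
    return 2.0 * stefan_lambda(stefan_number) * math.sqrt(alpha * t)

# Invented, order-of-magnitude numbers only (not platinum data from the paper).
alpha = 2.5e-5        # thermal diffusivity, m^2/s
St = 0.5              # Stefan number c_p * (T_surface - T_melt) / L
for t_us in (1.0, 10.0, 100.0):                         # pulse widths in microseconds
    s = melt_depth(t_us * 1e-6, alpha, St)
    print(f"t = {t_us:6.1f} us  ->  melt depth ~ {s * 1e6:.2f} um")
```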

  3. Improved teaching-learning-based and JAYA optimization algorithms for solving flexible flow shop scheduling problems

    Science.gov (United States)

    Buddala, Raviteja; Mahapatra, Siba Sankar

    2017-11-01

    Flexible flow shop (or a hybrid flow shop) scheduling problem is an extension of classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages) with each stage having only one machine. If any stage contains more than one machine for providing alternate processing facility, then the problem becomes a flexible flow shop problem (FFSP). FFSP which contains all the complexities involved in a simple flow shop and parallel machine scheduling problems is a well-known NP-hard (Non-deterministic polynomial time) problem. Owing to high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving FFSP is rather tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and JAYA algorithm are chosen for the study because these are not only recent meta-heuristics but they do not require tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after few iterations and get trapped at the local optima. To alleviate such drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, mutation strategy (inspired from genetic algorithm) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
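
    As a hedged sketch of the parameter-free JAYA update mentioned above (applied here to a continuous toy function; the authors' FFSP version additionally needs a permutation decoding, the proposed local search, and mutation), every candidate moves toward the current best solution and away from the current worst:

```python
import random

def sphere(x):
    """Toy objective to minimize (stands in for makespan after decoding a schedule)."""
    return sum(v * v for v in x)

def jaya(obj, dim=5, pop_size=20, iters=200, lo=-10.0, hi=10.0):
    """Basic JAYA: no algorithm-specific parameters to tune, only population size and iterations."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        scores = [obj(x) for x in pop]
        best = pop[scores.index(min(scores))]
        worst = pop[scores.index(max(scores))]
        for k, x in enumerate(pop):
            cand = []
            for i in range(dim):
                r1, r2 = random.random(), random.random()
                v = x[i] + r1 * (best[i] - abs(x[i])) - r2 * (worst[i] - abs(x[i]))
                cand.append(min(hi, max(lo, v)))         # keep within bounds
            if obj(cand) < obj(x):                        # greedy acceptance
                pop[k] = cand
    return min(pop, key=obj)

print([round(v, 3) for v in jaya(sphere)])   # moves toward the optimum at the origin
```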

  4. TOUGH Simulations of the Updegraff's Set of Fluid and Heat Flow Problems

    Energy Technology Data Exchange (ETDEWEB)

    Moridis, G.J.; Pruess (editor), K.

    1992-11-01

    The TOUGH code [Pruess, 1987] for two-phase flow of water, air, and heat in permeable media has been exercised on a suite of test problems originally selected and simulated by C. D. Updegraff [1989]. These include five 'verification' problems for which analytical or numerical solutions are available, and three 'validation' problems that model laboratory fluid and heat flow experiments. All problems could be run without any code modifications (*). Good and efficient numerical performance, as well as accurate results, were obtained throughout. Additional code verification and validation problems from the literature are briefly summarized, and suggestions are given for proper applications of TOUGH and related codes.

  5. MULTICRITERIA HYBRID FLOW SHOP SCHEDULING PROBLEM: LITERATURE REVIEW, ANALYSIS, AND FUTURE RESEARCH

    Directory of Open Access Journals (Sweden)

    Marcia de Fatima Morais

    2014-12-01

    Full Text Available This research focuses on the Hybrid Flow Shop production scheduling problem, which is one of the most difficult problems to solve. The literature points to several studies that focus on the Hybrid Flow Shop scheduling problem with single-criterion objective functions. However, many real-world problems involve several objective functions, which can often compete and conflict, leading researchers to direct their efforts toward the development of methods that take this variant into consideration. The goal of the study is to review and analyze the methods in the literature for solving the Hybrid Flow Shop production scheduling problem with multicriteria objective functions. The analyses were performed over papers that have been published over the years, considering the parallel machine types, the approach used to develop solution methods, the type of method developed, the objective function, the performance criterion adopted, and the additional constraints considered. The review and analysis of 46 papers showed opportunities for future research on this topic, including the following: (i) use uniform and dedicated parallel machines, (ii) use exact and metaheuristic approaches, (iii) develop lower and upper bounds, dominance relations, and different search strategies to improve the computational time of the exact methods, (iv) develop other types of metaheuristics, (v) work with anticipatory setups, and (vi) add constraints faced by the production systems themselves.

  6. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    Science.gov (United States)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing the completion time (makespan), the total tardiness time, and the total machine idle time. Flow shop scheduling models continually evolve to represent real production systems more accurately. Since flow shop scheduling is an NP-hard problem, metaheuristics are the most suitable solution methods. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, PSO must be modified to fit the problem. The modification is done by using a probability transition matrix mechanism. To handle the multi-objective problem, we use Pareto optimality (MPSO). The results of MPSO are better than those of PSO because the MPSO solution set has a higher probability of containing the optimal solution; moreover, the MPSO solution set is closer to the optimal solution.

  7. Sample problem calculations related to two-phase flow transients in a PWR relief-piping network

    International Nuclear Information System (INIS)

    Shin, Y.W.; Wiedermann, A.H.

    1981-03-01

    Two sample problems related with the fast transients of water/steam flow in the relief line of a PWR pressurizer were calculated with a network-flow analysis computer code STAC (System Transient-Flow Analysis Code). The sample problems were supplied by EPRI and are designed to test computer codes or computational methods to determine whether they have the basic capability to handle the important flow features present in a typical relief line of a PWR pressurizer. It was found necessary to implement into the STAC code a number of additional boundary conditions in order to calculate the sample problems. This includes the dynamics of the fluid interface that is treated as a moving boundary. This report describes the methodologies adopted for handling the newly implemented boundary conditions and the computational results of the two sample problems. In order to demonstrate the accuracies achieved in the STAC code results, analytical solutions are also obtained and used as a basis for comparison

  8. An Analytical Model for Multilayer Well Production Evaluation to Overcome Cross-Flow Problem

    KAUST Repository

    Hakiki, Farizal; Wibowo, Aris T.; Rahmawati, Silvya D.; Yasutra, Amega; Sukarno, Pudjo

    2017-01-01

    One of the major concerns in a multi-layer system is that interlayer cross-flow may occur if reservoir fluids are produced from commingled layers that have unequal initial pressures. A reservoir commonly has a higher average reservoir pressure (pore fluid pressure) as it goes deeper; this trend, however, is not necessarily followed by the reservoir productivity or injectivity. A reservoir with quite low average pressure and high injectivity tends to experience the cross-flow problem, a phenomenon in which fluid from a bottom layer flows into an upper layer. This restricts the upper-layer fluid from flowing into the wellbore, as if there were an injection treatment from the bottom layer. Because a production well is considered, the study uses the productivity index, rather than the injectivity index, as the parameter that accounts for the cross-flow problem. The analytical study models the multilayer reservoir with the aim of avoiding the cross-flow problem, and the analytical model was tested with both hypothetical and real field data. The scope of this study is to: (a) develop a mathematical solution to determine the production rate from each layer; and (b) assess different scenarios to optimize the production rate, namely the pump setting depth and the performance of an in-situ choke (ISC) installation. The ISC acts like an inflow control device (ICD) that helps to reduce cross-flow occurrence. A macro program was used to write the code and develop the interface, and the analytical model is solved with a fast iterative procedure. Comparison of the results shows that the mathematical solution is in good agreement with results derived from commercial software.

  10. On non-permutation solutions to some two machine flow shop scheduling problems

    NARCIS (Netherlands)

    V. Strusevich (Vitaly); P.J. Zwaneveld (Peter)

    1994-01-01

    In this paper, we study two versions of the two machine flow shop scheduling problem, where schedule length is to be minimized. First, we consider the two machine flow shop with setup, processing, and removal times separated. It is shown that an optimal solution need not be a permutation

  11. Modeling and Solving the Liner Shipping Service Selection Problem

    DEFF Research Database (Denmark)

    Karsten, Christian Vad; Balakrishnan, Anant

    We address a tactical planning problem, the Liner Shipping Service Selection Problem (LSSSP), facing container shipping companies. Given estimated demand between various ports, the LSSSP entails selecting the best subset of non-simple cyclic sailing routes from a given pool of candidate routes...... to accurately model transshipment costs and incorporate routing policies such as maximum transit time, maritime cabotage rules, and operational alliances. Our hop-indexed arc flow model is smaller and easier to solve than path flow models. We outline a preprocessing procedure that exploits both the routing...... requirements and the hop limits to reduce problem size, and describe techniques to accelerate the solution procedure. We present computational results for realistic problem instances from the benchmark suite LINER-LIB....

  12. Thermodynamics, maximum power, and the dynamics of preferential river flow structures at the continental scale

    Directory of Open Access Journals (Sweden)

    A. Kleidon

    2013-01-01

    Full Text Available The organization of drainage basins shows some reproducible phenomena, as exemplified by self-similar fractal river network structures and typical scaling laws, and these have been related to energetic optimization principles, such as minimization of stream power, minimum energy expenditure or maximum "access". Here we describe the organization and dynamics of drainage systems using thermodynamics, focusing on the generation, dissipation and transfer of free energy associated with river flow and sediment transport. We argue that the organization of drainage basins reflects the fundamental tendency of natural systems to deplete driving gradients as fast as possible through the maximization of free energy generation, thereby accelerating the dynamics of the system. This effectively results in the maximization of sediment export to deplete topographic gradients as fast as possible and potentially involves large-scale feedbacks to continental uplift. We illustrate this thermodynamic description with a set of three highly simplified models related to water and sediment flow and describe the mechanisms and feedbacks involved in the evolution and dynamics of the associated structures. We close by discussing how this thermodynamic perspective is consistent with previous approaches and the implications that such a thermodynamic description has for the understanding and prediction of sub-grid scale organization of drainage systems and preferential flow structures in general.

  13. Solving implicit multi-mesh flow and conjugate heat transfer problems with RELAP-7

    International Nuclear Information System (INIS)

    Zou, L.; Peterson, J.; Zhao, H.; Zhang, H.; Andrs, D.; Martineau, R.

    2013-01-01

    The fully implicit simulation capability of RELAP-7 to solve multi-mesh flow and conjugate heat transfer problems for reactor system safety analysis is presented. Compared to general single-mesh simulations, the reactor system safety analysis-type of code has unique challenges due to its highly simplified, interconnected, one-dimensional, and zero-dimensional flow network describing multiple physics with significantly different time and length scales. To use the Jacobian-free Newton Krylov-type of solver, preconditioning is generally required for the Krylov method. The uniqueness of the reactor safety analysis-type of code in treating the interconnected flow network and conjugate heat transfer also introduces challenges in providing preconditioning matrix. Typical flow and conjugate heat transfer problems involved in reactor safety analysis using RELAP-7, as well as the special treatment on the preconditioning matrix are presented in detail. (authors)

  14. Flow Formulation-based Model for the Curriculum-based Course Timetabling Problem

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian Fink; Kristiansen, Simon; Sørensen, Matias

    2015-01-01

    In this work we will present a new mixed integer programming formulation for the curriculum-based course timetabling problem. We show that the model contains an underlying network model by dividing the problem into two models and then connecting the two models back into one model using a maximum flow problem. This decreases the number of integer variables significantly and improves the performance compared to the basic formulation. It also shows competitiveness with other approaches based on mixed integer programming from the literature and improves the currently best known lower bound on one data instance in the benchmark data set from the second international timetabling competition.

  15. To the elementary theory of critical (maximum) flow rate of two-phase mixture in channels with various sections

    International Nuclear Information System (INIS)

    Nigmatulin, B.I.; Soplenkov, K.I.

    1978-01-01

    On the basis of the concepts of two-phase dispersive flow with various structures (bubble, vapour-drop, etc.), and within the framework of a two-velocity, two-temperature, one-dimensional stationary flow model that accounts for phase transitions, the conditions have been determined under which a critical (maximum) flow rate of a two-phase mixture is achieved during its outflow from a channel of preset geometry. It is shown that, for the chosen set of two-phase flow equations with known deceleration and structure parameters, one of the following critical conditions is satisfied: either the solution of the set of equations corresponding to the critical flow rate is a special one, i.e. it passes through a singular point located between the minimum and outlet channel sections where the carrier phase velocity approaches the decelerated sound speed in the mixture; or the determinant of the initial set of equations equals zero at the outlet channel section, i.e. the gradients of the main flow parameters tend to +-infinity in this section, and the carrier phase velocity again approaches the decelerated sound velocity in the mixture

  16. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    Science.gov (United States)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs the Polynomial Chaos expansion to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in the unsaturated flow problem. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technology is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on unsaturated flow numerical cases. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, the RAPCKF is more applicable in strongly nonlinear and high dimensional problems.
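
    For orientation, the EnKF analysis step that PCKF and RAPCKF aim to make cheaper can be written in a few lines; this is a generic perturbed-observation EnKF update (not the proposed RAPCKF), and all data in the usage example are invented:

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """One EnKF analysis step with perturbed observations.
    ensemble: (n_ens, n_state); H: (n_obs, n_state); y: (n_obs,); R: (n_obs, n_obs)."""
    n_ens = ensemble.shape[0]
    x_mean = ensemble.mean(axis=0)
    A = ensemble - x_mean                                  # ensemble anomalies
    P = A.T @ A / (n_ens - 1)                              # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens)   # perturbed observations
    return ensemble + (Y - ensemble @ H.T) @ K.T           # analysis ensemble

# Invented one-parameter toy: prior mean 1.0, a single observation of 2.0.
rng = np.random.default_rng(0)
ens = rng.normal(1.0, 0.5, size=(50, 1))
H = np.array([[1.0]]); R = np.array([[0.1]])
post = enkf_update(ens, H, np.array([2.0]), R, rng)
print(post.mean(), post.std())       # pulled toward the observation, spread reduced
```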

  17. Optimal Results and Numerical Simulations for Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Tao Ren

    2012-01-01

    Full Text Available This paper considers the m-machine flow shop problem with two objectives: makespan with release dates and total quadratic completion time, respectively. For Fm|rj|Cmax, we prove the asymptotic optimality for any dense scheduling when the problem scale is large enough. For Fm‖ΣCj2, improvement strategy with local search is presented to promote the performance of the classical SPT heuristic. At the end of the paper, simulations show the effectiveness of the improvement strategy.

  18. Existence and uniqueness of solution for a model problem of transonic flow

    International Nuclear Information System (INIS)

    Tangmanee, S.

    1985-11-01

    A model problem of transonic flow ''the Tricomi equation'' bounded by the rectangular-curve boundary is studied. We transform the model problem into a symmetric positive system and an admissible boundary condition is posed. We show that with some conditions the existence and uniqueness of the solution are guaranteed. (author)

  19. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of the fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications that make it suitable for the application of the maximum principle. The solution for the optimum distribution of the fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  20. Mixed hybrid finite elements and streamline computation for the potential flow problem

    NARCIS (Netherlands)

    Kaasschieter, E.F.; Huijben, A.J.M.

    1992-01-01

    An important class of problems in mathematical physics involves equations of the form −∇ · (A∇φ) = f. In a variety of problems it is desirable to obtain an accurate approximation of the flow quantity u = −A∇φ. Such an accurate approximation can be determined by the mixed finite element method. In

  1. Accelerated solution of non-linear flow problems using Chebyshev iteration polynomial based RK recursions

    Energy Technology Data Exchange (ETDEWEB)

    Lorber, A.A.; Carey, G.F.; Bova, S.W.; Harle, C.H. [Univ. of Texas, Austin, TX (United States)

    1996-12-31

    The connection between the solution of linear systems of equations by iterative methods and explicit time stepping techniques is used to accelerate to steady state the solution of ODE systems arising from discretized PDEs which may involve either physical or artificial transient terms. Specifically, a class of Runge-Kutta (RK) time integration schemes with extended stability domains has been used to develop recursion formulas which lead to accelerated iterative performance. The coefficients for the RK schemes are chosen based on the theory of Chebyshev iteration polynomials in conjunction with a local linear stability analysis. We refer to these schemes as Chebyshev Parameterized Runge Kutta (CPRK) methods. CPRK methods of one to four stages are derived as functions of the parameters which describe an ellipse E which the stability domain of the methods is known to contain. Of particular interest are two-stage, first-order CPRK and four-stage, first-order methods. It is found that the former method can be identified with any two-stage RK method through the correct choice of parameters. The latter method is found to have a wide range of stability domains, with a maximum extension of 32 along the real axis. Recursion performance results are presented below for a model linear convection-diffusion problem as well as non-linear fluid flow problems discretized by both finite-difference and finite-element methods.

  2. Finite element flow analysis; Proceedings of the Fourth International Symposium on Finite Element Methods in Flow Problems, Chuo University, Tokyo, Japan, July 26-29, 1982

    Science.gov (United States)

    Kawai, T.

    Among the topics discussed are the application of FEM to nonlinear free surface flow, Navier-Stokes shallow water wave equations, incompressible viscous flows and weather prediction, the mathematical analysis and characteristics of FEM, penalty function FEM, convective, viscous, and high Reynolds number FEM analyses, the solution of time-dependent, three-dimensional and incompressible Navier-Stokes equations, turbulent boundary layer flow, FEM modeling of environmental problems over complex terrain, and FEM's application to thermal convection problems and to the flow of polymeric materials in injection molding processes. Also covered are FEMs for compressible flows, including boundary layer flows and transonic flows, hybrid element approaches for wave hydrodynamic loadings, FEM acoustic field analyses, and FEM treatment of free surface flow, shallow water flow, seepage flow, and sediment transport. Boundary element methods and FEM computational technique topics are also discussed. For individual items see A84-25834 to A84-25896

  3. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  4. Hand grip strength and maximum peak expiratory flow: determinants of bone mineral density of adolescent students.

    Science.gov (United States)

    Cossio-Bolaños, Marco; Lee-Andruske, Cynthia; de Arruda, Miguel; Luarte-Rocha, Cristian; Almonacid-Fierro, Alejandro; Gómez-Campos, Rossana

    2018-03-02

    Maintaining and building healthy bones during the lifetime requires a complicated interaction between a number of physiological and lifestyle factors. Our goal of this study was to analyze the association between hand grip strength and the maximum peak expiratory flow with bone mineral density and content in adolescent students. The research team studied 1427 adolescent students of both sexes (750 males and 677 females) between the ages of 11.0 and 18.9 years in the Maule Region of Talca (Chile). Weight, standing height, sitting height, hand grip strength (HGS), and maximum peak expiratory flow (PEF) were measured. Furthermore, bone mineral density (BMD) and total body bone mineral content (BMC) were determined by using the Dual-Energy X-Ray Absorptiometry (DXA). Hand grip strength and PEF were categorized in tertiles (lowest, middle, and highest). Linear regression was performed in steps to analyze the relationship between the variables. Differences between categories were determined through ANOVA. In males, the hand grip strength explained 18-19% of the BMD and 20-23% of the BMC. For the females, the percentage of variation occurred between 12 and 13% of the BMD and 17-18% of the BMC. The variation of PEF for the males was observed as 33% of the BMD and 36% of the BMC. For the females, both the BMD and BMC showed a variation of 19%. The HGS and PEF were divided into three categories (lowest, middle, and highest). In both cases, significant differences occurred in bone density health between the three categories. In conclusion, the HGS and the PEF related positively to the bone density health of both sexes of adolescent students. The adolescents with poor values for hand grip strength and expiratory flow showed reduced values of BMD and BMC for the total body. Furthermore, the PEF had a greater influence on bone density health with respect to the HGS of the adolescents of both sexes.

  5. Approximation and hardness results for the maximum edge q-coloring problem

    DEFF Research Database (Denmark)

    Adamaszek, Anna Maria; Popa, Alexandru

    2016-01-01

    We consider the problem of coloring edges of a graph subject to the following constraints: for every vertex v, all the edges incident with v have to be colored with at most q colors. The goal is to find a coloring satisfying the above constraints and using the maximum number of colors. Notice...... ϵ>0 and any q≥2 assuming the unique games conjecture (UGC), or 1+−ϵ for any ϵ>0 and any q≥3 (≈1.19 for q=2) assuming P≠NP. These results hold even when the considered graphs are bipartite. On the algorithmic side, we restrict to the case q=2, since this is the most important in practice and we show...... a 5/3-approximation algorithm for graphs which have a perfect matching....

  6. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  7. Grid dependency of wall heat transfer for simulation of natural convection flow problems

    NARCIS (Netherlands)

    Loomans, M.G.L.C.; Seppänen, O.; Säteri, J.

    2007-01-01

    In the indoor environment natural convection is a well known air flow phenomenon. In numerical simulations applying the CFD technique it is also known as a flow problem that is difficult to solve. Alternatives are available to overcome the limitations of the default approach (standard k-e model with

  8. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    Science.gov (United States)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
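
    A minimal sketch of the WSPT ordering discussed above, applied to a permutation flow shop and scored by the total weighted quadratic completion time; the instance data are invented and the consistency condition itself is not checked here:

```python
def completion_times(perm, proc):
    """Completion time of each job on the last machine of a permutation flow shop.
    proc[j][i] is the processing time of job j on machine i."""
    m = len(proc[0])
    finish = [0.0] * m                        # finish time of the previous job on each machine
    completions = []
    for j in perm:
        for i in range(m):
            start = max(finish[i], finish[i - 1] if i > 0 else 0.0)
            finish[i] = start + proc[j][i]
        completions.append(finish[-1])
    return completions

def wspt_schedule(proc, weights):
    """Order jobs by weighted shortest processing time (total work / weight, ascending)."""
    return sorted(range(len(proc)), key=lambda j: sum(proc[j]) / weights[j])

# Hypothetical 4-job, 3-machine instance (numbers invented for illustration).
proc = [[3, 2, 4], [1, 5, 2], [2, 2, 2], [4, 1, 3]]
weights = [2, 1, 3, 1]
perm = wspt_schedule(proc, weights)
obj = sum(w * c * c for w, c in zip((weights[j] for j in perm), completion_times(perm, proc)))
print(perm, obj)   # WSPT order and its total weighted quadratic completion time
```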

  9. New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times

    Science.gov (United States)

    Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid

    2017-09-01

    In the literature, the application of multi-objective dynamic scheduling problem and simple priority rules are widely studied. Although these rules are not efficient enough due to simplicity and lack of general insight, composite dispatching rules have a very suitable performance because they result from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objective of the problem is minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is made to examine the performances of scheduling rules considering four new heuristic rules and the six adapted heuristic rules from the literature. It is clear from the experimental results that composite dispatching rules that are formed from genetic programming have a better performance in minimization of mean flow time and mean tardiness than others.

  10. Weakly and strongly polynomial algorithms for computing the maximum decrease in uniform arc capacities

    Directory of Open Access Journals (Sweden)

    Ghiyasvand Mehdi

    2016-01-01

    Full Text Available In this paper, a new problem on a directed network is presented. Let D be a feasible network such that all arc capacities are equal to U. Given a t > 0, the network D with arc capacities U - t is called the t-network. The goal of the problem is to compute the largest t such that the t-network is feasible. First, we present a weakly polynomial time algorithm to solve this problem, which runs in O(log(nU)) maximum flow computations, where n is the number of nodes. Then, an O(m2n) time approach is presented, where m is the number of arcs. Both the weakly and strongly polynomial algorithms are inspired by McCormick and Ervolina (1994).
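
    A hedged sketch of the weakly polynomial idea described above: binary-search the largest integer t for which the t-network stays feasible, deciding feasibility of each candidate with one maximum flow computation. The is_feasible helper is a placeholder for that check (e.g., testing whether the required supplies and demands can still be routed with capacities U - t); it is an assumption for illustration, not code from the paper.

```python
def largest_t(U, is_feasible):
    """Binary search over integer t in [0, U]; the t-network has all arc capacities U - t.
    is_feasible(t) must return True iff the t-network is feasible (one max-flow computation),
    and is assumed to be monotone: once infeasible, larger t stays infeasible."""
    lo, hi = 0, U
    while lo < hi:
        mid = (lo + hi + 1) // 2      # bias upward so the loop terminates
        if is_feasible(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Illustrative stand-in: pretend the network stays feasible while capacities are at least 7,
# i.e. for t <= U - 7 (a real check would run a max flow on the t-network instead).
U = 20
print(largest_t(U, lambda t: U - t >= 7))   # -> 13, found with O(log U) feasibility checks
```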

  11. Managing the Budget: Stock-Flow Reasoning and the CO2 Accumulation Problem.

    Science.gov (United States)

    Newell, Ben R; Kary, Arthur; Moore, Chris; Gonzalez, Cleotilde

    2016-01-01

    The majority of people show persistent poor performance in reasoning about "stock-flow problems" in the laboratory. An important example is the failure to understand the relationship between the "stock" of CO2 in the atmosphere, the "inflow" via anthropogenic CO2 emissions, and the "outflow" via natural CO2 absorption. This study addresses potential causes of reasoning failures in the CO2 accumulation problem and reports two experiments involving a simple re-framing of the task as managing an analogous financial (rather than CO2 ) budget. In Experiment 1 a financial version of the task that required participants to think in terms of controlling debt demonstrated significant improvements compared to a standard CO2 accumulation problem. Experiment 2, in which participants were invited to think about managing savings, suggested that this improvement was fortuitous and coincidental rather than due to a fundamental change in understanding the stock-flow relationships. The role of graphical information in aiding or abetting stock-flow reasoning was also explored in both experiments, with the results suggesting that graphs do not always assist understanding. The potential for leveraging the kind of reasoning exhibited in such tasks in an effort to change people's willingness to reduce CO2 emissions is briefly discussed. Copyright © 2015 Cognitive Science Society, Inc.

  12. Parallel patterns determination in solving cyclic flow shop problem with setups

    Directory of Open Access Journals (Sweden)

    Bożejko Wojciech

    2017-06-01

    The subject of this work is a new idea of blocks for the cyclic flow shop problem with setup times, using multiple patterns of different sizes, determined for each machine, that constitute an optimal schedule of cities for the traveling salesman problem (TSP). We propose to take advantage of the Intel Xeon Phi parallel computing environment during the so-called 'blocks' determination based on the patterns, in effect significantly improving the quality of the obtained results.

  13. Scheduling stochastic two-machine flow shop problems to minimize expected makespan

    Directory of Open Access Journals (Sweden)

    Mehdi Heydari

    2013-07-01

    During the past few years, despite tremendous contributions on the deterministic flow shop problem, only a limited number of works have been dedicated to stochastic cases. This paper examines stochastic scheduling problems in a two-machine flow shop environment for expected makespan minimization, where the processing times of jobs are normally distributed. Since jobs have stochastic processing times, the expected makespan is minimized by minimizing the expected sum of the second machine's free times. In other words, by minimizing the waiting times of the second machine, it is possible to reach the minimum of the objective function. A mathematical method is proposed which utilizes the properties of the normal distribution. Furthermore, this method can be used as a heuristic for other distributions, as long as the means and variances are available. The performance of the proposed method is explored using some numerical examples.
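
    As a rough companion to the analytical approach above, the expected makespan of a fixed sequence in a two-machine flow shop with normally distributed processing times can also be estimated by simple Monte Carlo sampling; the data, names, and the simulation itself are illustrative assumptions, since the paper's own method is analytical.

    ```python
    # Monte Carlo estimate of the expected makespan of a fixed job sequence in a
    # two-machine flow shop with normal processing times (illustrative only).
    import random

    def expected_makespan(seq, mu, sigma, samples=20000, seed=1):
        """mu[j] = (mean on M1, mean on M2); sigma[j] likewise."""
        random.seed(seed)
        total = 0.0
        for _ in range(samples):
            c1 = c2 = 0.0
            for j in seq:
                p1 = max(0.0, random.gauss(mu[j][0], sigma[j][0]))
                p2 = max(0.0, random.gauss(mu[j][1], sigma[j][1]))
                c1 += p1                     # machine 1 processes jobs back to back
                c2 = max(c2, c1) + p2        # machine 2 may have to wait for machine 1
            total += c2
        return total / samples

    mu = [(4, 3), (2, 5), (3, 3)]
    sigma = [(0.5, 0.4), (0.3, 0.6), (0.4, 0.4)]
    print(expected_makespan([1, 2, 0], mu, sigma))
    ```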

  14. Identifying the optimal HVOF spray parameters to attain minimum porosity and maximum hardness in iron based amorphous metallic coatings

    Directory of Open Access Journals (Sweden)

    S. Vignesh

    2017-04-01

    Flow-based erosion–corrosion problems are very common in fluid-handling equipment such as propellers, impellers, and pumps in warships and submarines. Though many coating materials are available to combat erosion–corrosion damage in the above components, iron-based amorphous coatings are considered to be more effective for combating erosion–corrosion problems. The high-velocity oxy-fuel (HVOF) spray process is considered a better process for coating iron-based amorphous powders. In this investigation, an iron-based amorphous metallic coating was developed on a 316 stainless steel substrate using the HVOF spray technique. Empirical relationships were developed to predict the porosity and microhardness of the iron-based amorphous coating, incorporating HVOF spray parameters such as oxygen flow rate, fuel flow rate, powder feed rate, carrier gas flow rate, and spray distance. Response surface methodology (RSM) was used to identify the optimal HVOF spray parameters to attain a coating with minimum porosity and maximum hardness.

  15. The collapsing of multigroup cross sections in optimization problems solved by means of the Pontryagin maximum principle in nuclear reactor dynamics

    International Nuclear Information System (INIS)

    Anton, V.

    1979-12-01

    The collapsing formulae for the optimization problems solved by means of the Pontryagin maximum principle in nuclear reactor dynamics are presented. A comparison with the corresponding formulae of the static case is given too. (author)

  16. Heuristics methods for the flow shop scheduling problem with separated setup times

    Directory of Open Access Journals (Sweden)

    Marcelo Seido Nagano

    2012-06-01

    This paper deals with the permutation flow shop scheduling problem with separated machine setup times. As a result of an investigation of the problem characteristics, four heuristic methods are proposed that construct the sequencing solution by analogy with the asymmetric traveling salesman problem, with the objective of minimizing makespan. Experimental results show that one of the new heuristic methods provides high-quality solutions in comparison with the evaluated methods from the literature.

  17. A Special Class of Univalent Functions in Hele-Shaw Flow Problems

    Directory of Open Access Journals (Sweden)

    Paula Curt

    2011-01-01

    We study the time evolution of the free boundary of a viscous fluid for planar flows in Hele-Shaw cells under injection. Applying methods from the theory of univalent functions, we prove the invariance in time of the Φ-likeness property (a geometric property which includes starlikeness and spiral-likeness) for two basic cases: the inner problem and the outer problem. We study both zero and nonzero surface tension models. Certain particular cases are also presented.

  18. Robust numerical methods for boundary-layer equations for a model problem of flow over a symmetric curved surface

    NARCIS (Netherlands)

    A.R. Ansari; B. Hossain; B. Koren (Barry); G.I. Shishkin (Gregori)

    2007-01-01

    We investigate the model problem of flow of a viscous incompressible fluid past a symmetric curved surface when the flow is parallel to its axis. This problem is known to exhibit boundary layers. As the problem does not have solutions in closed form, it is modelled by boundary-layer equations.

  19. A New Artificial Immune System Algorithm for Multiobjective Fuzzy Flow Shop Problems

    Directory of Open Access Journals (Sweden)

    Cengiz Kahraman

    2009-12-01

    In this paper a new artificial immune system (AIS) algorithm is proposed to solve multi-objective fuzzy flow shop scheduling problems. A new mutation operator is also described for this AIS. Fuzzy sets are used to model processing times and due dates. The objectives are to minimize the average tardiness and the number of tardy jobs. The developed AIS algorithm is tested on real-world data collected from an engine cylinder liner manufacturing process. The feasibility and effectiveness of the proposed AIS are demonstrated by comparing it with genetic algorithms. Computational results demonstrate that the proposed AIS algorithm is a more effective meta-heuristic for multi-objective flow shop scheduling problems with fuzzy processing times and due dates.

  20. An analytical solution to the heat transfer problem in thick-walled hunt flow

    International Nuclear Information System (INIS)

    Bluck, Michael J; Wolfendale, Michael J

    2017-01-01

    Highlights: • Convective heat transfer in Hunt type flow of a liquid metal in a rectangular duct. • Analytical solution to the H1 constant peripheral temperature in a rectangular duct. • New H1 result demonstrating the enhancement of heat transfer due to flow distortion by the applied magnetic field. • Analytical solution to the H2 constant peripheral heat flux in a rectangular duct. • New H2 result demonstrating the reduction of heat transfer due to flow distortion by the applied magnetic field. • Results are important for validation of CFD in magnetohydrodynamics and for implementation of systems code approaches. - Abstract: The flow of a liquid metal in a rectangular duct, subject to a strong transverse magnetic field, is of interest in a number of applications. An important application of such flows is in the context of coolants in fusion reactors, where heat is transferred to a lead-lithium eutectic. It is vital, therefore, that the heat transfer mechanisms are understood. Forced convection heat transfer is strongly dependent on the flow profile. In the hydrodynamic case, Nusselt numbers and the like have long been well characterised in duct geometries. In the case of liquid metals in strong magnetic fields (magnetohydrodynamics), the flow profiles are very different and one can expect a concomitant effect on convective heat transfer. For fully developed laminar flows, the magnetohydrodynamic problem can be characterised in terms of two coupled partial differential equations. The problem of heat transfer for perfectly electrically insulating boundaries (Shercliff case) has been studied previously (Bluck et al., 2015). In this paper, we demonstrate corresponding analytical solutions for the case of conducting Hartmann walls of arbitrary thickness. The flow is very different from the Shercliff case, exhibiting jets near the side walls and core flow suppression which have profound effects on heat transfer.

  1. Study of the Riemann problem and construction of multidimensional Godunov-type schemes for two-phase flow models

    International Nuclear Information System (INIS)

    Toumi, I.

    1990-04-01

    This thesis is devoted to the study of the Riemann problem and the construction of Godunov-type numerical schemes for one- or two-dimensional two-phase flow models. In the first part, we study the Riemann problem for the well-known drift-flux model, which has been widely used for the analysis of thermal-hydraulic transients. We then use this study to construct approximate Riemann solvers and describe the corresponding Godunov-type schemes for a simplified equation of state. For the computation of complex two-phase flows, a weak formulation of Roe's approximate Riemann solver, which gives a method to construct a Roe-averaged Jacobian matrix with a general equation of state, is proposed. For two-dimensional flows, the developed methods are based upon an approximate solver for a two-dimensional Riemann problem, according to Harten-Lax-Van Leer principles. The numerical results for standard test problems show the good behaviour of these numerical schemes for a wide range of flow conditions.

  2. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  3. A discrete firefly meta-heuristic with local search for makespan minimization in permutation flow shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Nader Ghaffari-Nasab

    2010-07-01

    During the past two decades, there has been increasing interest in the permutation flow shop with different types of objective functions, such as minimizing the makespan or the weighted mean flow time. The permutation flow shop is formulated as a mixed integer program and is classified as an NP-hard problem. Therefore, a direct solution is not available and meta-heuristic approaches need to be used to find near-optimal solutions. In this paper, we present a new discrete firefly meta-heuristic to minimize the makespan for the permutation flow shop scheduling problem. The results of the implementation of the proposed method are compared with an existing ant colony optimization technique. The preliminary results indicate that the new proposed method performs better than the ant colony approach for some well-known benchmark problems.

  4. Study of flow over object problems by a nodal discontinuous Galerkin-lattice Boltzmann method

    Science.gov (United States)

    Wu, Jie; Shen, Meng; Liu, Chen

    2018-04-01

    Flow over object problems are studied by a nodal discontinuous Galerkin-lattice Boltzmann method (NDG-LBM) in this work. Different from the standard lattice Boltzmann method, the current method applies the nodal discontinuous Galerkin method to the streaming process in LBM to solve the resultant pure convection equation, in which the spatial discretization is completed on unstructured grids and the low-storage explicit Runge-Kutta scheme is used for time marching. The present method thus overcomes the dependence of the standard LBM on uniform meshes. Moreover, the collision process in the LBM is completed by using the multiple-relaxation-time scheme. After validation of the NDG-LBM by simulating the lid-driven cavity flow, simulations of flows over a fixed circular cylinder, a stationary airfoil and rotating-stationary cylinders are performed. Good agreement of the present results with previous results is achieved, which indicates that the current NDG-LBM is accurate and effective for flow over object problems.

  5. Two phase flow problems in power station boilers

    International Nuclear Information System (INIS)

    Firman, E.C.

    1974-01-01

    The paper outlines some of the waterside thermal and hydrodynamic phenomena relating to design and operation of large boilers in central power stations. The associated programme of work is described with an outline of some results already obtained. By way of introduction, the principal features of conventional and nuclear drum boilers and once-through nuclear heat exchangers are described in so far as they pertain to this area of work. This is followed by discussion of the relevant physical phenomena and problems which arise. For example, the problem of steam entrainment from the drum into the tubes connecting it to the furnace wall tubes is related to its effects on circulation and possible mechanisms of tube failure. Other problems concern the transient associated with start-up or low load operation of plant. The requirement for improved mathematical representation of steady and dynamic performance is mentioned together with the corresponding need for data on heat transfer, pressure loss, hydrodynamic stability, consequences of deposits, etc. The paper concludes with reference to the work being carried out within the C.E.G.B. in relation to the above problems. The facilities employed and the specific studies being made on them are described: these range from field trials on operational boilers to small scale laboratory investigations of underlying two phase flow mechanisms and include high pressure water rigs and a freon rig for simulation studies

  6. Description of internal flow problems by a boundary integral method with dipole panels

    International Nuclear Information System (INIS)

    Krieg, R.; Hailfinger, G.

    1979-01-01

    In reactor safety studies the failure of single components is postulated or sudden accident loadings are assumed and the consequences are investigated. Often as a first consequence highly transient three dimensional flow problems occur. In contrast to classical flow problems, in most of the above cases the fluid velocities are relatively small whereas the accelerations assume high values. As a consequence both, viscosity effects and dynamic pressures which are proportional to the square of the fluid velocities are usually negligible. For cases, where the excitation times are considerably longer than the times necessary for a wave to traverse characteristic regions of the fluid field, also the fluid compressibility is negligible. Under these conditions boundary integral methods are an appropriate tool to deal with the problem. Flow singularities are distributed over the fluid boundaries in such a way that pressure and velocity fields are obtained which satisfy the boundary conditions. In order to facilitate the numerical treatment the fluid boundaries are approximated by a finite number of panels with uniform singularity distributions on each of them. Consequently the pressure and velocity field of the given problem may be obtained by superposition of the corresponding fields due to these panels with their singularity intensities as unknown factors. Then satisfying the boundary conditions in so many boundary points as panels have been introduced, yields a system of linear equations which in general allows for a unique determination of the unknown intensities. (orig./RW)

  7. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-07-01

    An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, is proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, is investigated in a multi-objective manner with regard to the simultaneous minimization of the maximum job completion time and the average latency. In this study, the parameter setting has been carried out using the Taguchi method based on a quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms to show the better performance of the proposed algorithm. The results clearly indicate the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem by considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, which are the real constraints.

  8. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    International Nuclear Information System (INIS)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-01-01

    An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, is proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, is investigated in a multi-objective manner with regard to the simultaneous minimization of the maximum job completion time and the average latency. In this study, the parameter setting has been carried out using the Taguchi method based on a quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms to show the better performance of the proposed algorithm. The results clearly indicate the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem by considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, which are the real constraints.

  9. A service flow model for the liner shipping network design problem

    DEFF Research Database (Denmark)

    Plum, Christian Edinger Munk; Pisinger, David; Sigurd, Mikkel M.

    2014-01-01

    The formulation alleviates issues faced by arc-flow formulations with regard to handling multiple calls to the same port, a problem which has not been fully dealt with by earlier LSNDP formulations. Multiple calls are handled by introducing service nodes, together with port nodes, in a graph representation...... of the network and a penalty for cargo that is not flowed. The model can be used to design liner shipping networks that utilize a container carrier’s assets efficiently and to investigate possible scenarios of changed market conditions. The model is solved as a Mixed Integer Program. Results are presented for the two...

  10. From "E-flows" to "Sed-flows": Managing the Problem of Sediment in High Altitude Hydropower Systems

    Science.gov (United States)

    Gabbud, C.; Lane, S. N.

    2017-12-01

    The connections between stream hydraulics, geomorphology and ecosystems in mountain rivers have been substantially perturbed by humans, for example through flow regulation related to hydropower activities. It is well known that the ecosystem impacts downstream of hydropower dams may be managed by a properly designed compensation release or environmental flows ("e-flows"), and such flows may also include sediment considerations (e.g. to break up bed armor). However, there has been much less attention given to the ecosystem impacts of water intakes (where water is extracted and transferred for storage and/or power production), even though in many mountain systems such intakes may be prevalent. Flow intakes tend to be smaller than dams and because they fill quickly in the presence of sediment delivery, they often need to be flushed, many times within a day in Alpine glaciated catchments with high sediment yields. The associated short duration "flood" flow is characterised by very high sediment concentrations, which may drastically modify downstream habitat, both during the floods but also due to subsequent accumulation of "legacy" sediment. The impacts on flora and fauna of these systems have not been well studied. In addition, there are no guidelines established that might allow the design of "e-flows" that also treat this sediment problem, something we call "sed-flows". Through an Alpine field example, we quantify the hydrological, geomorphological, and ecosystem impacts of Alpine water transfer systems. The high sediment concentrations of these flushing flows lead to very high rates of channel disturbance downstream, superimposed upon long-term and progressive bed sediment accumulation. Monthly macroinvertebrate surveys over almost a two-year period showed that reductions in the flushing rate reduced rates of disturbance substantially, and led to rapid macroinvertebrate recovery, even in the seasons (autumn and winter) when biological activity should be reduced

  11. Finite element approximation to a model problem of transonic flow

    International Nuclear Information System (INIS)

    Tangmanee, S.

    1986-12-01

    A model problem of transonic flow, the Tricomi equation, in a region Ω contained in ℝ² and bounded by a rectangular-curve boundary, is posed in the form of symmetric positive differential equations. The finite element method is then applied. When the triangulation of the closure of Ω is made of quadrilaterals and the approximation space consists of Lagrange polynomials, we obtain error estimates. 14 refs, 1 fig

  12. Dynamic Optimization of a Polymer Flooding Process Based on Implicit Discrete Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

    Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, in which the performance index is the profit to be maximized, the governing equations are the fluid flow equations of polymer flooding, and inequality constraints limit the polymer concentration and injection amount. The optimal control model is discretized by a fully implicit finite-difference method. To cope with the discrete optimal control problem (OCP), the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin’s discrete maximum principle. A modified gradient method with a new adjoint construction is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.

  13. A Local Search Algorithm for the Flow Shop Scheduling Problem with Release Dates

    Directory of Open Access Journals (Sweden)

    Tao Ren

    2015-01-01

    This paper discusses the flow shop scheduling problem with release dates to minimize the makespan. By resequencing the jobs, a modified heuristic algorithm is obtained for handling large-sized problems. Moreover, based on some properties, a local search scheme is provided to improve the heuristic and obtain high-quality solutions for moderate-sized problems. A sequence-independent lower bound is presented to evaluate the performance of the algorithms. A series of simulation results demonstrates the effectiveness of the proposed algorithms.

  14. A network flow model for load balancing in circuit-switched multicomputers

    Science.gov (United States)

    Bokhari, Shahid H.

    1990-01-01

    In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention free matching of sources to sinks which, in turn, tells one how much of the given imbalance can be eliminated without contention.
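
    The correspondence between minimum-cost flows and contention-free routings can be sketched on a toy mesh as below; the topology, unit link capacities, load values, and the use of networkx are illustrative assumptions, and only the simpler feasibility/minimum-cost variant is shown rather than the paper's maximum contention-free matching.

    ```python
    # Toy sketch: each mesh link gets unit capacity so that no two messages share a
    # link (contention-free); excess load is modelled as supply, deficit as demand.
    import networkx as nx

    G = nx.DiGraph()
    mesh_links = [("a", "b"), ("b", "c"), ("a", "d"), ("d", "c"), ("b", "d")]
    for u, v in mesh_links:
        G.add_edge(u, v, capacity=1, weight=1)   # unit capacity forbids link sharing
        G.add_edge(v, u, capacity=1, weight=1)   # links are bidirectional

    # negative demand = supply (excess load), positive demand = deficit load
    nx.set_node_attributes(G, {"a": -2, "b": 0, "c": 1, "d": 1}, "demand")

    # min-cost flow; raises NetworkXUnfeasible if no contention-free transfer exists
    flow = nx.min_cost_flow(G)
    print(flow["a"])   # e.g. {'b': 1, 'd': 1}: one unit routed a->b(->c), one a->d
    ```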

  15. The Planar Sandwich and Other 1D Planar Heat Flow Test Problems in ExactPack

    Energy Technology Data Exchange (ETDEWEB)

    Singleton, Jr., Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-24

    This report documents the implementation of several related 1D heat flow problems in the verification package ExactPack [1]. In particular, the planar sandwich class defined in Ref. [2], as well as the classes PlanarSandwichHot, PlanarSandwichHalf, and other generalizations of the planar sandwich problem, are defined and documented here. A rather general treatment of 1D heat flow is presented, whose main results have been implemented in the class Rod1D. All planar sandwich classes are derived from the parent class Rod1D.

  16. A Bee Colony Optimization Approach for Mixed Blocking Constraints Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Mostafa Khorramizadeh

    2015-01-01

    The flow shop scheduling problem with mixed blocking constraints and makespan minimization is investigated. Taguchi orthogonal arrays and path relinking, along with some efficient local search methods, are used to develop a metaheuristic algorithm based on bee colony optimization. In order to compare the performance of the proposed algorithm, two well-known test problems are considered. Computational results show that the presented algorithm is competitive with well-known algorithms from the literature, especially for large-sized problems.

  17. Asymptotic scalings of developing curved pipe flow

    Science.gov (United States)

    Ault, Jesse; Chen, Kevin; Stone, Howard

    2015-11-01

    Asymptotic velocity and pressure scalings are identified for the developing curved pipe flow problem in the limit of small pipe curvature and high Reynolds numbers. The continuity and Navier-Stokes equations in toroidal coordinates are linearized about Dean's analytical curved pipe flow solution (Dean 1927). Applying appropriate scaling arguments to the perturbation pressure and velocity components and taking the limits of small curvature and large Reynolds number yields a set of governing equations and boundary conditions for the perturbations, independent of any Reynolds number and pipe curvature dependence. Direct numerical simulations are used to confirm these scaling arguments. Fully developed straight pipe flow is simulated entering a curved pipe section for a range of Reynolds numbers and pipe-to-curvature radius ratios. The maximum values of the axial and secondary velocity perturbation components along with the maximum value of the pressure perturbation are plotted along the curved pipe section. The results collapse when the scaling arguments are applied. The numerically solved decay of the velocity perturbation is also used to determine the entrance/development lengths for the curved pipe flows, which are shown to scale linearly with the Reynolds number.

  18. Performance of a vanadium redox flow battery with and without flow fields

    International Nuclear Information System (INIS)

    Xu, Q.; Zhao, T.S.; Zhang, C.

    2014-01-01

    Highlights: • The performances of a VRFB with/without flow fields are compared. • The respective maximum power efficiency occurs at different flow rates. • The battery with flow fields exhibits 5% higher energy efficiency. - Abstract: A flow field is an indispensable component for fuel cells to macroscopically distribute reactants onto electrodes. However, it is still unknown whether flow fields are also required in all-vanadium redox flow batteries (VRFBs). In this work, the performance of a VRFB with flow fields is analyzed and compared with the performance of a VRFB without flow fields. It is demonstrated that the battery with flow fields has a higher discharge voltage at higher flow rates, but exhibits a larger pressure drop. The maximum power-based efficiency occurs at different flow rates for the batteries with and without flow fields. It is found that the battery with flow fields exhibits 5% higher energy efficiency than the battery without flow fields when operating at the flow rates corresponding to each battery's maximum power-based efficiency. Therefore, the inclusion of flow fields in VRFBs can be an effective approach for improving system efficiency.

  19. A filtering technique for solving the advection equation in two-phase flow problems

    International Nuclear Information System (INIS)

    Devals, C.; Heniche, M.; Bertrand, F.; Tanguy, P.A.; Hayes, R.E.

    2004-01-01

    The aim of this work is to develop a numerical strategy for the simulation of two-phase flow in the context of chemical engineering applications. The finite element method has been chosen because of its flexibility to deal with complex geometries. One of the key points of two-phase flow simulation is to determine precisely the position of the interface between the two phases, which is an unknown of the problem. In this case, the interface can be tracked by the advection of the so-called color function. It is well known that the solution of the advection equation by most numerical schemes, including the Streamline Upwind Petrov-Galerkin (SUPG) method, may exhibit spurious oscillations. This work proposes an approach to filter out these oscillations by means of a change of variable that is efficient for both steady state and transient cases. First, the filtering technique will be presented in detail. Then, it will be applied to two-dimensional benchmark problems, namely, the advection skew to the mesh and the Zalesak's problems. (author)

  20. A New Spectral Local Linearization Method for Nonlinear Boundary Layer Flow Problems

    Directory of Open Access Journals (Sweden)

    S. S. Motsa

    2013-01-01

    We propose a simple and efficient method for solving highly nonlinear systems of boundary layer flow problems with exponentially decaying profiles. The algorithm of the proposed method is based on an innovative idea of linearizing and decoupling the governing systems of equations and reducing them into a sequence of subsystems of differential equations which are solved using spectral collocation methods. The applicability of the proposed method, hereinafter referred to as the spectral local linearization method (SLLM), is tested on some well-known boundary layer flow equations. The numerical results presented in this investigation indicate that the proposed method, despite being easy to develop and numerically implement, is very robust in that it converges rapidly to yield accurate results and is more efficient in solving very large systems of nonlinear boundary value problems of the similarity variable boundary layer type. The accuracy and numerical stability of the SLLM can further be improved by using successive overrelaxation techniques.

  1. Performance of Reynolds Averaged Navier-Stokes Models in Predicting Separated Flows: Study of the Hump Flow Model Problem

    Science.gov (United States)

    Cappelli, Daniele; Mansour, Nagi N.

    2012-01-01

    Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.

  2. Mixed integer linear programming for maximum-parsimony phylogeny inference.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2008-01-01

    Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.

  3. A Priority Rule-Based Heuristic for Resource Investment Project Scheduling Problem with Discounted Cash Flows and Tardiness Penalties

    Directory of Open Access Journals (Sweden)

    Amir Abbas Najafi

    2009-01-01

    The resource investment problem with discounted cash flows (RIPDCF) is a class of project scheduling problems. In the RIPDCF, the availability levels of the resources are decision variables, and the goal is to find a schedule such that the net present value of the project cash flows is optimized. In this paper, we consider a new RIPDCF in which tardiness of the project is permitted with a defined penalty. We mathematically formulate the problem and develop a heuristic method to solve it. The results of the performance analysis show that the proposed method is an effective solution approach to the problem.
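
    For context, once a schedule fixes the period in which each cash flow occurs, the net present value objective referred to above can be evaluated as in the short sketch below; the discount rate, cash flows, and completion periods are made-up illustrative values.

    ```python
    # Net present value of scheduled cash flows (illustrative values only).
    def npv(cash_flows, completion_periods, rate):
        """Discount each cash flow back to time zero by its scheduled period."""
        return sum(cf / (1.0 + rate) ** t for cf, t in zip(cash_flows, completion_periods))

    # two activity expenses and a final project payment, discounted at 5% per period
    print(npv([-40.0, -60.0, 250.0], [2, 4, 7], 0.05))
    ```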

  4. A State-of-the-Art Review of the Sensor Location, Flow Observability, Estimation, and Prediction Problems in Traffic Networks

    Directory of Open Access Journals (Sweden)

    Enrique Castillo

    2015-01-01

    A state-of-the-art review of flow observability, estimation, and prediction problems in traffic networks is performed. Since mathematical optimization provides a general framework for all of them, an integrated approach is used to perform the analysis of these problems and consider them as different optimization problems whose data, variables, constraints, and objective functions are the main elements that characterize the problems proposed by different authors. For example, counted, scanned or “a priori” data are the most common data sources; conservation laws, flow nonnegativity, link capacity, flow definition, observation, flow propagation, and specific model requirements form the most common constraints; and least squares, likelihood, possible relative error, mean absolute relative error, and so forth constitute the bases for the objective functions or metrics. The high number of possible combinations of these elements justifies the existence of a wide collection of methods for analyzing static and dynamic situations.

  5. FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.

    Directory of Open Access Journals (Sweden)

    Maxim Nikolaievich Shokhirev

    The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.

  6. CCC, Heat Flow and Mass Flow in Liquid Saturated Porous Media

    International Nuclear Information System (INIS)

    Mangold, D.C.; Lippmann, M.J.; Bodvarsson, G.S.

    1982-01-01

    1 - Description of problem or function: The numerical model CCC (conduction-convection-consolidation) solves the heat and mass flow equations for a fully liquid-saturated, anisotropic porous medium and computes one-dimensional (vertical) consolidation of the simulated systems. The model has been applied to problems in the fields of geothermal reservoir engineering, aquifer thermal energy storage, well testing, radioactive waste isolation, and in situ coal combustion. The code has been validated against analytic solutions for fluid and heat flow, and against a field experiment for underground storage of hot water. 2 - Method of solution: The model employs the Integrated Finite Difference Method (IFDM) in discretizing the saturated porous medium and formulating the governing equations. The sets of equations are solved by an iterative solution technique. The vertical deformation of the medium is calculated using the one-dimensional consolidation theory of Terzaghi. 3 - Restrictions on the complexity of the problem: Maximum of 12 materials. It is assumed that: (a) Darcy's law adequately describes fluid movement through fractured and porous media. (b) The rock and fluid are in thermal equilibrium at any given time. (c) Energy changes due to the fluid compressibility, acceleration and viscous dissipation are neglected. (d) One-dimensional consolidation theory adequately describes the vertical deformation of the medium

  7. Evaluating the performance of constructive heuristics for the blocking flow shop scheduling problem with setup times

    Directory of Open Access Journals (Sweden)

    Mauricio Iwama Takano

    2019-01-01

    This paper addresses the minimization of makespan for the permutation flow shop scheduling problem with blocking and sequence- and machine-dependent setup times, a problem not yet addressed in previous studies. The 14 best-known heuristics for the permutation flow shop problem with blocking and no setup times are presented and then adapted to the problem in two different ways, resulting in 28 different heuristics. The heuristics are then compared using the Taillard database. As there is no other work that addresses the problem with blocking and sequence- and machine-dependent setup times, a database for the setup times was created. The setup time value was uniformly distributed between 1% and 10%, 50%, 100% and 125% of the processing time value. Computational tests are then presented for each of the 28 heuristics, comparing the mean relative deviation of the makespan, the computational time and the percentage of successes of each method. Results show that the heuristics are capable of providing good solutions.
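
    Many constructive heuristics evaluated in studies of this kind build on the classical NEH insertion idea; a minimal sketch for the plain permutation flow shop is given below, ignoring blocking and setup times for brevity, with purely illustrative data.

    ```python
    # Minimal NEH sketch for the plain permutation flow shop (no blocking/setups).
    def makespan(seq, p):
        m = len(p[0])
        comp = [0] * m
        for j in seq:
            for i in range(m):
                comp[i] = max(comp[i], comp[i - 1] if i > 0 else 0) + p[j][i]
        return comp[-1]

    def neh(p):
        order = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # decreasing total work
        seq = [order[0]]
        for j in order[1:]:
            # insert job j at the position giving the smallest partial makespan
            seq = min((seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)),
                      key=lambda s: makespan(s, p))
        return seq

    p = [[5, 3, 4], [2, 6, 3], [4, 4, 2], [3, 2, 5]]
    best = neh(p)
    print(best, makespan(best, p))
    ```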

  8. The Average Network Flow Problem: Shortest Path and Minimum Cost Flow Formulations, Algorithms, Heuristics, and Complexity

    Science.gov (United States)

    2012-09-13

    AFIT/DS/ENS/12-09. Value-focused thinking (VFT) is used sparingly, as is the case across the entirety of the supply chain literature. We provide a VFT tutorial for supply chain

  9. Vectorization on the star computer of several numerical methods for a fluid flow problem

    Science.gov (United States)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady-state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.

  10. Self-organizing hybrid Cartesian grid generation and application to external and internal flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Deister, F.; Hirschel, E.H. [Univ. Stuttgart, IAG, Stuttgart (Germany); Waymel, F.; Monnoyer, F. [Univ. de Valenciennes, LME, Valenciennes (France)

    2003-07-01

    An automatic adaptive hybrid Cartesian grid generation and simulation system is presented together with applications. The primary computational grid is an octree Cartesian grid. A quasi-prismatic grid may be added for resolving the boundary layer region of viscous flow around the solid body. For external flow simulations, the flow solver TAU from the Deutsches Zentrum für Luft- und Raumfahrt (DLR) is integrated into the simulation system. Coarse grids, which are required by the multilevel method, are generated automatically. As an application to an internal problem, the thermal and dynamic modeling of a subway station is presented. (orig.)

  11. Maximum entropy estimation via Gauss-LP quadratures

    NARCIS (Netherlands)

    Thély, Maxime; Sutter, Tobias; Mohajerin Esfahani, P.; Lygeros, John; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    We present an approximation method for a class of parametric integration problems that naturally appear when solving the dual of the maximum entropy estimation problem. Our method builds on a recent generalization of Gauss quadratures via an infinite-dimensional linear program, and utilizes a

  12. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    Science.gov (United States)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received the most attention, while the non-permutation problem (non-PFSP with time lags) seems to be neglected. With the aim of minimizing the makespan while satisfying time-lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed, consisting of an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time-lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within nearly 11% of the computational time of the traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of the traditional GA approach. The proposed research combines the PFSP and non-PFSP with minimal and maximal time-lag considerations, which provides an interesting viewpoint for industrial implementation.
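
    A hedged skeleton of the iterated greedy scheme referred to above is sketched below: destruction removes a few jobs at random and construction greedily re-inserts them, keeping improving sequences. The evaluation here is the plain permutation makespan; the paper's IGTLP/IGTLNP variants additionally enforce the minimal and maximal time-lag constraints, which are omitted, and all parameter values are illustrative.

    ```python
    # Iterated greedy skeleton (destruction + greedy re-insertion), illustrative only.
    import random

    def makespan(seq, p):
        m = len(p[0])
        comp = [0] * m
        for j in seq:
            for i in range(m):
                comp[i] = max(comp[i], comp[i - 1] if i > 0 else 0) + p[j][i]
        return comp[-1]

    def iterated_greedy(p, d=2, iters=200, seed=0):
        random.seed(seed)
        seq = list(range(len(p)))
        best = seq[:]
        for _ in range(iters):
            partial = seq[:]
            removed = [partial.pop(random.randrange(len(partial))) for _ in range(d)]
            for j in removed:                       # greedy re-insertion
                partial = min((partial[:k] + [j] + partial[k:]
                               for k in range(len(partial) + 1)),
                              key=lambda s: makespan(s, p))
            if makespan(partial, p) <= makespan(seq, p):   # accept non-worsening moves
                seq = partial
                if makespan(seq, p) < makespan(best, p):
                    best = seq[:]
        return best

    p = [[5, 3, 4], [2, 6, 3], [4, 4, 2], [3, 2, 5], [1, 5, 2]]
    best = iterated_greedy(p)
    print(best, makespan(best, p))
    ```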

  13. The Cauchy problem for a model of immiscible gas flow with large data

    Energy Technology Data Exchange (ETDEWEB)

    Sande, Hilde

    2008-12-15

    The thesis consists of an introduction and two papers; 1. The solution of the Cauchy problem with large data for a model of a mixture of gases. 2. Front tracking for a model of immiscible gas flow with large data. (AG) refs, figs

  14. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    Given the limited accuracy of force fields, macromolecular simulations sometimes produce results...... are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges. The number of maximum entropy...... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight.

  15. Scrutiny of underdeveloped nanofluid MHD flow and heat conduction in a channel with porous walls

    Directory of Open Access Journals (Sweden)

    M. Fakour

    2014-11-01

    In this paper, laminar fluid flow and heat transfer in a channel with permeable walls in the presence of a transverse magnetic field is investigated. The least squares method (LSM) is used for computing approximate solutions of the nonlinear differential equations governing the problem. We have tried to show the reliability and performance of the present method compared with a numerical method (fourth-order Runge–Kutta) for solving this problem. The influence of four dimensionless numbers, the Hartmann number, Reynolds number, Prandtl number and Eckert number, on the non-dimensional velocity and temperature profiles is considered. The results show that the present analytical method is very close to the numerical method. In general, increasing the Reynolds and Hartmann numbers reduces the nanofluid flow velocity in the channel and raises the maximum temperature, while increasing the Prandtl and Eckert numbers increases the maximum of the dimensionless temperature theta.

  16. Extension of CFD Codes Application to Two-Phase Flow Safety Problems - Phase 3

    International Nuclear Information System (INIS)

    Bestion, D.; Anglart, H.; Mahaffy, J.; Lucas, D.; Song, C.H.; Scheuerer, M.; Zigh, G.; Andreani, M.; Kasahara, F.; Heitsch, M.; Komen, E.; Moretti, F.; Morii, T.; Muehlbauer, P.; Smith, B.L.; Watanabe, T.

    2014-11-01

    The Writing Group 3 on the extension of CFD to two-phase flow safety problems was formed following recommendations made at the 'Exploratory Meeting of Experts to Define an Action Plan on the Application of Computational Fluid Dynamics (CFD) Codes to Nuclear Reactor Safety Problems' held in Aix-en-Provence, in May 2002. Extension of CFD codes to two-phase flow offers significant potential for the improvement of safety investigations, by giving some access to smaller-scale flow processes which are not explicitly described by present tools. Using such tools as part of a safety demonstration may bring a better understanding of physical situations, more confidence in the results, and an estimation of safety margins. Increasing computer performance allows a more extensive use of 3D modelling of two-phase thermal hydraulics with finer nodalization. However, models are not as mature as in single-phase flow and a lot of work still has to be done on the physical modelling and numerical schemes in such two-phase CFD tools. The Writing Group listed and classified the NRS problems where extension of CFD to two-phase flow may bring real benefit, and classified different modelling approaches, in a first report (Bestion et al., 2006). First ideas were reported about the specification and analysis of needs in terms of validation and verification. It was then suggested to focus further activity on a limited number of NRS issues with a high priority and a reasonable chance of success in a reasonable period of time. The WG3 step 2 was decided with the following objectives: - selection of a limited number of NRS issues having a high priority and for which two-phase CFD has a reasonable chance to be successful in a reasonable period of time; - identification of the remaining gaps in the existing approaches using two-phase CFD for each selected NRS issue; - review of the existing database for validation of two-phase CFD application to the selected NRS problems

  17. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  18. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite-temperature maximum entropy decoding can give slightly better bit-error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
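
    The contrast drawn above between ground-state (maximum-likelihood) and finite-temperature maximum-entropy (marginal) decoding can be illustrated on a toy three-spin Ising cost function by exact enumeration; the fields, couplings and temperature below are made-up values and not the authors' experimental setup.

    ```python
    # Toy comparison of marginal (maximum-entropy) vs ground-state (maximum-likelihood)
    # decoding for a small Ising cost function, by exact enumeration.
    import itertools, math

    h = [0.3, -0.2, 0.1]                    # local fields (illustrative)
    J = {(0, 1): -0.5, (1, 2): 0.4}         # pairwise couplings (illustrative)

    def energy(s):
        return (-sum(h[i] * s[i] for i in range(len(s)))
                - sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

    states = list(itertools.product([-1, 1], repeat=len(h)))
    beta = 2.0                              # inverse temperature
    weights = [math.exp(-beta * energy(s)) for s in states]
    Z = sum(weights)

    # maximum-entropy decoding: sign of each spin's thermal average (marginal)
    marginals = [sum(w * s[i] for s, w in zip(states, weights)) / Z for i in range(len(h))]
    mpm_decoding = [1 if m >= 0 else -1 for m in marginals]

    # maximum-likelihood decoding: the ground state of the cost function
    ml_decoding = min(states, key=energy)
    print(mpm_decoding, ml_decoding)
    ```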

  19. Evolutionary Hybrid Particle Swarm Optimization Algorithm for Solving NP-Hard No-Wait Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Laxmi A. Bewoor

    2017-10-01

    Full Text Available The no-wait flow shop is a flow shop in which the scheduling of jobs is continuous and simultaneous through all machines, without waiting between any consecutive machines. Scheduling a no-wait flow shop requires finding an appropriate sequence of jobs, which in turn reduces total processing time. The classical brute-force search over possible schedules for improving the utilization of resources is impractical, and simpler heuristics may become trapped in local optima; the problem can hence be seen as a typical NP-hard combinatorial optimization problem that requires finding a near-optimal solution with heuristic and metaheuristic techniques. This paper proposes an effective hybrid Particle Swarm Optimization (PSO) metaheuristic algorithm for solving no-wait flow shop scheduling problems with the objective of minimizing the total flow time of jobs. The Proposed Hybrid Particle Swarm Optimization (PHPSO) algorithm uses the random key representation rule to convert the continuous position values of particles into a discrete job permutation (a sketch of this decoding step follows below). The proposed algorithm initializes the population efficiently with the Nawaz-Enscore-Ham (NEH) heuristic technique and uses an evolutionary search guided by the mechanism of PSO, as well as simulated annealing based on a local neighborhood search, to avoid getting stuck in local optima and to provide an appropriate balance of global exploration and local exploitation. Extensive computational experiments are carried out based on Taillard's benchmark suite. Computational results and comparisons with existing metaheuristics show that the PHPSO algorithm outperforms the existing methods in terms of search quality and robustness for the problem considered. The improvement in solution quality is confirmed by statistical tests of significance.
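
    The random-key decoding step mentioned above can be sketched as follows (a minimal illustration of the general rule, not the paper's exact implementation): each particle position component is ranked, and the job whose component is smallest is scheduled first.

    ```python
    # Minimal sketch of random-key decoding for permutation problems.
    import numpy as np

    def random_key_to_permutation(position):
        """Map a continuous PSO position vector to a discrete job permutation."""
        return list(np.argsort(position))

    position = np.array([0.73, -1.2, 0.05, 2.4, 0.3])   # illustrative particle
    print(random_key_to_permutation(position))           # -> [1, 2, 4, 0, 3]
    ```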

  20. PIV and CFD studies on analyzing intragastric flow phenomena induced by peristalsis using a human gastric flow simulator.

    Science.gov (United States)

    Kozu, Hiroyuki; Kobayashi, Isao; Neves, Marcos A; Nakajima, Mitsutoshi; Uemura, Kunihiko; Sato, Seigo; Ichikawa, Sosaku

    2014-08-01

    This study quantitatively analyzed the flow phenomena in model gastric contents induced by peristalsis using a human gastric flow simulator (GFS). Major functions of the GFS include gastric peristalsis simulation by controlled deformation of rubber walls and direct observation of inner flow through parallel transparent windows. For liquid gastric contents (water and starch syrup solutions), retropulsive flow against the direction of peristalsis was observed using both particle image velocimetry (PIV) and computational fluid dynamics (CFD). The maximum flow velocity was obtained in the region occluded by peristalsis. The maximum value was 9 mm s(-1) when the standard value of peristalsis speed in healthy adults (UACW = 2.5 mm s(-1)) was applied. The intragastric flow-field was laminar with the maximum Reynolds number (Re = 125). The viscosity of liquid gastric contents hardly affected the maximum flow velocity in the applied range of this study (1 to 100 mPa s). These PIV results agreed well with the CFD results. The maximum shear rate in the liquid gastric contents was below 20 s(-1) at UACW = 2.5 mm s(-1). We also measured the flow-field in solid-liquid gastric contents containing model solid food particles (plastic beads). The direction of velocity vectors was influenced by the presence of the model solid food particle surface. The maximum flow velocity near the model solid food particles ranged from 8 to 10 mm s(-1) at UACW = 2.5 mm s(-1). The maximum shear rate around the model solid food particles was low, with a value of up to 20 s(-1).

  1. The discrete maximum principle for Galerkin solutions of elliptic problems

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2012-01-01

    Roč. 10, č. 1 (2012), s. 25-43 ISSN 1895-1074 R&D Projects: GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : discrete maximum principle * monotone methods * Galerkin solution Subject RIV: BA - General Mathematics Impact factor: 0.405, year: 2012 http://www.springerlink.com/content/x73624wm23x4wj26

  2. A steady state solution for the ditch drainage problem with special reference to seepage face and unsaturated zone flow contribution: Derivation of a new drainage spacing equation

    Science.gov (United States)

    Yousfi, Ammar; Mechergui, Mohammed

    2016-04-01

    al. (2001). In this work, a novel solution based on a theoretical approach is adapted to incorporate both the seepage face and the unsaturated zone flow contribution for solving ditch-drained aquifer problems. The problem is tackled on the basis of the approximate 2D solution given by Castro-Orgaz et al. (2012). This solution yields the generalized water table profile function, with a suitable boundary condition to be determined, and provides a modified DF theory which permits, as an outcome, the analytical determination of the seepage face. To assess the ability of the developed equation for water-table estimation, the obtained results were compared with numerical solutions of the 2-D problem under different conditions. It is shown that the results are in fair agreement and thus the resulting model can be used for designing ditch drainage systems. With respect to drainage design, the spacings calculated with the newly derived equation are compared with those computed from the DF theory. It is shown that the effect of the unsaturated zone flow contribution is limited to sandy soils, and the calculated maximum increase in drain spacing is about 30%. Keywords: subsurface ditch drainage; unsaturated zone; seepage face; water-table, ditch spacing equation

  3. Coronary ligation reduces maximum sustained swimming speed in Chinook salmon, Oncorhynchus tshawytscha

    DEFF Research Database (Denmark)

    Farrell, A P; Steffensen, J F

    1987-01-01

    The maximum aerobic swimming speed of Chinook salmon (Oncorhynchus tshawytscha) was measured before and after ligation of the coronary artery. Coronary artery ligation prevented blood flow to the compact layer of the ventricular myocardium, which represents 30% of the ventricular mass, and produced...... a statistically significant 35.5% reduction in maximum swimming speed. We conclude that the coronary circulation is important for maximum aerobic swimming and implicit in this conclusion is that maximum cardiac performance is probably necessary for maximum aerobic swimming performance....

  4. Numerical Study on Several Stabilized Finite Element Methods for the Steady Incompressible Flow Problem with Damping

    Directory of Open Access Journals (Sweden)

    Jilian Wu

    2013-01-01

    Full Text Available We discuss several stabilized finite element methods, namely the penalty, regular, multiscale enrichment, and local Gauss integration methods, for the steady incompressible flow problem with damping, based on the lowest equal-order finite element space pair. We then give numerical comparisons between them in three numerical examples, which show that the local Gauss integration method has good stability, efficiency, and accuracy properties and that, on the whole, it is better than the others for the steady incompressible flow problem with damping. However, to our surprise, the regular method spends less CPU time and has better accuracy properties when the Crout solver is used.

  5. maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements which would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements. The weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux thus becomes a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontrjagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself [sr

  6. Exact partial solution to the steady-state, compressible fluid flow problems of jet formation and jet penetration

    International Nuclear Information System (INIS)

    Karpp, R.R.

    1980-10-01

    This report treats analytically the problem of the symmetric impact of two compressible fluid streams. The flow is assumed to be steady, plane, inviscid, and subsonic, and the compressible fluid is assumed to be of the Chaplygin (tangent gas) type. In the analysis, the governing equations are first transformed to the hodograph plane, where an exact, closed-form solution is obtained by standard techniques. The distributions of fluid properties along the plane of symmetry, as well as the shapes of the boundary streamlines, are exactly determined by transforming the solution back to the physical plane. The problem of a compressible fluid jet penetrating into an infinite target of similar material is also exactly solved by considering a limiting case of this solution. This new compressible flow solution reduces to the classical result of incompressible flow theory when the sound speed of the fluid is allowed to approach infinity. Several illustrations of the differences between compressible and incompressible flows of the type considered are presented

  7. A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning

    Directory of Open Access Journals (Sweden)

    Shang Bo-Wen

    2016-01-01

    Full Text Available Continuous prediction is widely used in broad communities, spreading from social to business applications, and machine learning is an important method for this problem. When we use machine learning to make a prediction, we use the data in the training set to fit the model and estimate the distribution of data in the test set. But when we use machine learning for continuous prediction, we get new data as time goes by and use them to predict future data, and a problem may arise: as the size of the data set increases over time, the distribution changes and many garbage data accumulate in the training set. We should remove the garbage data, as they reduce the accuracy of the prediction. The main contribution of this article is to use the new data to detect the timeliness of historical data and remove the garbage data. We build a data flow model to describe how the data flow among the test set, training set, validation set and garbage set, and improve the accuracy of prediction. As the data set changes, the best machine learning model will change as well. We design a hybrid voting algorithm to fit the data set better: it uses seven machine learning models to predict the same problem and uses the validation set to put different weights on the learning models, giving better models more weight (a sketch of this weighting step is given below). Experimental results show that, when the distribution of the data set changes over time, our time flow model can remove most of the garbage data and get a better result than the traditional method that adds all the data to the data set, and that our hybrid voting algorithm has a better prediction result than the average accuracy of the other prediction models.
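
    The weighting step of the hybrid voting algorithm could look roughly like the sketch below (an assumption for illustration, not the authors' exact scheme; the ConstantModel class and the toy data are made up): each model is weighted by its validation accuracy and the final prediction is the class with the largest summed weight.

    ```python
    # Minimal sketch of validation-weighted voting across several models.
    import numpy as np

    class ConstantModel:
        """Toy stand-in for a fitted classifier (always predicts one class)."""
        def __init__(self, label):
            self.label = label
        def predict(self, X):
            return np.full(len(X), self.label)

    def validation_weights(models, X_val, y_val):
        """Weight each model by its accuracy on the validation set."""
        acc = np.array([np.mean(m.predict(X_val) == y_val) for m in models])
        return acc / acc.sum()

    def weighted_vote(models, weights, X):
        """For each sample, return the class with the largest summed weight."""
        preds = np.array([m.predict(X) for m in models])   # (n_models, n_samples)
        classes = np.unique(preds)
        votes = np.array([[w * (row == c) for c in classes]
                          for w, row in zip(weights, preds)]).sum(axis=0)
        return classes[np.argmax(votes, axis=0)]

    X_val, y_val = np.zeros((4, 1)), np.array([1, 1, 0, 1])
    models = [ConstantModel(1), ConstantModel(0)]
    w = validation_weights(models, X_val, y_val)        # -> [0.75, 0.25]
    print(weighted_vote(models, w, np.zeros((3, 1))))   # -> [1 1 1]
    ```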

  8. Applying Graph Theory to Problems in Air Traffic Management

    Science.gov (United States)

    Farrahi, Amir H.; Goldberg, Alan T.; Bagasol, Leonard N.; Jung, Jaewoo

    2017-01-01

    Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
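
    The reachability step mentioned for precision arrival scheduling reduces to a standard graph traversal; the sketch below is a generic illustration (the state names and transitions are hypothetical, not taken from the paper).

    ```python
    # Minimal sketch: breadth-first search reachability on a directed graph.
    from collections import deque

    def reachable(graph, source):
        """Return the set of nodes reachable from `source` (adjacency-list graph)."""
        seen = {source}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    # Hypothetical scheduling-state graph.
    graph = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s3"], "s3": []}
    print(reachable(graph, "s0"))   # -> {'s0', 's1', 's2', 's3'} (set order may vary)
    ```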

  9. Study of heat transfer and flow of nanofluid in permeable channel in the presence of magnetic field

    Directory of Open Access Journals (Sweden)

    M. Fakour

    2015-03-01

    Full Text Available In this paper, laminar fluid flow and heat transfer in a channel with permeable walls in the presence of a transverse magnetic field is investigated. The least squares method (LSM) is used for computing approximate solutions of the nonlinear differential equations governing the problem. We have tried to show the reliability and performance of the present method compared with a numerical method (fourth-order Runge-Kutta) in solving this problem. The influence of four dimensionless numbers, the Hartmann number, Reynolds number, Prandtl number and Eckert number, on the non-dimensional velocity and temperature profiles is considered. The results show that the present analytical method is very close to the numerical method. In general, increasing the Reynolds and Hartmann numbers reduces the nanofluid flow velocity in the channel while the maximum temperature increases, and increasing the Prandtl and Eckert numbers increases the maximum of the dimensionless temperature θ.

  10. Learning Based Approach for Optimal Clustering of Distributed Program's Call Flow Graph

    Science.gov (United States)

    Abofathi, Yousef; Zarei, Bager; Parsa, Saeed

    Optimal clustering of the call flow graph for reaching maximum concurrency in the execution of distributable components is an NP-complete problem. Learning automata (LAs) are search tools which are used for solving many NP-complete problems. In this paper a learning-based algorithm is proposed for optimal clustering of the call flow graph and appropriate distribution of programs at the network level. The algorithm uses the learning feature of LAs to search the state space. It is shown that the speed of reaching a solution increases remarkably when using LAs in the search process, and that this also prevents the algorithm from being trapped in local minima. Experimental results show the superiority of the proposed algorithm over others.

  11. Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion

    Science.gov (United States)

    Poljak, Nikola

    2016-01-01

    The problem of determining the angle θ at which a point mass launched from ground level with a given speed v0 will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of D_max = v[superscript…
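
    For reference, the flat-ground textbook result quoted in the abstract follows from maximising the range over the launch angle (this is the standard ground-level derivation, not the circular-motion variant analysed in the paper):

    ```latex
    % Range of a projectile launched at speed v_0 and angle \theta from level ground:
    R(\theta) = \frac{v_0^2 \sin 2\theta}{g},
    \qquad
    \frac{dR}{d\theta} = \frac{2 v_0^2 \cos 2\theta}{g} = 0
    \;\Longrightarrow\; \theta = \frac{\pi}{4},
    \qquad
    D_{\max} = \frac{v_0^2}{g}.
    ```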

  12. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  13. Post optimization paradigm in maximum 3-satisfiability logic programming

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of searching for the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network for hastening Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post-optimization techniques are investigated, including the Elliot symmetric activation function, the Gaussian activation function, the Wavelet activation function and the Hyperbolic tangent activation function. The performances of these post-optimization techniques in accelerating MAX-3SAT logic programming are discussed in terms of the ratio of maximum satisfied clauses, the Hamming distance and the computation time. Dev-C++ was used as the platform for training, testing and validating our proposed techniques. The results show that the Hyperbolic tangent activation function and the Elliot symmetric activation function can be used in doing MAX-3SAT logic programming.
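
    Two of the activation functions compared above are easy to state explicitly; the sketch below is purely illustrative (the Elliott symmetric function x/(1+|x|) is a cheap, bounded alternative to tanh with the same (-1, 1) range).

    ```python
    # Minimal sketch of the Elliott symmetric and hyperbolic tangent activations.
    import numpy as np

    def elliott_symmetric(x):
        """Elliott symmetric activation: x / (1 + |x|)."""
        return x / (1.0 + np.abs(x))

    def hyperbolic_tangent(x):
        return np.tanh(x)

    x = np.linspace(-4.0, 4.0, 9)
    print(elliott_symmetric(x))
    print(hyperbolic_tangent(x))
    ```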

  14. Maximum discharge rate of liquid-vapor mixtures from vessels

    International Nuclear Information System (INIS)

    Moody, F.J.

    1975-09-01

    A discrepancy exists in theoretical predictions of the two-phase equilibrium discharge rate from pipes attached to vessels. Theory which predicts critical flow data in terms of pipe exit pressure and quality severely overpredicts flow rates in terms of vessel fluid properties. This study shows that the discrepancy is explained by the flow pattern. Due to decompression and flashing as fluid accelerates into the pipe entrance, the maximum discharge rate from a vessel is limited by choking of a homogeneous bubbly mixture. The mixture tends toward a slip flow pattern as it travels through the pipe, finally reaching a different choked condition at the pipe exit

  15. On discrete maximum principles for nonlinear elliptic problems

    Czech Academy of Sciences Publication Activity Database

    Karátson, J.; Korotov, S.; Křížek, Michal

    2007-01-01

    Roč. 76, č. 1 (2007), s. 99-108 ISSN 0378-4754 R&D Projects: GA MŠk 1P05ME749; GA AV ČR IAA1019201 Institutional research plan: CEZ:AV0Z10190503 Keywords : nonlinear elliptic problem * mixed boundary conditions * finite element method Subject RIV: BA - General Mathematics Impact factor: 0.738, year: 2007

  16. The inverse Fourier problem in the case of poor resolution in one given direction: the maximum-entropy solution

    International Nuclear Information System (INIS)

    Papoular, R.J.; Zheludev, A.; Ressouche, E.; Schweizer, J.

    1995-01-01

    When density distributions in crystals are reconstructed from 3D diffraction data, a problem sometimes occurs when the spatial resolution in one given direction is very small compared to that in perpendicular directions. In this case, a 2D projected density is usually reconstructed. For this task, the conventional Fourier inversion method only makes use of those structure factors measured in the projection plane. All the other structure factors contribute zero to the reconstruction of a projected density. On the contrary, the maximum-entropy method uses all the 3D data, to yield 3D-enhanced 2D projected density maps. It is even possible to reconstruct a projection in the extreme case when not one structure factor in the plane of projection is known. In the case of poor resolution along one given direction, a Fourier inversion reconstruction gives very low quality 3D densities 'smeared' in the third dimension. The application of the maximum-entropy procedure reduces the smearing significantly and reasonably well resolved projections along most directions can now be obtained from the MaxEnt 3D density. To illustrate these two ideas, particular examples based on real polarized neutron diffraction data sets are presented. (orig.)

  17. Applications of high-resolution spatial discretization scheme and Jacobian-free Newton–Krylov method in two-phase flow problems

    International Nuclear Information System (INIS)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2015-01-01

    Highlights: • Using a high-resolution spatial scheme in solving two-phase flow problems. • Fully implicit time integration scheme. • Jacobian-free Newton–Krylov method. • Analytical solution for the two-phase water faucet problem. - Abstract: The majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many nuclear thermal–hydraulics applications, it is desirable to use higher-order numerical schemes to reduce numerical errors. High-resolution spatial discretization schemes provide high-order spatial accuracy in smooth regions and capture sharp spatial discontinuities without nonphysical spatial oscillations. In this work, we adapted an existing high-resolution spatial discretization scheme on staggered grids to two-phase flow applications. Fully implicit time integration schemes were also implemented to reduce numerical errors from operator-splitting types of time integration schemes. The resulting nonlinear system has been successfully solved using the Jacobian-free Newton–Krylov (JFNK) method. The high-resolution spatial discretization and high-order fully implicit time integration numerical schemes were tested and numerically verified for several two-phase test problems, including a two-phase advection problem, a two-phase advection with phase appearance/disappearance problem, and the water faucet problem. Numerical results clearly demonstrated the advantages of using such high-resolution spatial and high-order temporal numerical schemes to significantly reduce numerical diffusion and therefore improve accuracy. Our study also demonstrated that the JFNK method is stable and robust in solving two-phase flow problems, even when phase appearance/disappearance exists
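
    The core of the JFNK idea is that the Krylov solver never needs the Jacobian matrix itself, only Jacobian-vector products, which can be approximated by differencing the nonlinear residual. The sketch below is a generic illustration under that assumption (the residual F is a simple stand-in, not the paper's two-phase flow system).

    ```python
    # Minimal sketch: one Newton step with a matrix-free (JFNK-style) GMRES solve.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def F(u):
        # Illustrative nonlinear residual (stand-in for a discretised PDE system).
        return u**3 + 2.0 * u - 1.0

    def newton_step(u, eps=1e-7):
        """Advance u by one Newton step without ever forming the Jacobian."""
        r = F(u)
        n = u.size

        def jv(v):
            # Directional derivative: J(u) v ~ (F(u + eps*v) - F(u)) / eps
            return (F(u + eps * v) - r) / eps

        J = LinearOperator((n, n), matvec=jv)
        du, _ = gmres(J, -r)
        return u + du

    u = np.zeros(4)
    for _ in range(8):
        u = newton_step(u)
    print(u, F(u))   # the residual should now be close to zero
    ```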

  18. On semidefinite programming relaxations of maximum k-section

    NARCIS (Netherlands)

    de Klerk, E.; Pasechnik, D.V.; Sotirov, R.; Dobre, C.

    2012-01-01

    We derive a new semidefinite programming bound for the maximum k-section problem. For k=2 (i.e. for maximum bisection), the new bound is at least as strong as a well-known bound by Poljak and Rendl (SIAM J Optim 5(3):467–487, 1995). For k ≥ 3 the new bound dominates a bound of Karisch and Rendl

  19. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  20. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  1. Finite size scaling analysis on Nagel-Schreckenberg model for traffic flow

    Science.gov (United States)

    Balouchi, Ashkan; Browne, Dana

    2015-03-01

    The traffic flow problem, as a many-particle non-equilibrium system, has caught the interest of physicists for decades. Understanding traffic flow properties, and thus obtaining the ability to control the transition from the free-flow phase to the jammed phase, plays a critical role in the emerging world of self-driven car technology. We have studied phase transitions in one-lane traffic flow through the mean velocity, distributions of car spacing, dynamic susceptibility and jam persistence, as candidates for an order parameter, using the Nagel-Schreckenberg model to simulate traffic flow. The length-dependent transition has been observed for a range of maximum velocities greater than a certain value. Finite size scaling analysis indicates power-law scaling of these quantities at the onset of the jammed phase.
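
    For orientation, the Nagel-Schreckenberg update itself is compact; the sketch below uses the standard four-step rule with illustrative parameter values (not the exact settings of the study).

    ```python
    # Minimal sketch of the Nagel-Schreckenberg cellular automaton on a ring.
    import numpy as np

    rng = np.random.default_rng(0)
    L, N, V_MAX, P_SLOW, STEPS = 100, 20, 5, 0.3, 200   # illustrative parameters

    pos = np.sort(rng.choice(L, size=N, replace=False))
    vel = np.zeros(N, dtype=int)

    for _ in range(STEPS):
        gaps = (np.roll(pos, -1) - pos - 1) % L          # empty cells to the next car
        vel = np.minimum(vel + 1, V_MAX)                 # 1. acceleration
        vel = np.minimum(vel, gaps)                      # 2. braking to avoid collision
        vel = np.where(rng.random(N) < P_SLOW,           # 3. random slowdown
                       np.maximum(vel - 1, 0), vel)
        pos = (pos + vel) % L                            # 4. movement

    print("mean velocity:", vel.mean())                  # an order-parameter candidate
    ```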

  2. The Granular Blasius Problem: High inertial number granular flows

    Science.gov (United States)

    Tsang, Jonathan; Dalziel, Stuart; Vriend, Nathalie

    2017-11-01

    The classical Blasius problem considers the formation of a boundary layer through the change at x = 0 from a free-slip to a no-slip boundary beneath an otherwise steady uniform flow. Discrete particle model (DPM) simulations of granular gravity currents show that a similar phenomenon exists for a steady flow over a uniformly sloped surface that is smooth upstream (allowing slip) but rough downstream (imposing a no-slip condition). The boundary layer is a region of high shear rate and therefore high inertial number I; its dynamics are governed by the asymptotic behaviour of the granular rheology as I → ∞. The μ(I) rheology asserts that dμ/dI = O(1/I²) as I → ∞, but current experimental evidence is insufficient to confirm this. We show that 'generalised μ(I) rheologies', with different behaviours as I → ∞, all permit the formation of a boundary layer. We give approximate solutions for the velocity profile under each rheology. The change in boundary condition considered here mimics more complex topography in which shear stress increases in the streamwise direction (e.g. a curved slope). Such a system would be of interest in avalanche modelling. EPSRC studentship (Tsang) and Royal Society Dorothy Hodgkin Fellowship (Vriend).

  3. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.

  4. Two problems in multiphase biological flows: Blood flow and particulate transport in microvascular network, and pseudopod-driven motility of amoeboid cells

    Science.gov (United States)

    Bagchi, Prosenjit

    2016-11-01

    In this talk, two problems in multiphase biological flows will be discussed. The first is the direct numerical simulation of whole blood and drug particulates in microvascular networks. Blood in microcirculation behaves as a dense suspension of heterogeneous cells. The erythrocytes are extremely deformable, while inactivated platelets and leukocytes are nearly rigid. A significant progress has been made in recent years in modeling blood as a dense cellular suspension. However, many of these studies considered the blood flow in simple geometry, e.g., straight tubes of uniform cross-section. In contrast, the architecture of a microvascular network is very complex with bifurcating, merging and winding vessels, posing a further challenge to numerical modeling. We have developed an immersed-boundary-based method that can consider blood cell flow in physiologically realistic and complex microvascular network. In addition to addressing many physiological issues related to network hemodynamics, this tool can be used to optimize the transport properties of drug particulates for effective organ-specific delivery. Our second problem is pseudopod-driven motility as often observed in metastatic cancer cells and other amoeboid cells. We have developed a multiscale hydrodynamic model to simulate such motility. We study the effect of cell stiffness on motility as the former has been considered as a biomarker for metastatic potential. Funded by the National Science Foundation.

  5. New Mathematical Model and Algorithm for Economic Lot Scheduling Problem in Flexible Flow Shop

    Directory of Open Access Journals (Sweden)

    H. Zohali

    2018-03-01

    Full Text Available This paper addresses the lot sizing and scheduling problem for a number of products in a flexible flow shop with identical parallel machines. The production stages are in series and separated by finite intermediate buffers. The objective is to minimize the sum of setup and inventory holding costs per unit of time. The available mathematical model of this problem in the literature suffers from huge complexity in terms of size and computation. In this paper, a new mixed integer linear program is developed to deal with the huge dimensions of the problem. Also, a new metaheuristic algorithm is developed for the problem. The results of the numerical experiments show a significant advantage of the proposed model and algorithm compared with the available models and algorithms in the literature.

  6. Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Ge Cheng

    2016-12-01

    Full Text Available Predicting the outcome of National Basketball Association (NBA matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that the model is able to predict the winning team with 74.4% accuracy, outperforming other classical machine learning algorithms that could only afford a maximum prediction accuracy of 70.6% in the experiments that we performed.

  7. Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle

    OpenAIRE

    Ge Cheng; Zhenyu Zhang; Moses Ntanda Kyebambe; Nasser Kimbugwe

    2016-01-01

    Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that...

  8. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history.

    Science.gov (United States)

    Cherry, Joshua L

    2017-02-23

    Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data. The algorithm is applied to bacterial data sets containing up to nearly 2000 genomes with several thousand variable nucleotide sites. Run times are several seconds or less. Computational experiments show that maximum compatibility is less sensitive than maximum parsimony to the inclusion of nucleotide data that, though derived from actual sequence reads, has been identified as likely to be misleading. Maximum compatibility is a useful tool for certain phylogenetic problems, such as inferring the relationships among closely-related bacteria from whole-genome sequence data. The algorithm presented here rapidly solves fairly large problems of this type, and provides robustness against misleading characters that can pollute large-scale sequencing data.
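
    The pairwise compatibility relation that underlies the maximum-clique formulation can be illustrated with the classical four-gamete test for binary characters (a generic sketch, not the paper's algorithm, which also handles ambiguous states): two sites are compatible, i.e. explainable on one tree without repeated mutation, exactly when they do not display all four combinations 00, 01, 10, 11.

    ```python
    # Minimal sketch: four-gamete compatibility test for binary characters.
    def compatible(site_a, site_b):
        """True if two equal-length 0/1 character vectors pass the four-gamete test."""
        combos = {(a, b) for a, b in zip(site_a, site_b)
                  if a is not None and b is not None}   # None marks missing data
        return len(combos) < 4

    site1 = [0, 0, 1, 1, 1]
    site2 = [0, 1, 1, 1, 0]
    site3 = [0, 0, 0, 1, 1]
    print(compatible(site1, site2))   # False: all four gametes present
    print(compatible(site1, site3))   # True
    ```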

  9. Maximum-entropy networks pattern detection, network reconstruction and graph combinatorics

    CERN Document Server

    Squartini, Tiziano

    2017-01-01

    This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications. Its original contribution is the reformulation of many seemingly different problems in the study of both real networks and graph theory within the unified framework of maximum entropy. Particular emphasis is put on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties.  After a first introductory chapter explaining the motivation, focus, aim and message of the book, chapter 2 introduces the formal construction of maximum-entropy ensembles of graphs with local topological constraints. Chapter 3 focuses on the problem of pattern detection in real networks and provides a powerful way to disentangle nontrivial higher-order structural features from those that can be traced back to simpler local constraints. Chapter 4 focuses on the problem o...

  10. A multi-objective optimization problem for multi-state series-parallel systems: A two-stage flow-shop manufacturing system

    International Nuclear Information System (INIS)

    Azadeh, A.; Maleki Shoja, B.; Ghanei, S.; Sheikhalishahi, M.

    2015-01-01

    This research investigates a redundancy-scheduling optimization problem for a multi-state series parallel system. The system is a flow shop manufacturing system with multi-state machines. Each manufacturing machine may have different performance rates including perfect performance, decreased performance and complete failure. Moreover, warm standby redundancy is considered for the redundancy allocation problem. Three objectives are considered for the problem: (1) minimizing system purchasing cost, (2) minimizing makespan, and (3) maximizing system reliability. Universal generating function is employed to evaluate system performance and overall reliability of the system. Since the problem is in the NP-hard class of combinatorial problems, genetic algorithm (GA) is used to find optimal/near optimal solutions. Different test problems are generated to evaluate the effectiveness and efficiency of proposed approach and compared to simulated annealing optimization method. The results show the proposed approach is capable of finding optimal/near optimal solution within a very reasonable time. - Highlights: • A redundancy-scheduling optimization problem for a multi-state series parallel system. • A flow shop with multi-state machines and warm standby redundancy. • Objectives are to optimize system purchasing cost, makespan and reliability. • Different test problems are generated and evaluated by a unique genetic algorithm. • It locates optimal/near optimal solution within a very reasonable time

  11. Possibilities of mathematical models in solving flow problems in environmental protection and water architecture

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    The booklet presents the full text of 13 contributions to a Colloquium held at Karlsruhe in Sept. 1979. The main topics of the papers are the evaluation of mathematical models to solve flow problems in tidal waters, seas, rivers, groundwater and the Earth's atmosphere. See further hints under relevant topics.

  12. Flow over an obstruction with the generation of nonlinear waves on the free surface: Limiting regimes

    International Nuclear Information System (INIS)

    Maklakov, D.V.

    1995-01-01

    A numerical-analytic method of calculating a subcritical flow over an obstruction is proposed. This method is based on the identification of the asymptotics of the behavior of a wave train in unknown functions. The method makes it possible to calculate both steep and long waves. The effectiveness of the method is demonstrated for the problem of flow over a vortex. The concept of the limiting flow regime as a regime with the maximum value of the perturbation parameter for which steady flow still persists is introduced. Various types of the limiting regimes obtained in the calculations are analyzed

  13. 19 mm sized bileaflet valve prostheses' flow field investigated by bidimensional laser Doppler anemometry (part II: maximum turbulent shear stresses)

    Science.gov (United States)

    Barbaro, V; Grigioni, M; Daniele, C; D'Avenio, G; Boccanera, G

    1997-11-01

    The investigation of the flow field generated by cardiac valve prostheses is a necessary task to gain knowledge on the possible relationship between turbulence-derived stresses and the hemolytic and thrombogenic complications in patients after valve replacement. The study of turbulent flows downstream of cardiac prostheses in the literature especially concerns large-sized prostheses with a variable flow regime from very low up to 6 L/min. The Food and Drug Administration draft guidance requires the study of the minimum prosthetic size at a high cardiac output to reach the maximum Reynolds number conditions. Within the framework of a national research project regarding the characterization of cardiovascular endoprostheses, an in-depth study of turbulence generated downstream of bileaflet cardiac valves is currently under way at the Laboratory of Biomedical Engineering of the Istituto Superiore di Sanita. Four models of 19 mm bileaflet valve prostheses were used: St Jude Medical HP, Edwards Tekna, Sorin Bicarbon, and CarboMedics. The prostheses were selected for the nominal Tissue Annulus Diameter as reported by manufacturers, without any assessment of the valve sizing method, and were mounted in the aortic position. The aortic geometry was scaled for 19 mm prostheses using angiographic data. The turbulence-derived shear stresses were investigated very close to the valve (0.35 D0), using a bidimensional Laser Doppler anemometry system and applying the Principal Stress Analysis. Results concern typical turbulence quantities during a 50 ms window at peak flow in the systolic phase. Conclusions are drawn regarding the turbulence associated with valve design features, as well as the possible damage to blood constituents.

  14. Parallel Processor for 3D Recovery from Optical Flow

    Directory of Open Access Journals (Sweden)

    Jose Hugo Barron-Zambrano

    2009-01-01

    Full Text Available 3D recovery from motion has received major attention in computer vision systems in recent years. The main problem lies in the number of operations and memory accesses to be performed by the majority of the existing techniques when translated to hardware or software implementations. This paper proposes a parallel processor for 3D recovery from optical flow. Its main features are the maximum reuse of data and the low number of clock cycles needed to calculate the optical flow, along with the precision with which 3D recovery is achieved. The results of the proposed architecture as well as those from processor synthesis are presented.

  15. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  16. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2014-09-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a definition of delay for IDNC allows a more equitable distribution of the delays between the different receivers and thus a better Quality of Service (QoS). In order to solve this problem, we first derive the expressions for the probability distributions of maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the max decoding delay experienced when applying the policies to minimize the sum decoding delay and our policy to reduce the max decoding delay. Simulation results show that our policy gives a good agreement among all the delay aspects in all situations and outperforms the sum decoding delay policy in effectively minimizing the sum decoding delay when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.
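
    One simple way the greedy packet selection over the IDNC graph could look is sketched below (an assumption for illustration only; vertex names, weights and adjacencies are made up, and the actual policy in the paper uses the derived delay-increment distributions as weights).

    ```python
    # Minimal sketch: greedy heuristic for a maximum-weight clique.
    def greedy_max_weight_clique(weights, adjacency):
        """weights: {vertex: weight}, adjacency: {vertex: set of neighbours}."""
        clique = []
        for v in sorted(weights, key=weights.get, reverse=True):
            if all(v in adjacency[u] for u in clique):
                clique.append(v)
        return clique

    weights = {"p1": 3.0, "p2": 2.5, "p3": 2.0, "p4": 1.0}
    adjacency = {"p1": {"p3", "p4"}, "p2": {"p3"},
                 "p3": {"p1", "p2"}, "p4": {"p1"}}
    print(greedy_max_weight_clique(weights, adjacency))   # -> ['p1', 'p3']
    ```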

  17. Some applications of the moving finite element method to fluid flow and related problems

    International Nuclear Information System (INIS)

    Berry, R.A.; Williamson, R.L.

    1983-01-01

    The Moving Finite Element (MFE) method is applied to one-dimensional, nonlinear wave-type partial differential equations which are characteristic of fluid dynamics and related flow phenomena problems. These equation systems tend to be difficult to solve because their transient solutions exhibit a spatial stiffness property, i.e., they represent physical phenomena of widely disparate length scales which must be resolved simultaneously. With the MFE method the node points automatically move (in theory) to optimal locations, giving a much better approximation than can be obtained with fixed-mesh methods (with a reasonable number of nodes) and with significantly reduced artificial viscosity or diffusion content. Three applications are considered. In order of increasing complexity they are: (1) a thermal quench problem, (2) an underwater explosion problem, and (3) a gas dynamics shock tube problem. The results are briefly shown

  18. MHD and heat transfer benchmark problems for liquid metal flow in rectangular ducts

    International Nuclear Information System (INIS)

    Sidorenkov, S.I.; Hua, T.Q.; Araseki, H.

    1994-01-01

    Liquid metal cooling systems of a self-cooled blanket in a tokamak reactor will likely include channels of rectangular cross section where liquid metal is circulated in the presence of strong magnetic fields. MHD pressure drop, velocity distribution and heat transfer characteristics are important issues in the engineering design considerations. Computer codes for the reliable solution of three-dimensional MHD flow problems are needed for fusion relevant conditions. Argonne National Laboratory and The Efremov Institute have jointly defined several benchmark problems for code validation. The problems, described in this paper, are based on two series of rectangular duct experiments conducted at ANL; one of the series is a joint ANL/Efremov experiment. The geometries consist of variation of aspect ratio and wall thickness (thus wall conductance ratio). The transverse magnetic fields are uniform and nonuniform in the axial direction

  19. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time.

  20. Stable Galerkin versus equal-order Galerkin least-squares elements for the stokes flow problem

    International Nuclear Information System (INIS)

    Franca, L.P.; Frey, S.L.; Sampaio, R.

    1989-11-01

    Numerical experiments are performed for the stokes flow problem employing a stable Galerkin method and a Galerkin/Least-squares method with equal-order elements. Error estimates for the methods tested herein are reviewed. The numerical results presented attest the good stability properties of all methods examined herein. (A.C.A.S.) [pt

  1. Direct maximum parsimony phylogeny reconstruction from genotype data

    OpenAIRE

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-01-01

    Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of ge...

  2. Asymptotic Analysis of SPTA-Based Algorithms for No-Wait Flow Shop Scheduling Problem with Release Dates

    Directory of Open Access Journals (Sweden)

    Tao Ren

    2014-01-01

    Full Text Available We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.

  3. Asymptotic analysis of SPTA-based algorithms for no-wait flow shop scheduling problem with release dates.

    Science.gov (United States)

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

    We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.
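
    The shortest-processing-time ordering that SPTA-based rules build on can be sketched as follows (an assumption for illustration; the papers' exact SPTA variants additionally handle release dates and machine availability, which this toy ignores).

    ```python
    # Minimal sketch: order jobs by total processing time, shortest first.
    def spt_order(processing_times):
        """processing_times[j] is the list of machine times for job j."""
        return sorted(range(len(processing_times)),
                      key=lambda j: sum(processing_times[j]))

    jobs = [[4, 2, 5], [1, 1, 2], [3, 3, 3]]   # illustrative 3-job, 3-machine data
    print(spt_order(jobs))                      # -> [1, 2, 0]
    ```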

  4. Maximum-likelihood estimation of the hyperbolic parameters from grouped observations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1988-01-01

    a least-squares problem. The second procedure Hypesti first approaches the maximum-likelihood estimate by iterating in the profile-log likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration, using all four parameters...

  5. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  6. Prediction of flow- induced dynamic stress in an axial pump impeller using FEM

    International Nuclear Information System (INIS)

    Gao, J Y; Hou, Y S; Xi, S Z; Cai, Z H; Yao, P P; Shi, H L

    2013-01-01

    Axial pumps play an important role in water supply and flood control projects. Along with growing requirements for high reliability and large capacity, the dynamic stress of axial pumps has become a key problem. Unsteady flow is a significant cause of structural dynamic stress in a pump. This paper reports on a flow-induced dynamic stress simulation in an axial pump impeller at three flow conditions using an FEM code. The pressure pulsation obtained from a flow simulation using a CFD code was set as the force boundary condition. The results show that the maximum stress of the impeller appeared at the joint between the blade and the root flange near the trailing edge or near the leading edge. The dynamic stress in these two zones was investigated under three flow conditions (0.8Qd, 1.0Qd, 1.1Qd) in the time domain and the frequency domain. The stress frequencies at the zones of maximum stress are 22.9 Hz and 37.5 Hz as the fundamental frequencies, plus their harmonics. The fundamental frequencies are nearly equal to the vane passing frequency (22.9 Hz) and three times the blade passing frequency (37.5 Hz). The first dominant frequency at the zones of maximum stress is equal to the vane passing frequency due to the rotor-stator interaction between the vane and the blade. This study would be helpful for axial pumps in reducing stress and improving structural design and fatigue life

  7. Use of a genetic algorithm to solve two-fluid flow problems on an NCUBE multiprocessor computer

    International Nuclear Information System (INIS)

    Pryor, R.J.; Cline, D.D.

    1992-01-01

    A method of solving the two-phase fluid flow equations using a genetic algorithm on a NCUBE multiprocessor computer is presented. The topics discussed are the two-phase flow equations, the genetic representation of the unknowns, the fitness function, the genetic operators, and the implementation of the algorithm on the NCUBE computer. The efficiency of the implementation is investigated using a pipe blowdown problem. Effects of varying the genetic parameters and the number of processors are presented

  8. Use of a genetic algorithm to solve two-fluid flow problems on an NCUBE multiprocessor computer

    International Nuclear Information System (INIS)

    Pryor, R.J.; Cline, D.D.

    1993-01-01

    A method of solving the two-phase fluid flow equations using a genetic algorithm on an NCUBE multiprocessor computer is presented. The topics discussed are the two-phase flow equations, the genetic representation of the unknowns, the fitness function, the genetic operators, and the implementation of the algorithm on the NCUBE computer. The efficiency of the implementation is investigated using a pipe blowdown problem. Effects of varying the genetic parameters and the number of processors are presented. (orig.)

  9. Iterative methods for the detection of Hopf bifurcations in finite element discretisation of incompressible flow problems

    International Nuclear Information System (INIS)

    Cliffe, K.A.; Garratt, T.J.; Spence, A.

    1992-03-01

    This paper is concerned with the problem of computing a small number of eigenvalues of large sparse generalised eigenvalue problems arising from mixed finite element discretisations of time dependent equations modelling viscous incompressible flow. The eigenvalues of importance are those with smallest real part and can be used in a scheme to determine the stability of steady state solutions and to detect Hopf bifurcations. We introduce a modified Cayley transform of the generalised eigenvalue problem which overcomes a drawback of the usual Cayley transform applied to such problems. Standard iterative methods are then applied to the transformed eigenvalue problem to compute approximations to the eigenvalue of smallest real part. Numerical experiments are performed using a model of double diffusive convection. (author)

  10. The Inhibiting Bisection Problem

    Energy Technology Data Exchange (ETDEWEB)

    Pinar, Ali; Fogel, Yonatan; Lesieutre, Bernard

    2006-12-18

    Given a graph where each vertex is assigned a generation or consumption volume, we try to bisect the graph so that each part has a significant generation/consumption mismatch, and the cutsize of the bisection is small. Our motivation comes from the vulnerability analysis of distribution systems such as the electric power system. We show that the constrained version of the problem, where we place either the cutsize or the mismatch significance as a constraint and optimize the other, is NP-complete, and provide an integer programming formulation. We also propose an alternative relaxed formulation, which can trade-off between the two objectives, and show that the alternative formulation of the problem can be solved in polynomial time by a maximum flow solver. Our experiments with benchmark electric power systems validate the effectiveness of our methods.
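
    Once a formulation reduces to maximum flow, an off-the-shelf solver settles it in polynomial time; the sketch below is a generic illustration of that final step (node names and capacities are invented, and this is not the paper's actual reduction).

    ```python
    # Minimal sketch: solving a small maximum-flow instance with networkx.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("s", "a", capacity=3.0)
    G.add_edge("s", "b", capacity=2.0)
    G.add_edge("a", "b", capacity=1.0)
    G.add_edge("a", "t", capacity=2.0)
    G.add_edge("b", "t", capacity=3.0)

    flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
    print(flow_value)       # -> 5.0
    print(flow_dict["s"])   # per-edge flow leaving the source
    ```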

  11. RISKS INDUCED BY MAXIMUM FLOW WITH 1% PROBABILITY AND THEIR EFFECT ON SEVERAL SPECIES AND HABITATS IN PRICOP-HUTA-CERTEZE AND UPPER TISA NATURA 2000 PROTECTED AREAS

    Directory of Open Access Journals (Sweden)

    GH. ŞERBAN

    2016-03-01

    Full Text Available The purpose of the paper is to identify and locate some species related to habitats from the Pricop-Huta-Certeze and Upper Tisa Natura 2000 Protected Areas (PHCTS) and to determine whether they are vulnerable to risks induced by maximum flow phases. The first chapter gives a few references about the morphometric parameters of the hydrographic networks within the study area, as well as some references related to the frequency of maximum flow phases. After the second chapter, where the methods and databases used in the study are described, we proceed to the identification of the areas that are covered by water during floods, as well as to determining the risk level related to these areas. The GIS modelling reveals that the high flood risk has a small extent for the natural environment of the protected areas and a greater extent for the anthropic environment. The last chapter refers to several species of fish and batrachians, as well as to the amphibious mammals identified in the study area that are vulnerable to floods (high turbidity, reduction of the dissolved oxygen quantity, habitat destruction, etc.).

  12. On the modelling of compressible inviscid flow problems using AUSM schemes

    Directory of Open Access Journals (Sweden)

    Hajžman M.

    2007-11-01

    Full Text Available During the last decades, upwind schemes have become a popular method in the field of computational fluid dynamics. Although they are only first order accurate, AUSM (Advection Upstream Splitting Method) schemes proved to be well suited for the modelling of compressible flows due to their robustness and ability to capture shock discontinuities. In this paper, we review the composition of the AUSM flux-vector splitting scheme and its improved version, denoted AUSM+, proposed by Liou, for the solution of the Euler equations. Mach number splitting functions operating with values from adjacent cells are used to determine numerical convective fluxes, and pressure splitting is used for the evaluation of numerical pressure fluxes. Both versions of the AUSM scheme are applied to test problems such as the one-dimensional shock tube problem and the three-dimensional GAMM channel. Features of the schemes are discussed in comparison with some explicit central schemes of first order accuracy (Lax-Friedrichs) and of second order accuracy (MacCormack).
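    For readers unfamiliar with AUSM, the following sketch implements the standard first-order Liou-Steffen Mach-number and pressure splitting functions and assembles a single interface flux for the 1-D Euler equations. The left/right states and this scalar, single-interface form are illustrative assumptions and not the authors' implementation.

    ```python
    import numpy as np

    def mach_split(M):
        """AUSM split Mach numbers M+ and M- (first-order Liou-Steffen splitting)."""
        Mp = np.where(np.abs(M) <= 1.0, 0.25 * (M + 1.0) ** 2, 0.5 * (M + np.abs(M)))
        Mm = np.where(np.abs(M) <= 1.0, -0.25 * (M - 1.0) ** 2, 0.5 * (M - np.abs(M)))
        return Mp, Mm

    def pressure_split(M, p):
        """AUSM split pressures p+ and p-."""
        pp = np.where(np.abs(M) <= 1.0, 0.25 * p * (M + 1.0) ** 2 * (2.0 - M),
                      0.5 * p * (1.0 + np.sign(M)))
        pm = np.where(np.abs(M) <= 1.0, 0.25 * p * (M - 1.0) ** 2 * (2.0 + M),
                      0.5 * p * (1.0 - np.sign(M)))
        return pp, pm

    def ausm_interface_flux(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4):
        """Convective + pressure flux at one cell interface for the 1-D Euler equations."""
        aL, aR = np.sqrt(gamma * pL / rhoL), np.sqrt(gamma * pR / rhoR)
        HL = gamma / (gamma - 1.0) * pL / rhoL + 0.5 * uL ** 2   # total enthalpy
        HR = gamma / (gamma - 1.0) * pR / rhoR + 0.5 * uR ** 2
        MpL, _ = mach_split(uL / aL)
        _, MmR = mach_split(uR / aR)
        m_half = MpL + MmR                        # interface Mach number
        ppL, _ = pressure_split(uL / aL, pL)
        _, pmR = pressure_split(uR / aR, pR)
        # upwinded convective vector Phi = (rho*a, rho*a*u, rho*a*H)
        PhiL = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL])
        PhiR = np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])
        Phi = PhiL if m_half >= 0.0 else PhiR
        return m_half * Phi + np.array([0.0, ppL + pmR, 0.0])

    # Sod-like left/right states, purely for illustration
    print(ausm_interface_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))
    ```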

  13. Optimization of Software Work Scheduling at a Software House with the Flow-Shop Problem Using Artificial Bee Colony

    Directory of Open Access Journals (Sweden)

    Muhammad Fhadli

    2016-12-01

    This research proposes an implementation of the software work scheduling process at a software house as a Flow-Shop Problem (FSP) solved with the Artificial Bee Colony (ABC) algorithm. In the FSP, a solution is required that completes a set of jobs/tasks with the overall cost at a minimum. A constraint that should be noted in this research is the uncertain completion time of the jobs. We present a solution in the form of a project execution sequence whose overall completion time is minimal. Experiments were performed with three attempts for each experimental condition, varying the iteration parameter and the limit parameter. From these experiments, we conclude that the algorithm described in this paper can reduce project execution time when the total number of iterations and the total colony size are increased. Keywords: optimization, flow-shop problem, artificial bee colony, swarm intelligence, meta-heuristic.
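    The objective minimised by such an ABC search is the makespan of a job permutation. The sketch below only evaluates that objective for a permutation flow shop with deterministic processing times; the data are invented, and the uncertainty of completion times considered in the paper is not modelled.

    ```python
    def makespan(order, proc_times):
        """Completion time of the last job on the last machine for a given job order.
        proc_times[j][m] = processing time of job j on machine m."""
        n_machines = len(proc_times[0])
        finish = [0.0] * n_machines            # running completion time on each machine
        for j in order:
            finish[0] += proc_times[j][0]
            for m in range(1, n_machines):
                finish[m] = max(finish[m], finish[m - 1]) + proc_times[j][m]
        return finish[-1]

    # 4 jobs x 3 machines, illustrative data only
    proc = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [2, 3, 2]]
    print(makespan([0, 1, 2, 3], proc))
    print(makespan([3, 0, 2, 1], proc))        # a candidate sequence the ABC might propose
    ```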

  14. Complementarity Constraints on Component-based Multiphase Flow Problems: Should They Be Implemented Locally or Globally?

    Science.gov (United States)

    Shao, H.; Huang, Y.; Kolditz, O.

    2015-12-01

    Multiphase flow problems are numerically difficult to solve, as they often contain nonlinear phase transition phenomena. A conventional technique is to introduce complementarity constraints in which fluid properties, such as liquid saturations, are confined within a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities, which can then be handled numerically by optimization algorithms. In this work, two different approaches utilizing the complementarity constraints based on the persistent primary variables formulation [4] are implemented and investigated. The first approach, proposed by Marchand et al. [1], uses "local complementarity constraints", i.e. coupling the constraints with the local constitutive equations. The second approach [2], [3], namely the "global complementarity constraints", applies the constraints globally together with the mass conservation equation. We will discuss how these two approaches are applied to solve the non-isothermal compositional multiphase flow problem with phase change phenomena. Several benchmarks will be presented to investigate the overall numerical performance of the different approaches, and the advantages and disadvantages of the different models will also be summarized. References: [1] E. Marchand, T. Mueller and P. Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model, Computational Geosciences 17(2): 431-442, (2013). [2] A. Lauser, C. Hager, R. Helmig, B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Water Resour., 34, (2011), 957-966. [3] J. Jaffré and A. Sboui. Henry's Law and Gas Phase Disappearance. Transp. Porous Media. 82, (2010), 521-526. [4] A. Bourgeat, M. Jurak and F. Smaï. Two-phase partially miscible flow and transport modeling in

  15. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue, rather there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
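    As a reminder of the classical method reviewed in the introduction, the sketch below fits a maximum entropy density proportional to exp(-sum_k lambda_k x^k) to a set of target moments by minimising the dual objective log Z(lambda) + lambda.mu on a bounded grid. The grid, the moment orders and the target values are arbitrary assumptions, and the Bayesian extension described in the abstract is not implemented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    x = np.linspace(-5.0, 5.0, 2001)                 # bounded support grid (assumed)
    phi = np.vstack([x, x**2, x**3, x**4])           # moment functions x^1..x^4
    mu = np.array([0.0, 1.0, 0.0, 2.8])              # target moments (illustrative)

    def dual(lam):
        """Dual objective log Z(lam) + lam.mu with Z = integral exp(-lam.phi(x)) dx."""
        logw = -(lam @ phi)
        c = logw.max()                               # numerical stabilisation
        return c + np.log(np.trapz(np.exp(logw - c), x)) + lam @ mu

    def grad(lam):
        logw = -(lam @ phi)
        w = np.exp(logw - logw.max())
        p = w / np.trapz(w, x)
        return mu - np.trapz(phi * p, x, axis=1)     # mu_k - E_p[x^k]

    lam = minimize(dual, np.zeros(4), jac=grad, method="BFGS").x
    p = np.exp(-(lam @ phi)); p /= np.trapz(p, x)    # the maximum entropy density
    print("fitted moments:", np.trapz(phi * p, x, axis=1))
    ```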

  16. Physics of flow in weighted complex networks

    Science.gov (United States)

    Wu, Zhenhua

    This thesis uses concepts from statistical physics to understand the physics of flow in weighted complex networks. The traditional model for random networks is the Erdoḧs-Renyi (ER.) network, where a network of N nodes is created by connecting each of the N(N - 1)/2 pairs of nodes with a probability p. The degree distribution, which is the probability distribution of the number of links per node, is a Poisson distribution. Recent studies of the topology in many networks such as the Internet and the world-wide airport network (WAN) reveal a power law degree distribution, known as a scale-free (SF) distribution. To yield a better description of network dynamics, we study weighted networks, where each link or node is given a number. One asks how the weights affect the static and the dynamic properties of the network. In this thesis, two important dynamic problems are studied: the current flow problem, described by Kirchhoff's laws, and the maximum flow problem, which maximizes the flow between two nodes. Percolation theory is applied to these studies of the dynamics in complex networks. We find that the current flow in disordered media belongs to the same universality class as the optimal path. In a randomly weighted network, we identify the infinite incipient percolation cluster as the "superhighway", which contains most of the traffic in a network. We propose an efficient strategy to improve significantly the global transport by improving the superhighways, which comprise a small fraction of the network. We also propose a network model with correlated weights to describe weighted networks such as the WAN. Our model agrees with WAN data, and provides insight into the advantages of correlated weights in networks. Lastly, the upper critical dimension is evaluated using two different numerical methods, and the result is consistent with the theoretical prediction.
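    The two dynamic problems mentioned above can both be stated very compactly. The sketch below illustrates the first one, the current-flow (Kirchhoff) problem, by solving the weighted graph Laplacian for node potentials with a unit current injected at a source and extracted at a sink; the toy network and conductances are invented and are not related to the WAN data studied in the thesis.

    ```python
    import numpy as np

    # weighted undirected toy network: (u, v, conductance)
    edges = [(0, 1, 1.0), (0, 2, 2.0), (1, 2, 1.0), (1, 3, 3.0), (2, 3, 1.0)]
    n, s, t = 4, 0, 3

    # graph Laplacian: L[i, i] = sum of incident conductances, L[i, j] = -conductance
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w

    b = np.zeros(n); b[s], b[t] = 1.0, -1.0          # inject 1 unit of current at s, extract at t
    # ground node t (fix its potential to zero) so the reduced system is nonsingular
    keep = [i for i in range(n) if i != t]
    phi = np.zeros(n)
    phi[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])

    for u, v, w in edges:
        print(f"current {u}->{v}: {w * (phi[u] - phi[v]):+.3f}")
    print("effective resistance s-t:", phi[s] - phi[t])
    ```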

  17. Maximum Entropy Method in Moessbauer Spectroscopy - a Problem of Magnetic Texture

    International Nuclear Information System (INIS)

    Satula, D.; Szymanski, K.; Dobrzynski, L.

    2011-01-01

    A reconstruction of the three dimensional distribution of the hyperfine magnetic field, isomer shift and texture parameter z from the Moessbauer spectra by the maximum entropy method is presented. The method was tested on the simulated spectrum consisting of two Gaussian hyperfine field distributions with different values of the texture parameters. It is shown that proper prior has to be chosen in order to arrive at the physically meaningful results. (authors)

  18. On a boundary layer problem related to the gas flow in shales

    KAUST Repository

    Barenblatt, G. I.

    2013-01-16

    The development of gas deposits in shales has become a significant energy resource. Despite the already active exploitation of such deposits, a mathematical model for gas flow in shales does not exist. Such a model is crucial for optimizing the technology of gas recovery. In the present article, a boundary layer problem is formulated and investigated with respect to gas recovery from porous low-permeability inclusions in shales, which are the basic source of gas. Milton Van Dyke was a great master in the field of boundary layer problems. Dedicating this work to his memory, we want to express our belief that Van Dyke\\'s profound ideas and fundamental book Perturbation Methods in Fluid Mechanics (Parabolic Press, 1975) will live on-also in fields very far from the subjects for which they were originally invented. © 2013 US Government.

  19. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, that experimentally would be equivalent to 90% of the neuron population active within time-windows of few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundreds or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
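    The Glauber dynamics referred to above can be sketched in a few lines. The code below runs single-unit Glauber updates for a standard pairwise maximum-entropy model with binary units and records the population-averaged activity whose histogram would reveal the bimodality discussed in the paper. The parameters are invented, and the modified model with a non-uniform reference measure proposed by the authors is not implemented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 20
    h = rng.normal(-1.0, 0.3, size=N)                  # biases
    J = rng.normal(0.0, 0.1, size=(N, N))
    J = (J + J.T) / 2.0
    np.fill_diagonal(J, 0.0)                           # symmetric couplings, no self-coupling

    def glauber_samples(h, J, n_sweeps=2000, burn_in=500):
        """Single-unit Glauber dynamics for P(s) ~ exp(h.s + 0.5 s.J.s), s in {0,1}^N."""
        s = rng.integers(0, 2, size=N).astype(float)
        activity = []
        for sweep in range(n_sweeps):
            for i in rng.permutation(N):
                field = h[i] + J[i] @ s
                p_on = 1.0 / (1.0 + np.exp(-field))    # conditional P(s_i = 1 | rest)
                s[i] = float(rng.random() < p_on)
            if sweep >= burn_in:
                activity.append(s.mean())              # population-averaged activity
        return np.array(activity)

    act = glauber_samples(h, J)
    # a bimodal histogram of `act` would be the signature of the problem described above
    print("mean population activity:", act.mean())
    ```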

  20. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    Full Text Available We study reaction-diffusion equations with a general reaction function $f$ on one-dimensional lattices with continuous or discrete time, $u_x'$ (or $\Delta_t u_x$) $= k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x)$, $x \in \mathbb{Z}$. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
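    A minimal numerical illustration of the lattice equation, assuming an explicit Euler discretisation in time and the bistable Nagumo nonlinearity f(u) = u(1-u)(u-a) mentioned in the abstract; the coupling, threshold and step sizes are arbitrary choices, not values from the paper.

    ```python
    import numpy as np

    k, a, dt = 1.0, 0.3, 0.05          # coupling, bistability threshold, time step
    n_sites, n_steps = 200, 400

    u = np.zeros(n_sites)
    u[:n_sites // 2] = 1.0             # step initial condition between the two stable states

    def f(u):                          # bistable Nagumo nonlinearity
        return u * (1.0 - u) * (u - a)

    for _ in range(n_steps):
        lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)   # discrete Laplacian (periodic ends)
        u = u + dt * (k * lap + f(u))

    # for small enough dt the discrete maximum principle keeps u within [0, 1]
    print(u.min(), u.max())
    ```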

  1. Maximum neutron flux in thermal reactors; Maksimum neutronskog fluksa kod termalnih reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Strugar, P V [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1968-07-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with the thermal limitations. This paper proves that the problem can be solved by applying variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it suitable for application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  2. Modeling of unified power quality conditioner (UPQC) in distribution systems load flow

    International Nuclear Information System (INIS)

    Hosseini, M.; Shayanfar, H.A.; Fotuhi-Firuzabad, M.

    2009-01-01

    This paper presents modeling of unified power quality conditioner (UPQC) in load flow calculations for steady-state voltage compensation. An accurate model for this device is derived to use in load flow calculations. The rating of this device as well as direction of reactive power injection required to compensate voltage to the desired value (1 p.u.) is derived and discussed analytically and mathematically using phasor diagram method. Since performance of the compensator varies when it reaches to its maximum capacity, modeling of UPQC in its maximum rating of reactive power injection is derived. The validity of the proposed model is examined using two standard distribution systems consisting of 33 and 69 nodes, respectively. The best location of UPQC for under voltage problem mitigation in the distribution network is determined. The results show the validity of the proposed model for UPQC in large distribution systems.

  3. Modeling of unified power quality conditioner (UPQC) in distribution systems load flow

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, M.; Shayanfar, H.A. [Center of Excellence for Power System Automation and Operation, Department of Electrical Engineering, Iran University of Science and Technology, Tehran (Iran); Fotuhi-Firuzabad, M. [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran)

    2009-06-15

    This paper presents modeling of unified power quality conditioner (UPQC) in load flow calculations for steady-state voltage compensation. An accurate model for this device is derived to use in load flow calculations. The rating of this device as well as direction of reactive power injection required to compensate voltage to the desired value (1 p.u.) is derived and discussed analytically and mathematically using phasor diagram method. Since performance of the compensator varies when it reaches to its maximum capacity, modeling of UPQC in its maximum rating of reactive power injection is derived. The validity of the proposed model is examined using two standard distribution systems consisting of 33 and 69 nodes, respectively. The best location of UPQC for under voltage problem mitigation in the distribution network is determined. The results show the validity of the proposed model for UPQC in large distribution systems. (author)

  4. A new quantum inspired chaotic artificial bee colony algorithm for optimal power flow problem

    International Nuclear Information System (INIS)

    Yuan, Xiaohui; Wang, Pengtao; Yuan, Yanbin; Huang, Yuehua; Zhang, Xiaopan

    2015-01-01

    Highlights: • Quantum theory is introduced to artificial bee colony algorithm (ABC) to increase population diversity. • A chaotic local search operator is used to enhance local search ability of ABC. • Quantum inspired chaotic ABC method (QCABC) is proposed to solve optimal power flow. • The feasibility and effectiveness of the proposed QCABC is verified by examples. - Abstract: This paper proposes a new artificial bee colony algorithm with quantum theory and the chaotic local search strategy (QCABC), and uses it to solve the optimal power flow (OPF) problem. Under the quantum computing theory, the QCABC algorithm encodes each individual with quantum bits to form a corresponding quantum bit string. By determining each quantum bits value, we can get the value of the individual. After the scout bee stage of the artificial bee colony algorithm, we begin the chaotic local search in the vicinity of the best individual found so far. Finally, the quantum rotation gate is used to process each quantum bit so that all individuals can update toward the direction of the best individual. The QCABC algorithm is carried out to deal with the OPF problem in the IEEE 30-bus and IEEE 118-bus standard test systems. The results of the QCABC algorithm are compared with other algorithms (artificial bee colony algorithm, genetic algorithm, particle swarm optimization algorithm). The comparison shows that the QCABC algorithm can effectively solve the OPF problem and it can get the better optimal results than those of other algorithms

  5. Flow regimes

    International Nuclear Information System (INIS)

    Kh'yuitt, G.

    1980-01-01

    An introduction to the problem of two-phase flows is presented. Flow regimes arising in two-phase flows are described, and a classification of these regimes is given. The structures of vertical and horizontal two-phase flows and a method of their identification using regime maps are considered, along with the limits of applicability of this method. Flooding, flow reversal and the interrelation of these phenomena, as well as the transitions from the slug to the churn regime and from the churn to the annular regime in vertical flows, are described. Problems of phase transitions and equilibrium are discussed. Flow regimes in tubes carrying evaporating liquid are also described [ru]

  6. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Ai-Naffouri, Tareq Y.

    2014-01-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a

  7. Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model

    Energy Technology Data Exchange (ETDEWEB)

    Donnelly, Catherine, E-mail: C.Donnelly@hw.ac.uk [Heriot-Watt University, Department of Actuarial Mathematics and Statistics (United Kingdom)

    2011-10-15

    We prove a sufficient stochastic maximum principle for the optimal control of a regime-switching diffusion model. We show the connection to dynamic programming and we apply the result to a quadratic loss minimization problem, which can be used to solve a mean-variance portfolio selection problem.

  8. Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model

    International Nuclear Information System (INIS)

    Donnelly, Catherine

    2011-01-01

    We prove a sufficient stochastic maximum principle for the optimal control of a regime-switching diffusion model. We show the connection to dynamic programming and we apply the result to a quadratic loss minimization problem, which can be used to solve a mean-variance portfolio selection problem.

  9. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were

  10. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    Science.gov (United States)

    Nawalany, Marek; Sinicyn, Grzegorz

    2015-09-01

    An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.

  11. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    Directory of Open Access Journals (Sweden)

    Nawalany Marek

    2015-09-01

    Full Text Available An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale – scale of pores, meso-scale – scale of laboratory sample, macro-scale – scale of typical blocks in numerical models of groundwater flow, local-scale – scale of an aquifer/aquitard and regional-scale – scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.

  12. Solving phase appearance/disappearance two-phase flow problems with high resolution staggered grid and fully implicit schemes by the Jacobian-free Newton–Krylov Method

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-04-01

    The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to our best knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve the two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton Krylov (JFNK) method, which does not require the derivation and implementation of analytical Jacobian matrix. These methods were tested with a few two-phase flow problems with phase appearance/disappearance phenomena considered, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behaviors in solving the two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. High-resolution spatial discretization and second- order fully implicit method also demonstrated their capabilities in significantly reducing numerical errors.
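    The Jacobian-free Newton-Krylov idea can be illustrated with SciPy's newton_krylov on a much simpler nonlinear boundary-value residual than the two-phase flow system of the paper; the equation below is an arbitrary stand-in chosen only to show that no analytical Jacobian is ever assembled.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 100
    h = 1.0 / (n + 1)

    def residual(u):
        """Discretised -u'' + u**3 = 1 on (0, 1) with u(0) = u(1) = 0 (an arbitrary stand-in)."""
        uL = np.concatenate(([0.0], u[:-1]))     # left neighbours, including boundary value
        uR = np.concatenate((u[1:], [0.0]))      # right neighbours, including boundary value
        return -(uL - 2.0 * u + uR) / h**2 + u**3 - 1.0

    # Outer Newton iteration; each linear solve is performed matrix-free by a Krylov
    # method, so the Jacobian is never formed explicitly (the essence of JFNK).
    u = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
    print("max |residual| =", np.abs(residual(u)).max(), " max u =", u.max())
    ```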

  13. SWIFT, 3-D Fluid Flow, Heat Transfer, Decay Chain Transport in Geological Media

    International Nuclear Information System (INIS)

    Cranwell, R.M.; Reeves, M.

    2003-01-01

    1 - Description of problem or function: SWIFT solves the coupled or individual equations governing fluid flow, heat transport, brine displacement, and radionuclide displacement in geologic media. Fluid flow may be transient or steady-state. One, two, or three dimensions are available and transport of radionuclides chains is possible. 4. Method of solution: Finite differencing is used to discretize the partial differential equations in space and time. The user may choose centered or backward spatial differencing, coupled with either central or backward temporal differencing. The matrix equations may be solved iteratively (two line successive-over-relaxation) or directly (special matrix banding and Gaussian elimination). 5. Restrictions on the complexity of the problem: On the CDC7600 in direct solution mode, the maximum number of grid blocks allowed is approximately 1400

  14. Solving the minimum flow problem with interval bounds and flows

    Indian Academy of Sciences (India)

    ... with crisp data. In this paper, the idea of Ghiyasvand was extended for solving the minimum flow problem with interval-valued lower, upper bounds and flows. This problem can be solved using two minimum flow problems with crisp data. Then, this result is extended to networks with fuzzy lower, upper bounds and flows.

  15. Nested sparse grid collocation method with delay and transformation for subsurface flow and transport problems

    Science.gov (United States)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-06-01

    In numerical modeling of subsurface flow and transport problems, formation properties may not be deterministically characterized, which leads to uncertainty in simulation results. In this study, we propose a sparse grid collocation method, which adopts nested quadrature rules with delay and transformation to quantify the uncertainty of model solutions. We show that the nested Kronrod-Patterson-Hermite quadrature is more efficient than the unnested Gauss-Hermite quadrature. We compare the convergence rates of various quadrature rules including the domain truncation and domain mapping approaches. To further improve accuracy and efficiency, we present a delayed process in selecting quadrature nodes and a transformed process for approximating unsmooth or discontinuous solutions. The proposed method is tested by an analytical function and in one-dimensional single-phase and two-phase flow problems with different spatial variances and correlation lengths. An additional example is given to demonstrate its applicability to three-dimensional black-oil models. It is found from these examples that the proposed method provides a promising approach for obtaining satisfactory estimation of the solution statistics and is much more efficient than the Monte-Carlo simulations.
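    As a one-dimensional illustration of stochastic collocation, the sketch below propagates a Gaussian-distributed parameter through a toy response function using the (unnested) Gauss-Hermite rule that the paper compares against. The model function and input statistics are invented, and the nested Kronrod-Patterson-Hermite rule, the delay/transformation steps and sparse grids are not implemented.

    ```python
    import numpy as np

    def model(k):
        """Toy 'flow' response to an uncertain parameter, e.g. head ~ exp(-k)."""
        return np.exp(-k)

    mu, sigma = 1.0, 0.4            # uncertain input k ~ N(mu, sigma^2)
    nodes, weights = np.polynomial.hermite.hermgauss(9)   # physicists' Hermite rule

    # change of variables: quadrature against exp(-x^2) becomes an expectation under N(mu, sigma^2)
    k_nodes = mu + np.sqrt(2.0) * sigma * nodes
    w = weights / np.sqrt(np.pi)

    mean = np.sum(w * model(k_nodes))
    var = np.sum(w * (model(k_nodes) - mean) ** 2)
    print("quadrature mean/std:", mean, np.sqrt(var))

    # Monte Carlo check (the reference solution the paper also uses)
    samples = model(np.random.default_rng(0).normal(mu, sigma, 200000))
    print("Monte Carlo mean/std:", samples.mean(), samples.std())
    ```

    With only nine nodes the quadrature estimates agree closely with the Monte Carlo reference, which is the efficiency argument made in the abstract.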

  16. Study of microvascular non-Newtonian blood flow modulated by electroosmosis.

    Science.gov (United States)

    Tripathi, Dharmendra; Yadav, Ashu; Anwar Bég, O; Kumar, Rakesh

    2018-05-01

    An analytical study of microvascular non-Newtonian blood flow is conducted incorporating the electro-osmosis phenomenon. Blood is considered as a Bingham rheological aqueous ionic solution. An externally applied static axial electrical field is imposed on the system. The Poisson-Boltzmann equation for the electrical potential distribution is implemented to accommodate the electrical double layer in the microvascular regime. With the long wavelength, lubrication and Debye-Hückel approximations, the boundary value problem is rendered non-dimensional. Analytical solutions are derived for the axial velocity, volumetric flow rate, pressure gradient, averaged volumetric flow rate over one time period, pressure rise along one wavelength and stream function. A plug flow width is featured in the solutions. Via symbolic software (Mathematica), graphical plots are generated for the influence of the Bingham plug flow width parameter, electrical Debye length and Helmholtz-Smoluchowski velocity (maximum electro-osmotic velocity) on the key hydrodynamic variables. This study reveals that the blood flow rate increases with decreasing plug width (i.e. a weaker viscoplastic character of the fluid) and also with an increasing Debye length parameter. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Debris flow-induced topographic changes: effects of recurrent debris flow initiation.

    Science.gov (United States)

    Chen, Chien-Yuan; Wang, Qun

    2017-08-12

    Chushui Creek in Shengmu Village, Nantou County, Taiwan, was analyzed for recurrent debris flow using numerical modeling and geographic information system (GIS) spatial analysis. The two-dimensional water flood and mudflow simulation program FLO-2D was used to simulate debris flow induced by rainfall during typhoons Herb in 1996 and Mindulle in 2004. Changes in topographic characteristics after the debris flows were simulated for the initiation of hydrological characteristics, magnitude, and affected area. Changes in topographic characteristics included those in elevation, slope, aspect, stream power index (SPI), topographic wetness index (TWI), and hypsometric curve integral (HI), all of which were analyzed using GIS spatial analysis. The results show that the SPI and peak discharge in the basin increased after a recurrence of debris flow. The TWI was higher in 2003 than in 2004 and indicated a higher potential of landslide initiation when the slope of the basin was steeper. The HI revealed that the basin was in its mature stage and was shifting toward the old stage. Numerical simulation demonstrated that the mean depth, maximum depth, affected area, mean flow rate, maximum flow rate, and peak flow discharge all increased after recurrent debris flow, and peak discharge occurred quickly.

  18. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  19. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  20. A Modified Levenberg-Marquardt Method for Nonsmooth Equations with Finitely Many Maximum Functions

    Directory of Open Access Journals (Sweden)

    Shou-qiang Du

    2008-01-01

    Full Text Available For solving nonsmooth systems of equations, the Levenberg-Marquardt method and its variants are of particular importance because of their locally fast convergence rates. Systems with finitely many maximum functions are very useful in the study of nonlinear complementarity problems, variational inequality problems, Karush-Kuhn-Tucker systems of nonlinear programming problems, and many problems in mechanics and engineering. In this paper, we present a modified Levenberg-Marquardt method for nonsmooth equations with finitely many maximum functions. Under mild assumptions, the present method is shown to be convergent Q-linearly. Some numerical results comparing the proposed method with classical reformulations indicate that the modified Levenberg-Marquardt algorithm works quite well in practice.

  1. Solving global problem by considering multitude of local problems: Application to fluid flow in anisotropic porous media using the multipoint flux approximation

    KAUST Repository

    Salama, Amgad; Sun, Shuyu; Wheeler, Mary Fanett

    2014-01-01

    In this work we apply the experimenting pressure field approach to the numerical solution of the single phase flow problem in anisotropic porous media using the multipoint flux approximation. We apply this method to the problem of flow in saturated anisotropic porous media. In anisotropic media the component flux representation requires, generally multiple pressure values in neighboring cells (e.g., six pressure values of the neighboring cells is required in two-dimensional rectangular meshes). This apparently results in the need for a nine points stencil for the discretized pressure equation (27 points stencil in three-dimensional rectangular mesh). The coefficients associated with the discretized pressure equation are complex and require longer expressions which make their implementation prone to errors. In the experimenting pressure field technique, the matrix of coefficients is generated automatically within the solver. A set of predefined pressure fields is operated on the domain through which the velocity field is obtained. Apparently such velocity fields do not satisfy the mass conservation equations entailed by the source/sink term and boundary conditions from which the residual is calculated. In this method the experimenting pressure fields are designed such that the residual reduces to the coefficients of the pressure equation matrix. © 2014 Elsevier B.V. All rights reserved.

  2. Solving global problem by considering multitude of local problems: Application to fluid flow in anisotropic porous media using the multipoint flux approximation

    KAUST Repository

    Salama, Amgad

    2014-09-01

    In this work we apply the experimenting pressure field approach to the numerical solution of the single phase flow problem in anisotropic porous media using the multipoint flux approximation. We apply this method to the problem of flow in saturated anisotropic porous media. In anisotropic media the component flux representation requires, generally multiple pressure values in neighboring cells (e.g., six pressure values of the neighboring cells is required in two-dimensional rectangular meshes). This apparently results in the need for a nine points stencil for the discretized pressure equation (27 points stencil in three-dimensional rectangular mesh). The coefficients associated with the discretized pressure equation are complex and require longer expressions which make their implementation prone to errors. In the experimenting pressure field technique, the matrix of coefficients is generated automatically within the solver. A set of predefined pressure fields is operated on the domain through which the velocity field is obtained. Apparently such velocity fields do not satisfy the mass conservation equations entailed by the source/sink term and boundary conditions from which the residual is calculated. In this method the experimenting pressure fields are designed such that the residual reduces to the coefficients of the pressure equation matrix. © 2014 Elsevier B.V. All rights reserved.

  3. An improved sheep flock heredity algorithm for job shop scheduling and flow shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Chandramouli Anandaraman

    2011-10-01

    Full Text Available The Job Shop Scheduling Problem (JSSP) and the Flow Shop Scheduling Problem (FSSP) are strongly NP-complete combinatorial optimization problems among the class of typical production scheduling problems. An improved Sheep Flock Heredity Algorithm (ISFHA) is proposed in this paper to find a schedule of operations that minimizes makespan. In ISFHA, the pairwise mutation operation is replaced by a single point mutation process with a probabilistic property which guarantees the feasibility of the solutions in the local search domain. A Robust-Replace (R-R) heuristic is introduced in place of chromosomal crossover to enhance the global search and to improve the convergence. The R-R heuristic is found to enhance the exploring potential of the algorithm and enrich the diversity of neighborhoods. Experimental results reveal the effectiveness of the proposed algorithm, whose optimization performance is markedly superior to that of genetic algorithms and is comparable to the best results reported in the literature.

  4. Dynamic analysis of pedestrian crossing behaviors on traffic flow at unsignalized mid-block crosswalks

    Science.gov (United States)

    Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping

    2015-05-01

    It is important to study the effects of pedestrian crossing behaviors on traffic flow for solving the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed considering the uncertainty conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defining the parallel updating rules of motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that no matter what the values of vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow shows the "increasing-saturating-decreasing" trend with the increase of vehicle density; when the vehicle braking probability is lower, it is easy to cause an emergency brake of vehicle and result in great fluctuation of saturated flow; the saturated flow decreases slightly with the increase of the pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, which shows the hesitant behavior of pedestrians when making the decision of backing; the maximum flow is sensitive to the pedestrian generation probability and rapidly decreases with increasing the pedestrian generation probability, the maximum flow is approximately equal to zero when the probability is more than 0.5. The simulations prove that the influence of frequent crossing behavior upon vehicle flow is immense; the vehicle flow decreases and gets into serious congestion state rapidly with the increase of the pedestrian generation probability.
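    The vehicle part of such models is typically the Nagel-Schreckenberg update cycle. The sketch below implements that basic single-lane cycle (accelerate, brake to the gap, random slowdown, move) on a periodic road; the parameters are invented, and the pedestrian-vehicle conflict rules added in the paper are not modelled.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L_road, n_cars, v_max, p_brake, steps = 200, 40, 5, 0.3, 500

    pos = np.sort(rng.choice(L_road, size=n_cars, replace=False))
    vel = np.zeros(n_cars, dtype=int)
    moved = 0

    for _ in range(steps):
        gaps = (np.roll(pos, -1) - pos - 1) % L_road     # empty cells to the car ahead (ring road)
        vel = np.minimum(vel + 1, v_max)                 # 1. acceleration
        vel = np.minimum(vel, gaps)                      # 2. braking to avoid collision
        slow = rng.random(n_cars) < p_brake              # 3. random slowdown
        vel[slow] = np.maximum(vel[slow] - 1, 0)
        pos = (pos + vel) % L_road                       # 4. movement
        moved += vel.sum()

    print("mean flow (vehicles per cell per time step):", moved / (steps * L_road))
    ```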

  5. The `Henry Problem' of `density-driven' groundwater flow versus Tothian `groundwater flow systems' with variable density: A review of the influential Biscayne aquifer data.

    Science.gov (United States)

    Weyer, K. U.

    2017-12-01

    Coastal groundwater flow investigations at the Biscayne Bay, south of Miami, Florida, gave rise to the concept of density-driven flow of seawater into coastal aquifers creating a saltwater wedge. Within that wedge, convection-driven return flow of seawater and a dispersion zone were assumed by Cooper et al. (1964) to be the cause of the Biscayne aquifer `sea water wedge'. This conclusion was based on the chloride distribution within the aquifer and on an analytical model concept assuming convection flow within a confined aquifer without taking non-chemical field data into consideration. This concept was later labelled the `Henry Problem', which any numerical variable density flow program must be able to simulate to be considered acceptable. Both, `density-driven flow' and Tothian `groundwater flow systems' (with or without variable density conditions) are driven by gravitation. The difference between the two are the boundary conditions. 'Density-driven flow' occurs under hydrostatic boundary conditions while Tothian `groundwater flow systems' occur under hydrodynamic boundary conditions. Revisiting the Cooper et al. (1964) publication with its record of piezometric field data (heads) showed that the so-called sea water wedge has been caused by discharging deep saline groundwater driven by gravitational flow and not by denser sea water. Density driven flow of seawater into the aquifer was not found reflected in the head measurements for low and high tide conditions which had been taken contemporaneously with the chloride measurements. These head measurements had not been included in the flow interpretation. The very same head measurements indicated a clear dividing line between shallow local fresh groundwater flow and saline deep groundwater flow without the existence of a dispersion zone or a convection cell. The Biscayne situation emphasizes the need for any chemical interpretation of flow pattern to be supported by head data as energy indicators of flow fields

  6. An off-line dual maximum resource bin packing model for solving the maintenance problem in the aviation industry

    Directory of Open Access Journals (Sweden)

    George Cristian Gruia

    2013-05-01

    Full Text Available In the aviation industry, propeller motor engines have a lifecycle of several thousand hours of flight, and maintenance is an important part of their lifecycle. The present article considers a multi-resource, priority-based case scheduling problem, which is applied in a Romanian manufacturing company that repairs and maintains helicopter and airplane engines at a certain quality level imposed by the aviation standards. Given a reduced budget constraint, the management’s goal is to maximize the utilization of their resources (financial, material, space, workers), by maintaining a prior known priority rule. An Off-Line Dual Maximum Resource Bin Packing model, based on a Mixed Integer Programming model, is thus presented. The obtained results show an increase of approx. 25% in the Just in Time shipping of the engines to the customers and an approx. 12.5% increase in the utilization of the working area.

  7. AN OFF-LINE DUAL MAXIMUM RESOURCE BIN PACKING MODEL FOR SOLVING THE MAINTENANCE PROBLEM IN THE AVIATION INDUSTRY

    Directory of Open Access Journals (Sweden)

    GEORGE CRISTIAN GRUIA

    2013-05-01

    Full Text Available In the aviation industry, propeller motor engines have a lifecycle of several thousand hours of flight, and maintenance is an important part of their lifecycle. The present article considers a multi-resource, priority-based case scheduling problem, which is applied in a Romanian manufacturing company that repairs and maintains helicopter and airplane engines at a certain quality level imposed by the aviation standards. Given a reduced budget constraint, the management’s goal is to maximize the utilization of their resources (financial, material, space, workers), by maintaining a prior known priority rule. An Off-Line Dual Maximum Resource Bin Packing model, based on a Mixed Integer Programming model, is thus presented. The obtained results show an increase of approx. 25% in the Just in Time shipping of the engines to the customers and an approx. 12.5% increase in the utilization of the working area.

  8. Deconvolution in the presence of noise using the Maximum Entropy Principle

    International Nuclear Information System (INIS)

    Steenstrup, S.

    1984-01-01

    The main problem in deconvolution in the presence of noise is the nonuniqueness. This problem is overcome by the application of the Maximum Entropy Principle. The way the noise enters the formulation of the problem is examined in some detail, and the final equations are derived such that the necessary assumptions become explicit. Examples using X-ray diffraction data are shown. (orig.)

  9. A MODIFIED DECOMPOSITION METHOD FOR SOLVING NONLINEAR PROBLEM OF FLOW IN CONVERGING- DIVERGING CHANNEL

    Directory of Open Access Journals (Sweden)

    MOHAMED KEZZAR

    2015-08-01

    Full Text Available In this research, an efficient technique of computation considered as a modified decomposition method was proposed and then successfully applied for solving the nonlinear problem of the two dimensional flow of an incompressible viscous fluid between nonparallel plane walls. In fact this method gives the nonlinear term Nu and the solution of the studied problem as a power series. The proposed iterative procedure gives on the one hand a computationally efficient formulation with an acceleration of convergence rate and on the other hand finds the solution without any discretization, linearization or restrictive assumptions. The comparison of our results with those of numerical treatment and other earlier works shows clearly the higher accuracy and efficiency of the used Modified Decomposition Method.

  10. A Hybrid Quantum Evolutionary Algorithm with Improved Decoding Scheme for a Robotic Flow Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Weidong Lei

    2017-01-01

    Full Text Available We aim at solving the cyclic scheduling problem with a single robot and flexible processing times in a robotic flow shop, which is a well-known optimization problem in advanced manufacturing systems. The objective of the problem is to find an optimal robot move sequence such that the throughput rate is maximized. We propose a hybrid algorithm based on the Quantum-Inspired Evolutionary Algorithm (QEA and genetic operators for solving the problem. The algorithm integrates three different decoding strategies to convert quantum individuals into robot move sequences. The Q-gate is applied to update the states of Q-bits in each individual. Besides, crossover and mutation operators with adaptive probabilities are used to increase the population diversity. A repairing procedure is proposed to deal with infeasible individuals. Comparison results on both benchmark and randomly generated instances demonstrate that the proposed algorithm is more effective in solving the studied problem in terms of solution quality and computational time.

  11. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2008-01-01

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm...

  12. Optimal control of a double integrator a primer on maximum principle

    CERN Document Server

    Locatelli, Arturo

    2017-01-01

    This book provides an introductory yet rigorous treatment of Pontryagin’s Maximum Principle and its application to optimal control problems when simple and complex constraints act on state and control variables, the two classes of variable in such problems. The achievements resulting from first-order variational methods are illustrated with reference to a large number of problems that, almost universally, relate to a particular second-order, linear and time-invariant dynamical system, referred to as the double integrator. The book is ideal for students who have some knowledge of the basics of system and control theory and possess the calculus background typically taught in undergraduate curricula in engineering. Optimal control theory, of which the Maximum Principle must be considered a cornerstone, has been very popular ever since the late 1950s. However, the possibly excessive initial enthusiasm engendered by its perceived capability to solve any kind of problem gave way to its equally unjustified rejecti...
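    The book's running example admits a compact illustration: the classical minimum-time bang-bang feedback that the Maximum Principle yields for the double integrator (x'' = u, |u| <= u_max). The simulation below is a hedged sketch with an arbitrary initial state and a simple Euler integration; it is not taken from the book.

    ```python
    import numpy as np

    def u_bang_bang(x1, x2, u_max=1.0, eps=1e-9):
        """Time-optimal feedback for x1' = x2, x2' = u, |u| <= u_max, target (0, 0).
        Switching curve: x1 = -x2*|x2| / (2*u_max)."""
        s = x1 + x2 * abs(x2) / (2.0 * u_max)
        if abs(s) > eps:
            return -u_max * np.sign(s)
        return -u_max * np.sign(x2)        # state already on the switching curve

    # simulate from an arbitrary initial state
    dt, x1, x2, t = 1e-3, 2.0, 1.0, 0.0
    while (abs(x1) > 1e-3 or abs(x2) > 1e-3) and t < 20.0:
        u = u_bang_bang(x1, x2)
        x1, x2, t = x1 + dt * x2, x2 + dt * u, t + dt

    print(f"reached a small neighbourhood of the origin at t = {t:.3f}")
    ```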

  13. Maximum Urine Flow Rate of Less than 15ml/Sec Increasing Risk of Urine Retention and Prostate Surgery among Patients with Alpha-1 Blockers: A 10-Year Follow Up Study.

    Directory of Open Access Journals (Sweden)

    Hsin-Ho Liu

    Full Text Available The aim of this study was to determine the subsequent risk of acute urine retention and prostate surgery in patients receiving alpha-1 blocker treatment and having a maximum urinary flow rate of less than 15 ml/sec. We identified patients who were diagnosed with benign prostate hyperplasia (BPH) and had a maximum uroflow rate of less than 15 ml/sec between 1 January 2002 and 31 December 2011 from Taiwan's National Health Insurance Research Database as the study group (n = 303). The control cohort included four BPH/LUTS patients without 5ARI use for each study-group patient, randomly selected from the same dataset (n = 1,212). Each patient was monitored to identify those who subsequently underwent prostate surgery or developed acute urine retention. Prostate surgery and acute urine retention were detected in 5.9% of the control group and 8.3% of the study group during the 10-year follow up. Compared with the control group, there was an increase in the risk of prostate surgery and acute urine retention in the study group (HR = 1.83, 95% CI: 1.16 to 2.91) after adjusting for age, comorbidities, geographic region and socioeconomic status. A maximum urine flow rate of less than 15 ml/sec is a risk factor for urinary retention and subsequent prostate surgery in BPH patients receiving alpha-1 blocker therapy. This result can provide a reference for clinicians.

  14. Thermophysical analysis for three-dimensional MHD stagnation-point flow of nano-material influenced by an exponential stretching surface

    Directory of Open Access Journals (Sweden)

    Fiaz Ur Rehman

    2018-03-01

    Full Text Available In the present paper a theoretical investigation is performed to analyze heat and mass transport enhancement of a water-based nanofluid for three-dimensional (3D) MHD stagnation-point flow caused by an exponentially stretched surface. Water is considered as the base fluid, and three types of nanoparticles are considered: CuO (copper oxide), Fe3O4 (magnetite), and Al2O3 (alumina). Invoking the boundary layer approximation and suitable similarity transformations, the three-dimensional nonlinear equations describing the problem are transformed into nonlinear, non-homogeneous ordinary differential equations. The final equations are solved by the homotopy analysis technique. The influence of the governing parameters on the temperature and velocity profiles is explained in detail, and graphical results for each nanofluid are presented separately. It is worth mentioning that the skin friction along the x- and y-directions is maximum for the copper oxide-water nanofluid and minimum for the alumina-water nanofluid, while the local Nusselt number is maximum for the copper oxide-water nanofluid and minimum for the magnetite-water nanofluid. Keywords: Heat transfer, Nanofluids, Stagnation-point flow, Three-dimensional flow, Nano particles, Boundary layer

  15. On the solution of fluid flow and heat transfer problem in a 2D channel with backward-facing step

    Directory of Open Access Journals (Sweden)

    Alexander A. Fomin

    2017-06-01

    Full Text Available The stable stationary solutions of the test problem of hydrodynamics and heat transfer in a plane channel with a backward-facing step have been considered in this work for extremely high Reynolds numbers and expansion ratios of the stream $ER$. The problem has been solved by numerical integration of the 2D Navier–Stokes equations in the 'velocity-pressure' formulation and the heat equation in the range of Reynolds number $500 \leqslant \mathrm{Re} \leqslant 3000$ and expansion ratio $1.43 \leqslant ER \leqslant 10$ for Prandtl number $\mathrm{Pr} = 0.71$. Validity of the results has been confirmed by comparing them with literature data. Detailed flow patterns, fields of stream overheating, and profiles of the horizontal component of velocity and of the relative overheating of the flow in the cross section of the channel have been presented. The complex behavior of the coefficients of friction, hydrodynamic resistance and heat transfer (Nusselt number) along the channel depending on the problem parameters has been analyzed.

  16. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data, so phylogenetic applications for autosomal data must rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
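
    The parsimony score of a candidate tree is the minimum number of mutations needed to explain the observed states at its leaves. As background for the reconstruction problem discussed above, the following minimal Python sketch computes that score for one binary character on a fixed rooted binary tree using Fitch's algorithm; it is illustrative only and does not reproduce the record's genotype-based method (the tree, node names and states are made up).

        def fitch_score(children, leaf_state, root):
            """Minimum number of state changes for one character on a rooted binary tree.

            children: dict internal node -> (left child, right child)
            leaf_state: dict leaf -> observed state
            """
            mutations = 0

            def post_order(node):
                nonlocal mutations
                if node not in children:          # leaf
                    return {leaf_state[node]}
                left, right = children[node]
                a, b = post_order(left), post_order(right)
                common = a & b
                if common:                        # the children agree on at least one state
                    return common
                mutations += 1                    # a state change is forced on one child edge
                return a | b

            post_order(root)
            return mutations

        # Toy tree ((A,B),(C,D)) with states A=0, B=0, C=1, D=0: one mutation suffices.
        tree = {"root": ("n1", "n2"), "n1": ("A", "B"), "n2": ("C", "D")}
        print(fitch_score(tree, {"A": 0, "B": 0, "C": 1, "D": 0}, "root"))   # prints 1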

  17. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data, so phylogenetic applications for autosomal data must rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  18. Augmentative effect of pulsatility on the wall shear stress in tube flow.

    Science.gov (United States)

    Nakata, M; Tatsumi, E; Tsukiya, T; Taenaka, Y; Nishimura, T; Nishinaka, T; Takano, H; Masuzawa, T; Ohba, K

    1999-08-01

    Wall shear stress (WSS) has been considered to play an important role in the physiological and metabolic functions of vascular endothelial cells. We investigated the effects of the pulse rate and the maximum flow rate on the WSS to clarify the influence of pulsatility. Water was perfused in a 1/2 inch transparent straight cylinder with a nonpulsatile centrifugal pump and a pulsatile pneumatic ventricular assist device (VAD). In nonpulsatile flow (NF), the flow rate was changed from 1 to 6 L/min in 1 L/min increments to obtain standard values of WSS at each flow rate. In pulsatile flow (PF), the pulse rate was controlled at 40, 60, and 80 bpm, and the maximum flow rate was varied from 3.3 to 12.0 L/min while the mean flow rate was kept at 3 L/min. The WSS was estimated from the velocity profile at the measuring points using the laser-illuminated fluorescence method. In NF, the WSS was 12.0 dyne/cm2 at 3 L/min and 33.0 dyne/cm2 at 6 L/min. In PF, a change in pulse rate at the same mean and maximum flow rate did not affect the WSS. On the other hand, an increase in the maximum flow rate at the constant mean flow rate of 3 L/min augmented the mean WSS from 13.1 to 32.9 dyne/cm2. We concluded that the maximum flow rate exerted a substantial augmentative effect on WSS, and that the maximum flow rate was the dominant factor of pulsatility in this effect.

  19. Spectrum unfolding in X-ray spectrometry using the maximum entropy method

    International Nuclear Information System (INIS)

    Fernandez, Jorge E.; Scot, Viviana; Di Giulio, Eugenio

    2014-01-01

    The solution of the unfolding problem is an ever-present issue in X-ray spectrometry. The maximum entropy technique solves this problem by taking advantage of some known a priori physical information and by ensuring an outcome with only positive values. This method is implemented in MAXED (MAXimum Entropy Deconvolution), a software code contained in the package UMG (Unfolding with MAXED and GRAVEL) developed at PTB and distributed by the NEA Data Bank. This package also contains the code GRAVEL (used to estimate the precision of the solution). This article introduces the new code UMESTRAT (Unfolding Maximum Entropy STRATegy), which applies a semi-automatic strategy to solve the unfolding problem by using a suitable combination of MAXED and GRAVEL for applications in X-ray spectrometry. Some examples of the use of UMESTRAT are shown, demonstrating its capability to remove detector artifacts from the measured spectrum consistently with the model used for the detector response function (DRF). - Highlights: ► A new strategy to solve the unfolding problem in X-ray spectrometry is presented. ► The presented strategy uses a suitable combination of the codes MAXED and GRAVEL. ► The applied strategy provides additional information on the detector response function. ► The code UMESTRAT is developed to apply this new strategy in a semi-automatic mode.
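
    The idea behind maximum-entropy unfolding can be sketched as follows: among all positive spectra that reproduce the measured counts through the detector response matrix, pick the one of maximum (relative) entropy. The Python sketch below is only a schematic penalised relaxation of that idea with made-up data; it is not the MAXED/GRAVEL implementation, which works with a dual formulation, a default spectrum and measurement uncertainties.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        R = rng.uniform(0.0, 1.0, size=(8, 20))           # toy detector response matrix
        phi_true = np.exp(-np.linspace(0.0, 3.0, 20))      # toy "true" spectrum
        m = R @ phi_true                                    # idealised measured counts
        default = np.full(20, phi_true.mean())              # default (prior) spectrum

        def objective(log_phi, lam=100.0):
            phi = np.exp(log_phi)                           # positivity is built in
            # Skilling-type relative entropy, maximal when phi equals the default spectrum.
            entropy = np.sum(phi - default - phi * np.log(phi / default))
            misfit = np.sum((R @ phi - m) ** 2)
            return -entropy + lam * misfit                  # maximise entropy, fit the data

        res = minimize(objective, x0=np.log(default), method="L-BFGS-B")
        phi_est = np.exp(res.x)
        print("relative data misfit:", np.linalg.norm(R @ phi_est - m) / np.linalg.norm(m))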

  20. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  1. Buoyancy-driven flow excursions in fuel assemblies

    International Nuclear Information System (INIS)

    Laurinat, J.E.; Paul, P.K.; Menna, J.D.

    1995-01-01

    A power limit criterion was developed for a postulated Loss of Pumping Accident (LOPA) in one of the recently shut down heavy water production reactors at the Savannah River Site. These reactors were cooled by recirculating moderator downward through channels in cylindrical fuel tubes. Powers were limited to prevent a flow excursion from occurring in one or more of these parallel channels. During full-power operation, limits prevented a boiling flow excursion from taking place. At low flow rates, during the addition of emergency cooling water, buoyant forces reverse the flow in one of the coolant channels before boiling occurs. As power increases beyond the point of flow reversal, the maximum wall temperature approaches the fluid saturation temperature, and a thermal excursion occurs. The power limit criterion for low flow rates was the onset of flow reversal. To determine conditions for flow reversal, tests were performed in a mock-up of a fuel assembly that contained two electrically heated concentric tubes surrounded by three flow channels. These tests were modeled using a finite difference thermal-hydraulic code. According to code calculations, flow reversed in the outer flow channel before the maximum wall temperature reached the local fluid saturation temperature. Thermal excursions occurred when the maximum wall temperature approximately equaled the saturation temperature. For a postulated LOPA, the flow reversal criterion for emergency cooling water addition was more limiting than the boiling excursion criterion for full power operation. This criterion limited powers to 37% of historical levels

  2. A Maximum Principle for SDEs of Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Daniel, E-mail: danieand@math.kth.se; Djehiche, Boualem, E-mail: boualem@math.kth.se [Royal Institute of Technology, Department of Mathematics (Sweden)

    2011-06-15

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  3. A Maximum Principle for SDEs of Mean-Field Type

    International Nuclear Information System (INIS)

    Andersson, Daniel; Djehiche, Boualem

    2011-01-01

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  4. On a multigrid method for the coupled Stokes and porous media flow problem

    Science.gov (United States)

    Luo, P.; Rodrigo, C.; Gaspar, F. J.; Oosterlee, C. W.

    2017-07-01

    The multigrid solution of coupled porous media and Stokes flow problems is considered. The Darcy equation as the saturated porous medium model is coupled to the Stokes equations by means of appropriate interface conditions. We focus on an efficient multigrid solution technique for the coupled problem, which is discretized by finite volumes on staggered grids, giving rise to a saddle point linear system. Special treatment is required regarding the discretization at the interface. An Uzawa smoother is employed in multigrid, which is a decoupled procedure based on symmetric Gauss-Seidel smoothing for velocity components and a simple Richardson iteration for the pressure field. Since a relaxation parameter is part of a Richardson iteration, Local Fourier Analysis (LFA) is applied to determine the optimal parameters. Highly satisfactory multigrid convergence is reported, and, moreover, the algorithm performs well for small values of the hydraulic conductivity and fluid viscosity, that are relevant for applications.
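
    The Uzawa smoother described above can be illustrated on a generic saddle-point system: relax the velocity block and then update the pressure with a damped Richardson step on the divergence residual. The Python sketch below is a single-grid toy with a diagonal velocity block and a hand-picked relaxation parameter; in the record the same kind of sweep is used as a smoother inside multigrid and the parameter comes from Local Fourier Analysis, none of which is reproduced here.

        import numpy as np

        def gauss_seidel_sweep(A, b, u):
            """One symmetric Gauss-Seidel sweep on A u = b (updates u in place)."""
            n = len(b)
            for i in range(n):
                u[i] = (b[i] - A[i, :i] @ u[:i] - A[i, i + 1:] @ u[i + 1:]) / A[i, i]
            for i in reversed(range(n)):
                u[i] = (b[i] - A[i, :i] @ u[:i] - A[i, i + 1:] @ u[i + 1:]) / A[i, i]
            return u

        def uzawa_smoother(A, B, f, g, u, p, omega=0.5, sweeps=50):
            """Relax the velocity block, then take a Richardson step on the pressure."""
            for _ in range(sweeps):
                u = gauss_seidel_sweep(A, f - B.T @ p, u)
                p = p + omega * (B @ u - g)
            return u, p

        # Tiny toy saddle-point system [[A, B^T], [B, 0]] with a diagonal A block.
        rng = np.random.default_rng(1)
        A = np.diag(np.arange(2.0, 8.0))
        B = rng.uniform(-1.0, 1.0, (2, 6))
        f, g = rng.uniform(-1.0, 1.0, 6), rng.uniform(-1.0, 1.0, 2)
        u, p = uzawa_smoother(A, B, f, g, np.zeros(6), np.zeros(2))
        print(np.linalg.norm(A @ u + B.T @ p - f),   # momentum residual
              np.linalg.norm(B @ u - g))             # divergence residual (damped, not zero)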

  5. A Flow Chart of Behavior Management Strategies for Families of Children with Co-Occurring Attention-Deficit Hyperactivity Disorder and Conduct Problem Behavior.

    Science.gov (United States)

    Danforth, Jeffrey S

    2016-03-01

    Behavioral parent training is an evidence-based treatment for problem behavior described as attention-deficit hyperactivity disorder (ADHD), oppositional defiant disorder, and conduct disorder. However, adherence to treatment fidelity and parent performance of the management skills remain obstacles to optimum outcome. One variable that may limit the effectiveness of parent training is that demanding behavior management procedures can be deceptively complicated and difficult to perform. Based on outcome research for families of children with co-occurring ADHD and conduct problem behavior, an example of a visual behavior management flow chart is presented. The flow chart may be used to help teach specific behavior management skills to parents. It depicts a chain of behavior management strategies taught with explanation, modeling, and role-play with parents. The chained steps in the flow chart are elements common to well-known evidence-based behavior management strategies, and this depiction may well serve as a setting event for other behavior analysts to create flow charts for their own parent training. Details of the flow chart steps, as well as examples of specific applications and program modifications, conclude the article.

  6. Asymptotic stability of shear-flow solutions to incompressible viscous free boundary problems with and without surface tension

    Science.gov (United States)

    Tice, Ian

    2018-04-01

    This paper concerns the dynamics of a layer of incompressible viscous fluid lying above a rigid plane and with an upper boundary given by a free surface. The fluid is subject to a constant external force with a horizontal component, which arises in modeling the motion of such a fluid down an inclined plane, after a coordinate change. We consider the problem both with and without surface tension for horizontally periodic flows. This problem gives rise to shear-flow equilibrium solutions, and the main thrust of this paper is to study the asymptotic stability of the equilibria in certain parameter regimes. We prove that there exists a parameter regime in which sufficiently small perturbations of the equilibrium at time t=0 give rise to global-in-time solutions that return to equilibrium exponentially in the case with surface tension and almost exponentially in the case without surface tension. We also establish a vanishing surface tension limit, which connects the solutions with and without surface tension.

  7. Load flow optimization and optimal power flow

    CERN Document Server

    Das, J C

    2017-01-01

    This book discusses the major aspects of load flow, optimization, optimal load flow, and culminates in modern heuristic optimization techniques and evolutionary programming. In the deregulated environment, the economic provision of electrical power to consumers requires knowledge of maintaining a certain power quality and load flow. Many case studies and practical examples are included to emphasize real-world applications. The problems at the end of each chapter can be solved by hand calculations without having to use computer software. The appendices are devoted to calculations of line and cable constants, and solutions to the problems are included throughout the book.

  8. A hybrid flow shop model for an ice cream production scheduling problem

    Directory of Open Access Journals (Sweden)

    Imma Ribas Vila

    2009-07-01

    Full Text Available In this paper we address a scheduling problem that comes from an ice cream manufacturing company. The production system can be modelled as a three-stage no-wait hybrid flow shop with batch-dependent setup costs. To contribute to reducing the gap between theory and practice we have considered the real constraints and the criteria used by planners. The problem considered has been formulated as a mixed integer program. Further, two competitive heuristic procedures have been developed, and one of them is proposed for scheduling in the ice cream factory.

  9. Current opinion about maximum entropy methods in Moessbauer spectroscopy

    International Nuclear Information System (INIS)

    Szymanski, K

    2009-01-01

    Current opinion about Maximum Entropy Methods in Moessbauer Spectroscopy is presented. The most important advantage offered by the method is correct data processing under circumstances of incomplete information. The disadvantage is the sophisticated algorithm and its application to specific problems.

  10. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  11. Exploiting Maximum Entropy method and ASTER data for assessing debris flow and debris slide susceptibility for the Giampilieri catchment (north-eastern Sicily, Italy).

    KAUST Repository

    Lombardo, Luigi; Bachofer, F.; Cama, M.; Mä rker, M.; Rotigliano, E.

    2016-01-01

    This study aims at evaluating the performance of the Maximum Entropy method in assessing landslide susceptibility, exploiting topographic and multispectral remote sensing predictors. We selected the catchment of the Giampilieri stream, which is located in the north-eastern sector of Sicily (southern Italy), as the test site. On 1/10/2009, a storm rainfall triggered in this area hundreds of debris flow/avalanche phenomena causing extensive economic damage and loss of life. Within this area a presence-only-based statistical method was applied to obtain susceptibility models capable of distinguishing future activation sites of debris flows and debris slides, which were the main source failure mechanisms for flow- or avalanche-type propagation. The set of predictors used in this experiment comprised primary and secondary topographic attributes, derived by processing a high resolution digital elevation model, CORINE land cover data and a set of vegetation and mineral indices obtained by processing multispectral ASTER images. All the selected data sources are dated before the disaster. A spatially random partition technique was adopted for validation, generating fifty replicates for each of the two considered movement typologies in order to assess the accuracy, precision and reliability of the models. The debris slide and debris flow susceptibility models produced high performances, with the first type being the best fitted. The evaluation of the probability estimates around the mean value for each mapped pixel shows an inverted relation, with the most robust models corresponding to the debris flows. With respect to the role of each predictor within the modelling phase, debris flows appeared to be primarily controlled by topographic attributes whilst the debris slides were better explained by remotely sensed derived indices, particularly by the occurrence of previous wildfires across the slope. The overall excellent performances of the two models suggest promising perspectives for

  12. Exploiting Maximum Entropy method and ASTER data for assessing debris flow and debris slide susceptibility for the Giampilieri catchment (north-eastern Sicily, Italy).

    KAUST Repository

    Lombardo, Luigi

    2016-07-18

    This study aims at evaluating the performance of the Maximum Entropy method in assessing landslide susceptibility, exploiting topographic and multispectral remote sensing predictors. We selected the catchment of the Giampilieri stream, which is located in the north-eastern sector of Sicily (southern Italy), as the test site. On 1/10/2009, a storm rainfall triggered in this area hundreds of debris flow/avalanche phenomena causing extensive economic damage and loss of life. Within this area a presence-only-based statistical method was applied to obtain susceptibility models capable of distinguishing future activation sites of debris flows and debris slides, which were the main source failure mechanisms for flow- or avalanche-type propagation. The set of predictors used in this experiment comprised primary and secondary topographic attributes, derived by processing a high resolution digital elevation model, CORINE land cover data and a set of vegetation and mineral indices obtained by processing multispectral ASTER images. All the selected data sources are dated before the disaster. A spatially random partition technique was adopted for validation, generating fifty replicates for each of the two considered movement typologies in order to assess the accuracy, precision and reliability of the models. The debris slide and debris flow susceptibility models produced high performances, with the first type being the best fitted. The evaluation of the probability estimates around the mean value for each mapped pixel shows an inverted relation, with the most robust models corresponding to the debris flows. With respect to the role of each predictor within the modelling phase, debris flows appeared to be primarily controlled by topographic attributes whilst the debris slides were better explained by remotely sensed derived indices, particularly by the occurrence of previous wildfires across the slope. The overall excellent performances of the two models suggest promising perspectives for

  13. NACHOS: a finite element computer program for incompressible flow problems. Part I. Theoretical background

    International Nuclear Information System (INIS)

    Gartling, D.K.

    1978-04-01

    The theoretical background for the finite element computer program, NACHOS, is presented in detail. The NACHOS code is designed for the two-dimensional analysis of viscous incompressible fluid flows, including the effects of heat transfer. A general description of the fluid/thermal boundary value problems treated by the program is given. The finite element method and the associated numerical methods used in the NACHOS code are also presented. Instructions for use of the program are documented in SAND77-1334.

  14. Genetic Algorithm for Solving Location Problem in a Supply Chain Network with Inbound and Outbound Product Flows

    Directory of Open Access Journals (Sweden)

    Suprayogi Suprayogi

    2016-12-01

    Full Text Available This paper considers a location problem in a supply chain network. The problem addressed in this paper is motivated by an initiative to develop an efficient supply chain network for supporting agricultural activities. The supply chain network consists of regions, warehouses, distribution centers, plants, and markets. The products include a set of inbound products and a set of outbound products. In this paper, the inbound and outbound products are defined from the region's point of view. An inbound product is a product demanded by regions and produced by plants, and it flows along the following sequence of entities: plants, distribution centers, warehouses, and regions. An outbound product is a product demanded by markets and produced by regions, and it flows along the following sequence of entities: regions, warehouses, and markets. The problem deals with determining the locations of the warehouses and the distribution centers to be opened and the shipment quantities associated with all links on the network so as to minimize the total cost. The problem can be considered a strategic supply chain network problem. A solution approach based on a genetic algorithm (GA) is proposed. The proposed GA is examined using hypothetical instances and its results are compared to the solution obtained by solving the mixed integer linear programming (MILP) model. The comparison shows that there is a small gap (0.23%, on average) between the proposed GA and the MILP model in terms of the total cost. The proposed GA consistently provides solutions with the least total cost. Based on the experiment, it is also demonstrated that the coefficients of variation of the total cost are close to 0.
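
    To make the genetic-algorithm idea concrete, the Python sketch below evolves a binary chromosome that encodes which warehouses to open, with a fitness equal to the opening cost plus the cost of serving each region from its cheapest open warehouse. All numbers are made up and the model is far simpler than the paper's multi-echelon network with inbound and outbound flows; it only illustrates the encoding, crossover, mutation and selection loop.

        import random
        random.seed(0)

        N_WH, N_REGIONS = 6, 12
        open_cost = [random.uniform(50, 100) for _ in range(N_WH)]
        serve_cost = [[random.uniform(5, 40) for _ in range(N_WH)] for _ in range(N_REGIONS)]

        def fitness(chrom):
            if not any(chrom):                      # no warehouse open: infeasible
                return float("inf")
            total = sum(c for c, o in zip(open_cost, chrom) if o)
            for r in range(N_REGIONS):              # each region uses its cheapest open warehouse
                total += min(serve_cost[r][w] for w in range(N_WH) if chrom[w])
            return total

        def evolve(pop_size=30, generations=100, p_mut=0.1):
            pop = [[random.randint(0, 1) for _ in range(N_WH)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness)
                parents = pop[: pop_size // 2]      # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, N_WH)                      # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
                    children.append(child)
                pop = parents + children
            return min(pop, key=fitness)

        best = evolve()
        print("open warehouses:", best, "total cost:", round(fitness(best), 2))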

  15. Spike Code Flow in Cultured Neuronal Networks.

    Science.gov (United States)

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei

    2016-01-01

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the code flow of "1101" and "1011," which are typical pseudorandom sequences such as those we often encountered in the literature and in our experiments. They seemed to flow from one electrode to the neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes, to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of code. Furthermore, if the spike trains were shuffled in interval orders or in electrodes, they became significantly small. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as evaluating information flow in the neuronal network.
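
    The "normalized maximum cross-correlation" measure can be sketched in a few lines of Python: given two code-occurrence time series, slide one against the other and take the peak of the normalized correlation over a window of lags. The toy data below (a delayed, noisy copy of a random occurrence train) are made up, and none of the study's code extraction, electrode geometry or shuffling controls is reproduced.

        import numpy as np

        def max_norm_xcorr(x, y, max_lag=20):
            """Peak of the normalized cross-correlation of x and y over lags in [-max_lag, max_lag]."""
            x = (x - x.mean()) / (x.std() + 1e-12)
            y = (y - y.mean()) / (y.std() + 1e-12)
            best_val, best_lag = -np.inf, 0
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    v = np.dot(x[:len(x) - lag], y[lag:]) / (len(x) - lag)
                else:
                    v = np.dot(x[-lag:], y[:len(y) + lag]) / (len(x) + lag)
                if v > best_val:
                    best_val, best_lag = v, lag
            return best_val, best_lag

        rng = np.random.default_rng(0)
        a = (rng.random(1000) < 0.05).astype(float)     # code occurrences on electrode A
        b = np.roll(a, 7) * (rng.random(1000) < 0.8)    # delayed, noisy copy on electrode B
        print(max_norm_xcorr(a, b))                     # the peak lag should be close to +7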

  16. Applied multiphase flow in pipes and flow assurance oil and gas production

    CERN Document Server

    Al-Safran, Eissa M

    2017-01-01

    Applied Multiphase Flow in Pipes and Flow Assurance - Oil and Gas Production delivers the most recent advancements in multiphase flow technology while remaining easy to read and appropriate for undergraduate and graduate petroleum engineering students. Responding to the need for a more up-to-the-minute resource, this highly anticipated new book represents applications on the fundamentals with new material on heat transfer in production systems, flow assurance, transient multiphase flow in pipes and the TUFFP unified model. The complex computation procedure of mechanistic models is simplified through solution flowcharts and several example problems. Containing over 50 solved example problems and 140 homework problems, this new book will equip engineers with the skills necessary to use the latest steady-state simulators available.

  17. Maximum Entropy and Probability Kinematics Constrained by Conditionals

    Directory of Open Access Journals (Sweden)

    Stefan Lukits

    2015-03-01

    Full Text Available Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey's updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.

  18. Buoyancy-driven flow excursions in fuel assemblies

    International Nuclear Information System (INIS)

    Laurinat, J.E.; Paul, P.K.; Menna, J.D.

    1995-01-01

    A power limit criterion was developed for a postulated Loss of Pumping Accident (LOPA) in one of the recently shut down heavy water production reactors at the Savannah River Site. These reactors were cooled by recirculating heavy water moderator downward through channels in cylindrical fuel tubes. Powers were limited to safeguard against a flow excursion in one or more of these parallel channels. During full-power operation, limits safeguarded against a boiling flow excursion. At low flow rates, during the addition of emergency cooling water, buoyant forces reverse the flow in one of the coolant channels before boiling occurs. As power increases beyond the point of flow reversal, the maximum wall temperature approaches the fluid saturation temperature, and a thermal excursion occurs. The power limit criterion for low flow rates was the onset of flow reversal. To determine conditions for flow reversal, tests were performed in a mock-up of a fuel assembly that contained two electrically heated concentric tubes surrounded by three flow channels. These tests were modeled using a finite difference thermal-hydraulic code. According to code calculations, flow reversed in the outer flow channel before the maximum wall temperature reached the local fluid saturation temperature. Thermal excursions occurred when the maximum wall temperature approximately equaled the saturation temperature. For a postulated LOPA, the flow reversal criterion for emergency cooling water addition was more limiting than the boiling excursion criterion for full power operation. This criterion limited powers to 37% of the limiting power for previous long-term reactor operations.

  19. Buoyancy-driven flow excursions in fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Laurinat, J.E.; Paul, P.K.; Menna, J.D. [Westinghouse Savannah River Company, Aiken, SC (United States)

    1995-09-01

    A power limit criterion was developed for a postulated Loss of Pumping Accident (LOPA) in one of the recently shut down heavy water production reactors at the Savannah River Site. These reactors were cooled by recirculating heavy water moderator downward through channels in cylindrical fuel tubes. Powers were limited to safeguard against a flow excursion in one or more of these parallel channels. During full-power operation, limits safeguarded against a boiling flow excursion. At low flow rates, during the addition of emergency cooling water, buoyant forces reverse the flow in one of the coolant channels before boiling occurs. As power increases beyond the point of flow reversal, the maximum wall temperature approaches the fluid saturation temperature, and a thermal excursion occurs. The power limit criterion for low flow rates was the onset of flow reversal. To determine conditions for flow reversal, tests were performed in a mock-up of a fuel assembly that contained two electrically heated concentric tubes surrounded by three flow channels. These tests were modeled using a finite difference thermal-hydraulic code. According to code calculations, flow reversed in the outer flow channel before the maximum wall temperature reached the local fluid saturation temperature. Thermal excursions occurred when the maximum wall temperature approximately equaled the saturation temperature. For a postulated LOPA, the flow reversal criterion for emergency cooling water addition was more limiting than the boiling excursion criterion for full power operation. This criterion limited powers to 37% of the limiting power for previous long-term reactor operations.

  20. On the solution of the differential equation occurring in the problem of heat convection in laminar flow through a tube with slip-flow

    Directory of Open Access Journals (Sweden)

    Xanming Wang

    1996-01-01

    Full Text Available A technique is developed for the evaluation of eigenvalues in the solution of the differential equation $d^2y/dr^2 + (1/r)\,dy/dr + \lambda^2(\beta - r^2)y = 0$, which occurs in the problem of heat convection in laminar flow through a circular tube with slip-flow ($\beta > 1$). A series solution requires the expansion of coefficients involving extremely large numbers. No work has been reported for the case $\beta > 1$ because of the computational complexity of evaluating the eigenvalues. In this paper, a matrix was constructed and a computational algorithm was obtained to calculate the first four eigenvalues. Also, an asymptotic formula was developed to generate the full spectrum of eigenvalues. Computational results for various values of $\beta$ were obtained.
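
    A simple way to approximate the first eigenvalues numerically is a shooting method: integrate the equation from the axis to the wall and adjust $\lambda$ until the wall condition is satisfied. The Python sketch below assumes Graetz-type conditions $y'(0) = 0$ and $y(1) = 0$ and an illustrative value $\beta = 1.5$; the paper's own matrix/series construction and its exact boundary conditions are not reproduced.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        BETA = 1.5                                   # illustrative value with beta > 1

        def wall_value(lam):
            """Integrate from near the axis to the wall; return y(1) for this lambda."""
            def rhs(r, z):
                y, dy = z
                return [dy, -dy / r - lam**2 * (BETA - r**2) * y]
            sol = solve_ivp(rhs, (1e-6, 1.0), [1.0, 0.0], rtol=1e-9, atol=1e-12)
            return sol.y[0, -1]

        # Bracket sign changes of y(1) on a lambda grid, then refine each root with brentq.
        lams = np.linspace(0.1, 20.0, 400)
        vals = [wall_value(lam) for lam in lams]
        eigenvalues = [brentq(wall_value, lams[i], lams[i + 1])
                       for i in range(len(lams) - 1) if vals[i] * vals[i + 1] < 0]
        print("first eigenvalues:", [round(e, 4) for e in eigenvalues[:4]])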

  1. A new cut-based algorithm for the multi-state flow network reliability problem

    International Nuclear Information System (INIS)

    Yeh, Wei-Chang; Bae, Changseok; Huang, Chia-Ling

    2015-01-01

    Many real-world systems can be modeled as multi-state network systems in which reliability can be derived in terms of the lower bound points of level d, called d-minimal cuts (d-MCs). This study proposes a new method to find and verify the obtained d-MCs, using simple and useful properties established here, for the multi-state flow network reliability problem. The proposed algorithm runs in O(mσp) time, which represents a significant improvement over the previous O(mp²σ) time bound based on max-flow/min-cut, where p, σ and m denote the number of MCs, d-MC candidates and edges, respectively. The proposed algorithm also overcomes the weakness of some existing methods, which fail to remove duplicate d-MCs in special cases. A step-by-step example is given to demonstrate how the proposed algorithm locates and verifies all d-MC candidates. As evidence of the utility of the proposed approach, we present extensive computational results on 20 benchmark networks in another example. The computational results compare favorably with a previously developed algorithm in the literature. - Highlights: • A new method is proposed to find all d-MCs for the multi-state flow networks. • The proposed method can prevent the generation of d-MC duplicates. • The proposed method is simpler and more efficient than the best-known algorithms
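
    The basic building block in verifying a d-MC candidate is a max-flow evaluation of the network for a given state vector. The Python sketch below shows only that check on a made-up five-arc network, using the networkx maximum_flow routine; the record's own O(mσp) finding-and-verification algorithm and its duplicate-removal properties are not reproduced.

        import networkx as nx

        def max_flow_for_state(edges, state, source, sink):
            """edges: list of (u, v); state: dict (u, v) -> current capacity of that arc."""
            G = nx.DiGraph()
            for (u, v) in edges:
                G.add_edge(u, v, capacity=state[(u, v)])
            value, _ = nx.maximum_flow(G, source, sink)
            return value

        # Toy multi-state network: a candidate state vector assigns a capacity to each arc.
        edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("a", "b")]
        state = {("s", "a"): 2, ("s", "b"): 1, ("a", "t"): 1, ("b", "t"): 2, ("a", "b"): 1}
        print(max_flow_for_state(edges, state, "s", "t"))   # max flow of this toy state (3)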

  2. Bayesian interpretation of Generalized empirical likelihood by maximum entropy

    OpenAIRE

    Rochet , Paul

    2011-01-01

    We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...

  3. A Locally Conservative Eulerian-Lagrangian Method for a Model Two-Phase Flow Problem in a One-Dimensional Porous Medium

    KAUST Repository

    Arbogast, Todd

    2012-01-01

    Motivated by possible generalizations to more complex multiphase multicomponent systems in higher dimensions, we develop an Eulerian-Lagrangian numerical approximation for a system of two conservation laws in one space dimension modeling a simplified two-phase flow problem in a porous medium. The method is based on following tracelines, so it is stable independent of any CFL constraint. The main difficulty is that it is not possible to follow individual tracelines independently. We approximate tracing along the tracelines by using local mass conservation principles and self-consistency. The two-phase flow problem is governed by a system of equations representing mass conservation of each phase, so there are two local mass conservation principles. Our numerical method respects both of these conservation principles over the computational mesh (i.e., locally), and so is a fully conservative traceline method. We present numerical results that demonstrate the ability of the method to handle problems with shocks and rarefactions, and to do so with very coarse spatial grids and time steps larger than the CFL limit. © 2012 Society for Industrial and Applied Mathematics.

  4. Investigation of problems of closing of geophysical cracks in thermoelastic media in the case of flow of fluids with impurities

    Science.gov (United States)

    Martirosyan, A. N.; Davtyan, A. V.; Dinunts, A. S.; Martirosyan, H. A.

    2018-04-01

    The purpose of this article is to investigate a problem of closing cracks by building up a layer of sediments on surfaces of a crack in an infinite thermoelastic medium in the presence of a flow of fluids with impurities. The statement of the problem of closing geophysical cracks in the presence of a fluid flow is presented with regard to the thermoelastic stress and the influence of the impurity deposition in the liquid on the crack surfaces due to thermal diffusion at the fracture closure. The Wiener–Hopf method yields an analytical solution in the special case without friction. Numerical calculations are performed in this case and the dependence of the crack closure time on the coordinate is plotted. A similar spatial problem is also solved. These results generalize the results of previous studies of geophysical cracks and debris in rocks, where the closure of a crack due to temperature effects is studied without taking the elastic stresses into account.

  5. MONOTONIC DERIVATIVE CORRECTION FOR CALCULATION OF SUPERSONIC FLOWS WITH SHOCK WAVES

    Directory of Open Access Journals (Sweden)

    P. V. Bulat

    2015-07-01

    Full Text Available Subject of Research. Numerical solution methods for gas dynamics problems based on exact and approximate solutions of the Riemann problem are considered. We have developed an approach to the solution of the Euler equations describing flows of inviscid compressible gas, based on the finite volume method and finite difference schemes of various orders of accuracy. The Godunov, Kolgan, Roe, Harten and Chakravarthy-Osher schemes are used in the calculations (the order of accuracy of the finite difference schemes varies from 1st to 3rd). Comparison of the accuracy and efficiency of the various finite difference schemes is demonstrated on the example of inviscid compressible gas flow in a Laval nozzle, both for continuous acceleration of the flow in the nozzle and for the case where a shock wave is present in the nozzle. Conclusions about the accuracy of the various finite difference schemes and the time required for the calculations are made. Main Results. A comparative analysis of difference schemes for the integration of the Euler equations has been carried out. These schemes are based on exact and approximate solutions of the problem of an arbitrary discontinuity breakdown. Calculation results show that monotonic derivative correction provides uniformity of the numerical solution in the neighbourhood of the breakdown. On the one hand, it prevents the formation of new points of extremum, providing the monotonicity property; on the other hand, it causes smoothing of existing minimums and maximums and a loss of accuracy. Practical Relevance. The developed numerical calculation method makes it possible to perform high accuracy calculations of flows with strong non-stationary shock and detonation waves. At the same time, there are no non-physical solution oscillations on the shock wave front.
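
    The "monotonic derivative correction" discussed above amounts to limiting the reconstructed slopes so that no new extrema appear; the textbook prototype of such a limiter is minmod. The Python sketch below shows minmod-limited slopes on a profile with a kink (made-up data); it is the generic limiter, not the specific correction used in the record, and it also exhibits the flattening of true extrema mentioned above.

        import numpy as np

        def minmod(a, b):
            """Keep the smaller-magnitude slope when both have the same sign, else zero."""
            return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

        def limited_slopes(u, dx):
            forward = (u[2:] - u[1:-1]) / dx          # forward difference per interior cell
            backward = (u[1:-1] - u[:-2]) / dx        # backward difference per interior cell
            return minmod(forward, backward)

        x = np.linspace(0.0, 1.0, 11)
        u = np.maximum(0.0, 1.0 - 2.0 * x)            # a ramp meeting a flat region (a kink)
        print(limited_slopes(u, x[1] - x[0]))         # slopes follow the ramp, drop to 0 at the kink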

  6. Modeling the Hybrid Flow Shop Scheduling Problem Followed by an Assembly Stage Considering Aging Effects and Preventive Maintenance Activities

    Directory of Open Access Journals (Sweden)

    Seyyed Mohammad Hassan Hosseini

    2016-05-01

    Full Text Available The scheduling problem for a hybrid flow shop (HFSP) followed by an assembly stage, considering aging effects and additional preventive maintenance activities, is studied in this paper. In this production system, a number of products of different kinds are produced. Each product is assembled from a set of several parts. The first stage is a hybrid flow shop that produces the parts. All machines can process all kinds of parts in this stage, but each machine can process only one part at a time. The second stage is a single assembly machine or a single assembly team of workers. The aim is to schedule the parts on the machines and the assembly sequence, and also to determine when the preventive maintenance activities are carried out, in order to minimize the completion time of all products (makespan). A mathematical model is presented and validated by solving a small-scale example. Since this problem has been proved strongly NP-hard, four heuristic algorithms based on Johnson's algorithm are proposed in order to solve medium- and large-scale instances. Numerical experiments are used to run the mathematical model and evaluate the performance of the proposed algorithms.
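
    Johnson's rule, the building block the heuristics above extend, sequences jobs for a two-machine flow shop: jobs whose first-machine time does not exceed their second-machine time go first in increasing order of that time, and the remaining jobs go last in decreasing order of their second-machine time. A minimal Python sketch with made-up processing times follows; aging effects, maintenance decisions and the assembly stage are not modelled.

        def johnson_sequence(jobs):
            """jobs: dict name -> (p1, p2) processing times on machines 1 and 2."""
            first = sorted((j for j, (p1, p2) in jobs.items() if p1 <= p2),
                           key=lambda j: jobs[j][0])
            last = sorted((j for j, (p1, p2) in jobs.items() if p1 > p2),
                          key=lambda j: jobs[j][1], reverse=True)
            return first + last

        def makespan(sequence, jobs):
            t1 = t2 = 0
            for j in sequence:
                p1, p2 = jobs[j]
                t1 += p1                       # machine 1 finishes job j
                t2 = max(t2, t1) + p2          # machine 2 starts once both are ready
            return t2

        jobs = {"A": (3, 6), "B": (5, 2), "C": (1, 2), "D": (6, 6), "E": (7, 5)}
        seq = johnson_sequence(jobs)
        print(seq, makespan(seq, jobs))        # optimal order and its makespan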

  7. The use of Trefftz functions for approximation of measurement data in an inverse problem of flow boiling in a minichannel

    Directory of Open Access Journals (Sweden)

    Hozejowski Leszek

    2012-04-01

    Full Text Available The paper is devoted to a computational problem of predicting a local heat transfer coefficient from experimental temperature data. The experimental part refers to boiling flow of a refrigerant in a minichannel. Heat is dissipated from the heating alloy to the flowing liquid due to forced convection. The mathematical model of the problem consists of the governing Poisson equation and the proper boundary conditions. Accurate results require smoothing of the measurements, which was achieved by using Trefftz functions: the measurements were approximated with a linear combination of Trefftz functions. Because the computational procedure takes the known measurement errors into account, it was possible to smooth the data and also to reduce the residuals of the approximation on the boundaries.
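
    The smoothing step can be sketched as an ordinary least-squares fit of the measurements by a linear combination of basis functions. In the Python sketch below the basis consists of a few two-dimensional harmonic polynomials (the Trefftz functions of the Laplace operator) and the data are synthetic; the paper uses Trefftz functions of its own governing equation and weights the fit by the known measurement errors, neither of which is reproduced here.

        import numpy as np

        def harmonic_basis(x, y):
            """A few 2-D harmonic polynomials: 1, x, y, x^2 - y^2, xy, Re(z^3), Im(z^3)."""
            return np.column_stack([np.ones_like(x), x, y,
                                    x**2 - y**2, x * y,
                                    x**3 - 3 * x * y**2, 3 * x**2 * y - y**3])

        rng = np.random.default_rng(0)
        x, y = rng.uniform(0, 1, 100), rng.uniform(0, 1, 100)
        t_true = 1.0 + 2.0 * x - 0.5 * (x**2 - y**2)          # a harmonic "temperature" field
        t_meas = t_true + rng.normal(0.0, 0.01, size=100)      # noisy measurements

        A = harmonic_basis(x, y)
        coeffs, *_ = np.linalg.lstsq(A, t_meas, rcond=None)    # least-squares fit
        t_smooth = A @ coeffs
        print("rms residual:", np.sqrt(np.mean((t_smooth - t_meas) ** 2)))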

  8. Coronary ligation reduces maximum sustained swimming speed in Chinook salmon, Oncorhynchus tshawytscha

    DEFF Research Database (Denmark)

    Farrell, A P; Steffensen, J F

    1987-01-01

    The maximum aerobic swimming speed of Chinook salmon (Oncorhynchus tshawytscha) was measured before and after ligation of the coronary artery. Coronary artery ligation prevented blood flow to the compact layer of the ventricular myocardium, which represents 30% of the ventricular mass, and produced...

  9. Numerical flow analysis of axial flow compressor for steady and unsteady flow cases

    Science.gov (United States)

    Prabhudev, B. M.; Satish kumar, S.; Rajanna, D.

    2017-07-01

    The performance of a jet engine depends on the performance of its compressor. This paper gives a numerical study of the performance characteristics of an axial compressor. The test rig is located at the CSIR laboratory, Bangalore. The flow domains are meshed and the fluid dynamic equations are solved using the ANSYS package. The analysis is done for six different speeds and for operating conditions such as choke, maximum efficiency and the point before stall. Different plots are compared and the results are discussed. Shock displacement, vortex flows and leakage patterns are presented along with an unsteady FFT plot and a time step plot.

  10. Low flow characteristics of river Notwane at Gaborone Dam ...

    African Journals Online (AJOL)

    ... dam has been undertaken using daily flow records between 1979 and 1999 to determine the magnitude of annual maximum deficit volumes and deficit durations at a threshold level equivalent to 75 % dependable flow. Statistical modeling of these annual maximum values, separately, using a PWM/L-moment procedure, ...

  11. Numerical simulation of flow field in the China advanced research reactor flow-guide tank

    International Nuclear Information System (INIS)

    Xu Changjiang

    2002-01-01

    The flow-guide tank in the China Advanced Research Reactor (CARR) acts as the reactor inlet coolant distributor and plays an important role in reducing the flow-induced vibration of the internal components of the reactor core. Numerical simulations of the flow field in the flow-guide tank under different conceptual design configurations are carried out using PHOENICS 3.2. It is seen that the inlet coolant is well distributed circumferentially into the flow-guide tank by the inlet buffer plate and the flow distributor barrel. The maximum cross-flow velocity within the flow-guide tank is reduced significantly, and a reduction of the flow-induced vibration of the reactor internals is expected.

  12. Solving groundwater flow problems by conjugate-gradient methods and the strongly implicit procedure

    Science.gov (United States)

    Hill, Mary C.

    1990-01-01

    The performance of the preconditioned conjugate-gradient method with three preconditioners is compared with the strongly implicit procedure (SIP) using a scalar computer. The preconditioners considered are the incomplete Cholesky (ICCG) and the modified incomplete Cholesky (MICCG), which require the same computer storage as SIP as programmed for a problem with a symmetric matrix, and a polynomial preconditioner (POLCG), which requires less computer storage than SIP. Although POLCG is usually used on vector computers, it is included here because of its small storage requirements. In this paper, published comparisons of the solvers are evaluated, all four solvers are compared for the first time, and new test cases are presented to provide a more complete basis by which the solvers can be judged for typical groundwater flow problems. Based on nine test cases, the following conclusions are reached: (1) SIP is actually as efficient as ICCG for some of the published, linear, two-dimensional test cases that were reportedly solved much more efficiently by ICCG; (2) SIP is more efficient than other published comparisons would indicate when common convergence criteria are used; and (3) for problems that are three-dimensional, nonlinear, or both, and for which common convergence criteria are used, SIP is often more efficient than ICCG, and is sometimes more efficient than MICCG.
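
    As an illustration of the preconditioned conjugate-gradient approach compared in the record, the Python sketch below solves a 2-D five-point Laplacian system (a stand-in for a groundwater flow matrix) with SciPy's cg. SciPy does not ship the ICCG/MICCG or SIP solvers discussed above, so an incomplete LU factorization (spilu) is used as an analogous preconditioner; the comparison itself is not reproduced.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 50                                             # 50 x 50 interior grid
        I = sp.identity(n)
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()        # 2-D five-point Laplacian
        b = np.ones(A.shape[0])                            # arbitrary right-hand side

        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10) # incomplete LU preconditioner
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)

        iterations = 0
        def count(xk):                                     # cg calls this once per iteration
            global iterations
            iterations += 1

        x, info = spla.cg(A, b, M=M, callback=count)
        print("info:", info, "iterations:", iterations,
              "residual:", np.linalg.norm(b - A @ x))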

  13. The Guderley problem revisited

    International Nuclear Information System (INIS)

    Ramsey, Scott D.; Kamm, James R.; Bolstad, John H.

    2009-01-01

    The self-similar converging-diverging shock wave problem introduced by Guderley in 1942 has been the source of numerous investigations since its publication. In this paper, we review the simplifications and group invariance properties that lead to a self-similar formulation of this problem from the compressible flow equations for a polytropic gas. The complete solution to the self-similar problem reduces to two coupled nonlinear eigenvalue problems: the eigenvalue of the first is the so-called similarity exponent for the converging flow, and that of the second is a trajectory multiplier for the diverging regime. We provide a clear exposition concerning the reflected shock configuration. Additionally, we introduce a new approximation for the similarity exponent, which we compare with other estimates and numerically computed values. Lastly, we use the Guderley problem as the basis of a quantitative verification analysis of a cell-centered, finite volume, Eulerian compressible flow algorithm.

  14. Maximum Likelihood Joint Tracking and Association in Strong Clutter

    Directory of Open Access Journals (Sweden)

    Leonid I. Perlovsky

    2013-01-01

    Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed for the case of strong clutter in radar data. By using an iterative procedure of the dynamic logic process "from vague-to-crisp" explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.

  15. The time constrained multi-commodity network flow problem and its application to liner shipping network design

    DEFF Research Database (Denmark)

    Karsten, Christian Vad; Pisinger, David; Røpke, Stefan

    2015-01-01

    -commodity network flow problem with transit time constraints, which put limits on the duration of the transit of the commodities through the network. It is shown that for the particular application it does not increase the solution time to include the transit time constraints and that including the transit time... is essential to offer customers a competitive product.

  16. A dual exterior point simplex type algorithm for the minimum cost network flow problem

    Directory of Open Access Journals (Sweden)

    Geranis George

    2009-01-01

    Full Text Available A new dual simplex type algorithm for the Minimum Cost Network Flow Problem (MCNFP) is presented. The proposed algorithm belongs to a special 'exterior-point simplex type' category. Similarly to the classical network dual simplex algorithm (NDSA), this algorithm starts with a dual feasible tree-solution and reduces the primal infeasibility, iteration by iteration. However, contrary to the NDSA, the new algorithm does not always maintain a dual feasible solution. Instead, the new algorithm might reach a basic point (tree-solution) outside the dual feasible area (an exterior point, i.e. a dual infeasible tree).
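
    For comparison with the dual exterior-point approach described above, a small MCNFP instance can be solved with the standard primal network simplex shipped with networkx, as in the Python sketch below. The graph, demands, capacities and costs are made up, and the record's own algorithm is not reproduced.

        import networkx as nx

        G = nx.DiGraph()
        # Node "demand": negative = supply, positive = demand (networkx convention).
        G.add_node("s", demand=-4)
        G.add_node("a", demand=0)
        G.add_node("b", demand=0)
        G.add_node("t", demand=4)
        G.add_edge("s", "a", capacity=3, weight=1)
        G.add_edge("s", "b", capacity=2, weight=4)
        G.add_edge("a", "b", capacity=2, weight=1)
        G.add_edge("a", "t", capacity=2, weight=6)
        G.add_edge("b", "t", capacity=4, weight=1)

        cost, flow = nx.network_simplex(G)     # primal network simplex baseline
        print("minimum cost:", cost)
        print("flow:", flow)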

  17. Potential hazard of the Neopuff T-piece resuscitator in the absence of flow limitation.

    LENUS (Irish Health Repository)

    Hawkes, C P

    2012-01-31

    OBJECTIVE: (1) To assess peak inspiratory pressure (PIP), positive end expiratory pressure (PEEP) and maximum pressure relief (P(max)) at different rates of gas flow, when the Neopuff had been set to function at 5 l/min. (2) To assess maximum PIP and PEEP at a flow rate of 10 l/min with a simulated air leak of 50%. DESIGN: 5 Neopuffs were set to a PIP of 20, PEEP of 5 and P(max) of 30 cm H(2)O at a gas flow of 5 l/min. PIP, PEEP and P(max) were recorded at flow rates of 10, 15 l/min and maximum flow. Maximum achievable pressures at 10 l/min gas flow, with a 50% air leak, were measured. RESULTS: At gas flow of 15 l/min, mean PEEP increased to 20 (95% CI 20 to 21), PIP to 28 (95% CI 28 to 29) and the P(max) to 40 cm H(2)O (95% CI 38 to 42). At maximum flow (85 l/min) a PEEP of 71 (95% CI 51 to 91) and PIP of 92 cm H(2)O (95% CI 69 to 115) were generated. At 10 l/min flow, with an air leak of 50%, the maximum PEEP and PIP were 21 (95% CI 19 to 23) and 69 cm H(2)O (95% CI 66 to 71). CONCLUSIONS: The maximum pressure relief valve is overridden by increasing the rate of gas flow and potentially harmful PIP and PEEP can be generated. Even in the presence of a 50% gas leak, more than adequate pressures can be provided at 10 l/min gas flow. We recommend the limitation of gas flow to a rate of 10 l/min as an added safety mechanism for this device.

  18. Stokes' second problem for magnetohydrodynamics flow in a Burgers' fluid: the cases γ = λ²/4 and γ>λ²/4.

    Directory of Open Access Journals (Sweden)

    Ilyas Khan

    Full Text Available The present work is concerned with exact solutions of Stokes' second problem for magnetohydrodynamic (MHD) flow of a Burgers' fluid. The fluid over a flat plate is assumed to be electrically conducting in the presence of a uniform magnetic field applied in the outward transverse direction to the flow. The equations governing the flow are modeled and then solved using the Laplace transform technique. The expressions for the velocity field and the tangential stress are developed when the relaxation time satisfies the condition γ = λ²/4 or γ > λ²/4. The obtained closed form solutions are presented in the form of simple or multiple integrals in terms of Bessel functions and terms with only Bessel functions. The numerical integration is performed and the graphical results are displayed for the involved flow parameters. It is found that the velocity decreases whereas the shear stress increases when the Hartmann number is increased. The solutions corresponding to Stokes' first problem for hydrodynamic Burgers' fluids are obtained as limiting cases of the present solutions. Similar solutions for Stokes' second problem for hydrodynamic Burgers' fluids and those for Newtonian and Oldroyd-B fluids can also be obtained as limiting cases of these solutions.

  19. Solution of Inverse Problems using Bayesian Approach with Application to Estimation of Material Parameters in Darcy Flow

    Czech Academy of Sciences Publication Activity Database

    Domesová, Simona; Beres, Michal

    2017-01-01

    Vol. 15, No. 2 (2017), pp. 258-266 ISSN 1336-1376 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords: Bayesian statistics * Cross-Entropy method * Darcy flow * Gaussian random field * inverse problem Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://advances.utc.sk/index.php/AEEE/article/view/2236

  20. Hearing Problems

    Science.gov (United States)

    Loss in the ability to hear or discriminate ... This flow chart will help direct you if hearing loss is a problem for you or a ...

  1. Spike Code Flow in Cultured Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Shinichi Tamura

    2016-01-01

    Full Text Available We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the code flow of “1101” and “1011,” which are typical pseudorandom sequences such as those we often encountered in the literature and in our experiments. They seemed to flow from one electrode to the neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the “maximum cross-correlations” among neighboring electrodes, to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of code. Furthermore, if the spike trains were shuffled in interval orders or in electrodes, they became significantly small. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as evaluating information flow in the neuronal network.

  2. Maximum power point tracker for photovoltaic power plants

    Science.gov (United States)

    Arcidiacono, V.; Corsi, S.; Lambri, L.

    The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.
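
    The record above compares two closed-loop tracking criteria without giving their details; a generic perturb-and-observe loop (a different, widely used MPPT criterion, not necessarily either of the two discussed) is enough to show the basic idea of climbing the voltage-power characteristic toward its maximum. The toy photovoltaic model and every number below are invented for illustration.

        import math

        def perturb_and_observe(measure_v_i, set_voltage, v0=30.0, dv=0.5, steps=200):
            """Generic perturb-and-observe MPPT sketch (not the paper's controllers).

            measure_v_i : callable returning (voltage, current) at the PV terminals
            set_voltage : callable commanding the operating voltage via the converter
            The reference voltage is nudged by dv; if the measured power increased we
            keep the direction, otherwise we reverse it, so the operating point climbs
            toward the maximum power point of the voltage-current characteristic.
            """
            v_ref, direction, p_prev = v0, +1, 0.0
            for _ in range(steps):
                set_voltage(v_ref)
                v, i = measure_v_i()
                p = v * i
                if p < p_prev:              # power dropped: we stepped past the peak
                    direction = -direction
                v_ref += direction * dv
                p_prev = p
            return v_ref

        # Toy PV model (illustrative): current collapses near the open-circuit voltage.
        def pv_current(v, i_sc=8.0, v_oc=40.0):
            return max(i_sc * (1.0 - math.exp((v - v_oc) / 2.0)), 0.0)

        state = {"v": 30.0}
        v_mpp = perturb_and_observe(lambda: (state["v"], pv_current(state["v"])),
                                    lambda v: state.update(v=v))
        print(v_mpp)   # settles close to the toy model's maximum power point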

  3. Flow conditions of fresh mortar and concrete in different pipes

    International Nuclear Information System (INIS)

    Jacobsen, Stefan; Haugan, Lars; Hammer, Tor Arne; Kalogiannidis, Evangelos

    2009-01-01

    The variation in fresh concrete flow rate over the pipe cross section was investigated on differently coloured and highly flowable concrete mixes flowing through pipes of different materials (rubber, steel, acryl). First, uncoloured (gray) concrete was poured through the pipe and the pipe blocked. Similar but coloured (black) concrete was then poured into the pipe filled with gray concrete, flowing after the gray concrete for a while before being blocked and hardened. The advance of the colouring along the pipe wall (showing boundary flow rate) was observed on the moulded concrete surface appearing after removing the pipe from the hardened concrete. The shapes of the interfaces between uncoloured and coloured concrete (showing variation of flow rate over the pipe cross section) were observed on sawn surfaces of concrete half cylinders cut along the length axes of the concrete-filled pipe. Flow profiles over the pipe cross section were clearly seen with maximum flow rates near the centre of the pipe and low flow rate at the pipe wall (typically rubber pipe with reference concrete without silica fume and/or stabilizers). More plug-shaped profiles, with long slip layers and less variation of flow rate over the cross section, were also seen (typically in smooth acrylic pipes). Flow rate, amount of concrete sticking to the wall after flow and SEM-images of pipe surface roughness were observed, illustrating the problem of testing full scale pumping.

  4. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.

  5. Results of investigation of magnetohydrodynamic flow round the magnetosphere

    International Nuclear Information System (INIS)

    Erkaev, N.V.

    1988-01-01

    A review of the main results of studies of the quasi-stationary magnetohydrodynamic flow of the solar wind around the Earth's magnetosphere is given. Principal attention is paid to the problem of calculating the magnetic and electric fields in the transition layer and at the magnetosphere boundary. An analysis of the kinematic approximation and of a linear diffusion model is conducted. The existence condition for the magnetic barrier region, where the kinematic approximation is inapplicable, is determined. The main properties of the solution - a decrease of the gas-kinetic pressure and an increase of the magnetic pressure up to a maximum - follow from the numerical integration of the magnetohydrodynamic equations within the magnetic barrier region. The problem of calculating the reconnection field against the magnetic barrier background is considered as the next step. It is shown that introducing the Petschek reconnection model into the general scheme of the solution allows one to obtain values of the electric and magnetic fields at the magnetosphere boundary that are compatible with experiment. Problems linked with the choice of the reconnection line direction and with the generalization of the Petschek condition to the case of reconnection of crossed fields are also considered.

  6. MOE-Analysis for Oversaturated Flow with Interrupted Facility and Heterogeneous Traffic for Urban Roads

    Directory of Open Access Journals (Sweden)

    Hemant Kumar Sharma

    2012-09-01

    Full Text Available Speed-flow functions have been developed by several transportation experts to predict accurately the speed of urban road networks. The HCM speed-flow curve, BPR curve, MTC speed-flow curve and Akçelik speed-flow curve are notable efforts to define the shape of speed-flow curves. However, the complexity of driver behaviour, interactions among different types of vehicles, lateral clearance, the correlation of driver psychology with vehicular characteristics, and the interdependence of various traffic variables have led to continuous development and refinement of speed-flow curves. The problem becomes more difficult in the case of urban roads with heterogeneous traffic, oversaturated flow and a signalized network (which includes some unsignalized intersections as well). This paper presents an analysis of various measures of effectiveness (MOE) for urban roads with interrupted flow comprising heterogeneous traffic. A model has been developed for heterogeneous traffic under constraints of roadway geometry, vehicle characteristics, driving behaviour and traffic controls. The model developed in this paper predicts speed, delay, average queue and maximum queue estimates for urban roads and quantifies congestion for oversaturated conditions. The investigation details the oversaturated portion of flow in particular.

  7. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity

    Science.gov (United States)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.

    2018-04-01

    Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select
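
    A minimal way to see the link between bound constraints and discrete maximum principles is to solve a small symmetric positive-definite diffusion system as a bound-constrained energy minimization, which for box constraints is equivalent to the variational inequality. The one-dimensional finite-difference problem below is an illustrative stand-in only; it is not the RT0/VMS discretizations, the pressure-dependent-viscosity model, or the parallel solver of the paper.

        import numpy as np
        from scipy.optimize import minimize

        # 1D heterogeneous diffusion -d/dx(k du/dx) = f on (0,1), u(0) = u(1) = 0,
        # discretized with finite differences; A is symmetric positive definite.
        n, h = 99, 1.0 / 100
        x = np.linspace(h, 1 - h, n)
        k = np.where(x < 0.5, 1.0, 100.0)                # heterogeneous coefficient
        kf = np.concatenate(([k[0]], 0.5 * (k[:-1] + k[1:]), [k[-1]]))   # face values
        A = (np.diag(kf[:-1] + kf[1:])
             - np.diag(kf[1:-1], 1) - np.diag(kf[1:-1], -1)) / h**2
        b = np.full(n, 1.0)

        # Plain linear solve (can violate the bounds for less benign discretizations).
        u_lin = np.linalg.solve(A, b)

        # Bound-constrained energy minimization: min 0.5 u'Au - b'u  s.t. 0 <= u <= 1.
        # For an SPD matrix this is equivalent to the box-constrained variational
        # inequality, so the computed field respects the discrete maximum principle.
        res = minimize(lambda u: 0.5 * u @ A @ u - b @ u,
                       np.zeros(n), jac=lambda u: A @ u - b,
                       method="L-BFGS-B", bounds=[(0.0, 1.0)] * n)
        print(u_lin.min(), u_lin.max(), res.x.min(), res.x.max())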

  8. Network flow model of force transmission in unbonded and bonded granular media.

    Science.gov (United States)

    Tordesillas, Antoinette; Tobin, Steven T; Cil, Mehmet; Alshibli, Khalid; Behringer, Robert P

    2015-06-01

    An established aspect of force transmission in quasistatic deformation of granular media is the existence of a dual network of strongly versus weakly loaded particles. Despite significant interest, the regulation of strong and weak forces through the contact network remains poorly understood. We examine this aspect of force transmission using data on microstructural fabric from: (I) three-dimensional discrete element models of grain agglomerates of bonded subspheres constructed from in situ synchrotron microtomography images of silica sand grains under unconfined compression and (II) two-dimensional assemblies of unbonded photoelastic circular disks submitted to biaxial compression under constant volume. We model force transmission as a network flow and solve the maximum flow-minimum cost (MFMC) problem, the solution to which yields a percolating subnetwork of contacts that transmits the "maximum flow" (i.e., the highest units of force) at "least cost" (i.e., the dissipated energy from such transmission). We find the MFMC describes a two-tier hierarchical architecture. At the local level, it encapsulates intraconnections between particles in individual force chains and in their conjoined 3-cycles, with the most common configuration having at least one force chain contact experiencing frustrated rotation. At the global level, the MFMC encapsulates interconnections between force chains. The MFMC can be used to predict most of the force chain particles without need for any information on contact forces, thereby suggesting the network flow framework may have potential broad utility in the modeling of force transmission in unbonded and bonded granular media.
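
    The maximum flow-minimum cost computation at the heart of this framework can be reproduced on a toy contact network with a standard graph library; the tiny graph, capacities and costs below are invented for illustration and bear no relation to the experimental data of the study.

        import networkx as nx

        # Toy directed "contact network": capacities bound the force a contact can
        # transmit, edge weights stand in for the cost (dissipation) of routing
        # force through that contact.
        G = nx.DiGraph()
        edges = [("s", "a", 4, 1), ("s", "b", 3, 2), ("a", "b", 2, 1),
                 ("a", "t", 2, 3), ("b", "t", 4, 1)]
        for u, v, cap, cost in edges:
            G.add_edge(u, v, capacity=cap, weight=cost)

        # Subnetwork transmitting the maximum flow from s to t at least total cost.
        flow = nx.max_flow_min_cost(G, "s", "t")
        print("max flow value:", sum(flow["s"].values()))
        print("cost of flow  :", nx.cost_of_flow(G, flow))
        print("loaded edges  :", [(u, v, f) for u, nbrs in flow.items()
                                  for v, f in nbrs.items() if f > 0])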

  9. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent from the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using the semi-automated methods, such as Euler deconvolution or depth-from-extreme-points method (DEXP). The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (N_max). N_max may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)_max/f_max ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  10. The maximum number of minimal codewords in long codes

    DEFF Research Database (Denmark)

    Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.

    2013-01-01

    Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981 by...

  11. Calculation of sample problems related to two-phase flow blowdown transients in pressure relief piping of a PWR pressurizer

    International Nuclear Information System (INIS)

    Shin, Y.W.; Wiedermann, A.H.

    1984-02-01

    A method was published, based on the integral method of characteristics, by which the junction and boundary conditions needed in the computation of a flow in a piping network can be accurately formulated. The method for formulating the junction and boundary conditions, together with the two-step Lax-Wendroff scheme, is used in a computer program; the program, in turn, is used here in calculating sample problems related to the blowdown transient of a two-phase flow in the piping network downstream of a PWR pressurizer. Independent, nearly exact analytical solutions also are obtained for the sample problems. Comparison of the results obtained by the hybrid numerical technique with the analytical solutions showed generally good agreement. The good numerical accuracy shown by the results of our scheme suggests that the hybrid numerical technique is suitable for both benchmark and design calculations of PWR pressurizer blowdown transients.
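
    The two-step Lax-Wendroff scheme mentioned above can be sketched for a single linear advection equation; the actual code couples it with characteristic-based junction and boundary conditions for the two-phase network, which are not reproduced in this illustration.

        import numpy as np

        def lax_wendroff_two_step(u, a, dx, dt, nsteps):
            """Richtmyer two-step Lax-Wendroff for u_t + a u_x = 0 on a periodic domain.

            Only the basic interior scheme is shown; the blowdown code additionally
            imposes junction and boundary conditions from the method of characteristics.
            """
            for _ in range(nsteps):
                f = a * u
                # Predictor: half-step values at the cell interfaces i+1/2.
                u_half = 0.5 * (u + np.roll(u, -1)) - dt / (2 * dx) * (np.roll(f, -1) - f)
                f_half = a * u_half
                # Corrector: full step using the interface fluxes.
                u = u - dt / dx * (f_half - np.roll(f_half, 1))
            return u

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u0 = np.exp(-200.0 * (x - 0.3) ** 2)
        u1 = lax_wendroff_two_step(u0.copy(), a=1.0, dx=x[1] - x[0],
                                   dt=0.4 * (x[1] - x[0]), nsteps=250)
        print(float(u1.max()))   # the pulse is advected with little amplitude loss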

  12. Developing Semi-Analytical solutions for Saint-Venant Equations in the Uniform Flow Region

    Directory of Open Access Journals (Sweden)

    M.M. Heidari

    2016-09-01

    Full Text Available Introduction: Unsteady flow in irrigation systems is the result of operations carried out in response to changes in water demand, and it affects the hydraulic performance of the networks. Improving hydraulic performance requires recognizing unsteady flow and quantifying the factors affecting it. Unsteady flow in open channels is governed by the fully dynamic Saint-Venant equations, which express the principles of conservation of mass and momentum. Unsteady flow in open channels can be classified into two types: routing and operation-type problems. In routing problems, the Saint-Venant equations are solved to obtain the discharge and water level time series. In operation problems, they are used to compute the inflow at the upstream section of the channel according to prescribed downstream flow hydrographs. The Saint-Venant equations have no general analytical solution, and in the majority of cases numerical integration of the continuity and momentum equations is used, characterized by complicated numerical procedures that are not always convenient for practical engineering calculations. Therefore, approximate methods deserve attention, since they allow the solution of dynamic problems in analytical form with sufficient accuracy. Control theory offers effective methods for automatic controller synthesis that provide the required performance optimization. It is therefore important to obtain simplified models of irrigation canals for control design. It would be even more interesting to have linear models that explicitly depend on physical parameters. Such models would allow one to handle the dynamics of the system with fewer parameters, understand the impact of physical parameters on the dynamics, and facilitate the development of a systematic design method. Many analytical models have been proposed in the literature. Most of them have been obtained in the frequency domain by applying the Laplace transform to linearized Saint

  13. Maximum entropy reconstructions for crystallographic imaging; Cristallographie et reconstruction d'images par maximum d'entropie

    Energy Technology Data Exchange (ETDEWEB)

    Papoular, R

    1997-07-01

    The Fourier Transform is of central importance to Crystallography since it allows the visualization in real space of three-dimensional scattering densities pertaining to physical systems from diffraction data (powder or single-crystal diffraction, using x-rays, neutrons, electrons or else). In turn, this visualization makes it possible to model and parametrize these systems, the crystal structures of which are eventually refined by Least-Squares techniques (e.g., the Rietveld method in the case of Powder Diffraction). The Maximum Entropy Method (sometimes called MEM or MaxEnt) is a general imaging technique, related to solving ill-conditioned inverse problems. It is ideally suited for tackling underdetermined systems of linear equations (for which the number of variables is much larger than the number of equations). It is already being applied successfully in Astronomy, Radioastronomy and Medical Imaging. The advantages of using Maximum Entropy over conventional Fourier and 'difference Fourier' syntheses stem from the following facts: MaxEnt takes the experimental error bars into account; MaxEnt incorporates Prior Knowledge (e.g., the positivity of the scattering density in some instances); MaxEnt allows density reconstructions from incompletely phased data, as well as from overlapping Bragg reflections; MaxEnt substantially reduces truncation errors to which conventional experimental Fourier reconstructions are usually prone. The principles of Maximum Entropy imaging as applied to Crystallography are first presented. The method is then illustrated by a detailed example specific to Neutron Diffraction: the search for protons in solids. (author). 17 refs.

  14. Analytical solution to the circularity problem in the discounted cash flow valuation framework

    Directory of Open Access Journals (Sweden)

    Felipe Mejía-Peláez

    2011-12-01

    Full Text Available In this paper we propose an analytical solution to the circularity problem between value and the cost of capital. Our solution is derived starting from a central principle of finance that relates value today to value, cash flow, and the discount rate for the next period. We present a general formulation without circularity for the equity value (E), the cost of levered equity (Ke), the levered firm value (V), and the weighted average cost of capital (WACC). We furthermore compare the results obtained from these formulas with the results of the application of the Adjusted Present Value approach (no circularity) and the iterative solution of the circularity based upon the iteration feature of a spreadsheet, concluding that all methods yield exactly the same answer. The advantage of this solution is that it avoids problems such as using manual methods (i.e., the popular “Rolling WACC”) ignoring the circularity issue, setting a target leverage (usually constant) with the inconsistencies that result from it, the wrong use of book values, or attributing the discrepancies in values to rounding errors.
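
    The circularity that the paper removes can be made concrete with the usual iterative workaround it is compared against: firm value depends on the WACC, while the WACC weights depend on the equity and debt shares of that same value, so one iterates to a fixed point. The cash flow, rates and debt level below are made-up inputs, the levered-equity formula is one common assumption, and this is the plain spreadsheet-style iteration, not the authors' closed-form solution.

        def firm_value_iterative(fcf, ku, kd, tax, debt, tol=1e-10, max_iter=1000):
            """Perpetuity illustration of the WACC circularity, solved by iteration.

            fcf  : expected free cash flow (level perpetuity)
            ku   : cost of unlevered equity, kd : cost of debt, tax : tax rate
            debt : market value of debt outstanding
            Cost of levered equity is assumed to follow Ke = Ku + (Ku - Kd) * D/E,
            a common choice when the tax shield is discounted at Ku.
            """
            v = fcf / ku                      # initial guess: unlevered value
            for _ in range(max_iter):
                e = v - debt
                ke = ku + (ku - kd) * debt / e
                wacc = ke * e / v + kd * (1.0 - tax) * debt / v
                v_new = fcf / wacc            # value of the level perpetuity
                if abs(v_new - v) < tol:
                    return v_new, ke, wacc
                v = v_new
            return v, ke, wacc

        print(firm_value_iterative(fcf=120.0, ku=0.12, kd=0.06, tax=0.30, debt=300.0))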

  15. Maximum run-up behavior of tsunamis under non-zero initial velocity condition

    Directory of Open Access Journals (Sweden)

    Baran AYDIN

    2018-03-01

    Full Text Available The tsunami run-up problem is solved non-linearly under the most general initial conditions, that is, for realistic initial waveforms such as N-waves, as well as standard initial waveforms such as solitary waves, in the presence of an initial velocity. An initial-boundary value problem governed by the non-linear shallow-water wave equations is solved analytically utilizing the classical separation of variables technique, which proved to be not only a fast but also an accurate analytical approach for this type of problem. The results provide important qualitative information on maximum tsunami run-up. We observed that, although the calculated maximum run-ups increase significantly, reaching as high as double those of the zero-velocity case, initial waves having non-zero fluid velocity exhibit the same run-up behavior as waves without initial velocity, for all wave types considered in this study.

  16. Transformation of Commercial Flows into Physical Flows of Electricity – Flow Based Method

    Directory of Open Access Journals (Sweden)

    M. Adamec

    2009-01-01

    Full Text Available We are witnesses of large-scale electricity transport between European countries under the umbrella of the UCTE organization. This is due to the inability of generators to satisfy the growing consumption in some regions. In this context, we distinguish between two types of flow. The first type is physical flow, which causes costs in the transmission grid, whilst the second type is commercial flow, which provides revenues for the market participants. The old methods for allocating transfer capacity fail to take this duality into account. Old methods that allocate transmission border capacity to “virtual” commercial flows which, in fact, will not flow over this border do not lead to optimal allocation. Some flows are uselessly rejected and, conversely, some accepted flows can cause congestion on another border. The Flow Based Allocation method (FBA) is a method which aims to solve this problem. Another goal of FBA is to ensure sustainable development of the expansion of transmission capacity. Transmission capacity is important because it represents a way to establish better transmission system stability, and it provides a distribution channel for electricity to customers abroad. For optimal development, it is necessary to ensure the right division of revenue allocation among the market participants. This paper contains a brief description of the FBA method. Problems of revenue maximization and optimal revenue distribution are mentioned.
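
    The idea of allocating capacity to physical rather than purely commercial flows can be illustrated with a toy linear program: accepted exchanges are weighted by power transfer distribution factors (PTDFs), and the resulting physical flows must respect line limits. The two exchanges, two monitored lines and all numbers below are invented and are far simpler than a real flow-based capacity-allocation domain.

        import numpy as np
        from scipy.optimize import linprog

        # Two candidate commercial exchanges (MW) and two monitored lines.
        # ptdf[l, e] = MW flowing on line l per MW of exchange e (illustrative values).
        ptdf = np.array([[0.6, 0.3],
                         [0.2, 0.7]])
        line_limit = np.array([400.0, 350.0])      # MW, valid for both flow directions
        bids = np.array([500.0, 600.0])            # requested exchange volumes

        # Maximize accepted volume subject to |PTDF @ x| <= line limits, 0 <= x <= bid.
        res = linprog(c=-np.ones(2),
                      A_ub=np.vstack([ptdf, -ptdf]),
                      b_ub=np.concatenate([line_limit, line_limit]),
                      bounds=list(zip(np.zeros(2), bids)))
        print("accepted exchanges (MW):", res.x)
        print("physical line flows (MW):", ptdf @ res.x)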

  17. Exact solutions to traffic density estimation problems involving the Lighthill-Whitham-Richards traffic flow model using mixed integer programming

    KAUST Repository

    Canepa, Edward S.; Claudel, Christian G.

    2012-01-01

    This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill-Whitham-Richards equation. We first present an equivalent formulation of the problem using a Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation result in linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time, given incomplete and inaccurate traffic data, as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during the Mobile Century experiment. © 2012 IEEE.

  19. Turbulent behaviour of non-cohesive sediment gravity flows at unexpectedly high flow density

    Science.gov (United States)

    Baker, Megan; Baas, Jaco H.; Malarkey, Jonathan; Kane, Ian

    2016-04-01

    Experimental lock exchange-type turbidity currents laden with non-cohesive silica-flour were found to be highly dynamic at remarkably high suspended sediment concentrations. These experiments were conducted to produce sediment gravity flows of volumetric concentrations ranging from 1% to 52%, to study how changes in suspended sediment concentration affect the head velocities and run-out distances of these flows, in natural seawater. Increasing the volumetric concentration of suspended silica-flour, C, up to C = 46%, within the flows led to a progressive increase in the maximum head velocity. This relationship suggests that suspended sediment concentration intensifies the density difference between the turbulent suspension and the ambient water, which drives the flow, even if almost half of the available space is occupied by sediment particles. However, from C = 46% to C = 52% a rapid reduction in the maximum head velocity was measured. It is inferred that at C = 46%, friction from grain-to-grain interactions begins to attenuate turbulence within the flows. At C > 46%, the frictional stresses become progressively more dominant over the turbulent forces and excess density, thus producing lower maximum head velocities. This grain interaction process started to rapidly reduce the run-out distance of the silica-flour flows at equally high concentrations of C ≥ 47%. All flows with C tank, but the head velocities gradually reduced along the tank. Bagnold (1954, 1963) estimated that, for sand flows, grain-to-grain interactions start to become important in modulating turbulence at C > 9%. Yet, the critical flow concentration at which turbulence modulation commenced for these silica-flour-laden flows appeared to be much higher. We suggest that Bagnold's 9% criterion cannot be applied to flows that carry fine-grained sediment, because turbulent forces are more important than dispersive forces, and frictional forces start to affect the flows only at concentrations just

  20. Maximum Principle for General Controlled Systems Driven by Fractional Brownian Motions

    International Nuclear Information System (INIS)

    Han Yuecai; Hu Yaozhong; Song Jian

    2013-01-01

    We obtain a maximum principle for stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H>1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves the Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle we need to develop some new results of stochastic analysis of the controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to classical system driven by standard Brownian motions while the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.

  1. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    Full Text Available This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢^0 law. This paper deals with amplitude data, so the 𝒢_A^0 distribution will be used. The literature reports that techniques for obtaining estimates of the parameters of the 𝒢_A^0 distribution (maximum likelihood, based on moments, and based on order statistics) require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternate optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.

  2. High-order multi-implicit spectral deferred correction methods for problems of reactive flow

    International Nuclear Information System (INIS)

    Bourlioux, Anne; Layton, Anita T.; Minion, Michael L.

    2003-01-01

    Models for reacting flow are typically based on advection-diffusion-reaction (A-D-R) partial differential equations. Many practical cases correspond to situations where the relevant time scales associated with each of the three sub-processes can be widely different, leading to disparate time-step requirements for robust and accurate time-integration. In particular, interesting regimes in combustion correspond to systems in which diffusion and reaction are much faster processes than advection. The numerical strategy introduced in this paper is a general procedure to account for this time-scale disparity. The proposed methods are high-order multi-implicit generalizations of spectral deferred correction methods (MISDC methods), constructed for the temporal integration of A-D-R equations. Spectral deferred correction methods compute a high-order approximation to the solution of a differential equation by using a simple, low-order numerical method to solve a series of correction equations, each of which increases the order of accuracy of the approximation. The key feature of MISDC methods is their flexibility in handling several sub-processes implicitly but independently, while avoiding the splitting errors present in traditional operator-splitting methods and also allowing for different time steps for each process. The stability, accuracy, and efficiency of MISDC methods are first analyzed using a linear model problem and the results are compared to semi-implicit spectral deferred correction methods. Furthermore, numerical tests on simplified reacting flows demonstrate the expected convergence rates for MISDC methods of orders three, four, and five. The gain in efficiency by independently controlling the sub-process time steps is illustrated for nonlinear problems, where reaction and diffusion are much stiffer than advection. Although the paper focuses on this specific time-scales ordering, the generalization to any ordering combination is straightforward

  3. On an Objective Basis for the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    David J. Miller

    2015-01-01

    Full Text Available In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME) principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has a stronger objective basis, rooted in results from information theory, than their proposed alternative solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.
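
    For readers who want to see the ME principle in action, the classic finite-alphabet case reduces to an exponential-family form whose single Lagrange multiplier can be found by a one-dimensional root solve; the die-with-known-mean example below is the standard textbook illustration, not an example taken from the letter itself.

        import numpy as np
        from scipy.optimize import brentq

        def maxent_distribution(values, target_mean):
            """Maximum-entropy pmf on `values` subject to a prescribed mean.

            The ME solution has the form p_i proportional to exp(lam * x_i); we solve
            for the multiplier lam that reproduces the target mean.
            """
            x = np.asarray(values, dtype=float)

            def tilted(lam):
                t = lam * x
                w = np.exp(t - t.max())        # shifted for numerical stability
                return w / w.sum()

            lam = brentq(lambda l: tilted(l) @ x - target_mean, -50.0, 50.0)
            return tilted(lam)

        # Brandeis dice example: faces 1..6 with a known average of 4.5.
        p = maxent_distribution(range(1, 7), 4.5)
        print(np.round(p, 4), float(p @ np.arange(1, 7)))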

  4. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  5. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history

    OpenAIRE

    Cherry, Joshua L.

    2017-01-01

    Background: Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Results: Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data....

  6. Inverse problems of geophysics

    International Nuclear Information System (INIS)

    Yanovskaya, T.B.

    2003-07-01

    This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least-squares fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given
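
    The pseudo-inverse solutions mentioned above can be illustrated on a small linearized problem: truncating the singular-value spectrum stabilizes the least-squares solution of an ill-conditioned kernel. The synthetic smoothing kernel and data below are purely illustrative.

        import numpy as np

        def truncated_svd_solve(G, d, rtol=1e-3):
            """Pseudo-inverse (truncated SVD) solution of the linear inverse problem G m = d.

            Singular values below rtol * s_max are discarded, which damps the model
            components that the data barely constrain.
            """
            U, s, Vt = np.linalg.svd(G, full_matrices=False)
            keep = s > rtol * s[0]
            return Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

        # Synthetic ill-conditioned kernel: smooth averaging rows constrain m weakly.
        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 1.0, 40)
        G = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)    # smoothing kernel
        m_true = np.sin(2.0 * np.pi * x)
        d = G @ m_true + rng.normal(0.0, 1e-3, x.size)          # noisy data

        m_est = truncated_svd_solve(G, d, rtol=1e-4)
        print(float(np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)))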

  7. Route optimisation and solving Zermelo’s navigation problem during long distance migration in cross flows

    DEFF Research Database (Denmark)

    Hays, Graeme C.; Christensen, Asbjørn; Fossette, Sabrina

    2014-01-01

    The optimum path to follow when subjected to cross flows was first considered over 80 years ago by the German mathematician Ernst Zermelo, in the context of a boat being displaced by ocean currents, and has become known as the ‘Zermelo navigation problem’. However, the ability of migrating animals...... to solve this problem has received limited consideration, even though wind and ocean currents cause the lateral displacement of flyers and swimmers, respectively, particularly during long-distance journeys of 1000s of kilometres. Here, we examine this problem by combining long-distance, open-ocean marine...... not follow the optimum (Zermelo's) route. Even though adult marine turtles regularly complete incredible long-distance migrations, these vertebrates primarily rely on course corrections when entering neritic waters during the final stages of migration. Our work introduces a new perspective in the analysis...
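
    The simplest instance of the navigation problem described here, a swimmer of fixed speed in a spatially uniform current, has a closed-form answer: aim so that the cross-track component of the current is cancelled. The sketch below works through that special case; the speeds and current are invented numbers, and time-varying or sheared flows require the full Zermelo treatment.

        import math

        def heading_for_straight_track(swim_speed, current):
            """Heading (radians, measured from the desired track direction) that cancels
            the cross-track component of a uniform current, plus the resulting ground speed.

            current = (u_along, u_cross) in the track-aligned frame.
            Returns None if the cross component of the current exceeds the swim speed.
            """
            u_along, u_cross = current
            if abs(u_cross) > swim_speed:
                return None                               # the track cannot be held
            theta = -math.asin(u_cross / swim_speed)      # aim into the cross flow
            ground_speed = swim_speed * math.cos(theta) + u_along
            return theta, ground_speed

        # A swimmer at 0.6 m/s in a 0.3 m/s cross current with a 0.1 m/s along-track push.
        theta, vg = heading_for_straight_track(0.6, (0.1, 0.3))
        print(math.degrees(theta), vg)   # about -30 degrees of offset, ~0.62 m/s made good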

  8. Finite time exergy analysis and multi-objective ecological optimization of a regenerative Brayton cycle considering the impact of flow rate variations

    International Nuclear Information System (INIS)

    Naserian, Mohammad Mahdi; Farahat, Said; Sarhaddi, Faramarz

    2015-01-01

    Highlights: • Defining a dimensionless parameter includes the finite-time and size concepts. • Inserting the concept of exergy of fluid streams into finite-time thermodynamics. • Defining, drawing and modifying of maximum ecological function curve. • Suggesting the appropriate performance zone, according to maximum ecological curve. - Abstract: In this study, the optimal performance of a regenerative Brayton cycle is sought through power and then ecological function maximization using finite-time thermodynamic concept and finite-size components. Multi-objective optimization is used for maximizing the ecological function. Optimizations are performed using genetic algorithm. In order to take into account the finite-time and finite-size concepts in current problem, a dimensionless mass-flow parameter is introduced deploying time variations. The variations of output power, total exergy destruction of the system, and decision variables for the optimum state (maximum ecological function state) are compared to the maximum power state using the dimensionless parameter. The modified ecological function in optimum state is obtained and plotted relating to the dimensionless mass-flow parameter. One can see that the modified ecological function study results in a better performance than that obtained with the maximum power state. Finally, the appropriate performance zone of the heat engine will be obtained

  9. Comparison of One-Dimensional Unsteady Flow and Steady Flow Modeling Results for Urban Flooding

    Directory of Open Access Journals (Sweden)

    Andreas Tigor Oktaga

    2016-06-01

    Full Text Available One-dimensional flow modeling is often used to simulate floods when planning river capacity. A flood is a type of unsteady, non-uniform flow that can be simulated using HEC-RAS. The HEC-RAS software is often used for flood modeling with a one-dimensional flow method. Unsteady flow modeling in HEC-RAS sometimes produces errors and warnings because the analysis becomes numerically unstable. The stability of the program is influenced, among other factors, by bends in the river, steep slopes of the river bed, and changes in cross-section shape. Because flood management requires the maximum discharge and the maximum flood water level, steady flow is often used as an alternative for simulating the flood flow. This study aimed to determine the advantages and disadvantages of modeling unsteady non-uniform and steady non-uniform flow. The research location is the Kanal Banjir Barat in Semarang City. The hydraulic modeling uses HEC-RAS 4.1, and the design discharge is obtained from HEC-HMS 3.5. The comparison of the hydraulic models shows that the steady non-uniform flow model tends to give higher water levels, while the unsteady non-uniform flow model takes longer to analyze. The difference in the maximum flood water levels is less than 15% (±0.3 meters): 0.27 meters (13.16%) for Q50, 0.25 meters (11.56%) for Q100, and 0.16 meters (4.73%) for Q200. The steady non-uniform flow model can therefore still be used as a companion to the unsteady non-uniform flow model.

  10. Application of a quadratic method of programming to a particular problem of a rational development of a waterflooded field

    Energy Technology Data Exchange (ETDEWEB)

    Korotkov, S F; Khalitov, N T

    1965-01-01

    The quadratic method of programming is used to solve the following type of problem. A circular reservoir is subjected to a peripheral waterflood. The reservoir is drained by wells arranged in 3 concentric circles. The objective is to control the operation of the producing wells so that a maximum quantity of water-free oil will be produced. The wells are flowed so that bottomhole pressure stays above the bubble point. A quadratic function is used to express the essential features of the problem; a system of linear equations is used to express the boundary conditions. The problem is solved by means of the Wolfe algorithm. The method is demonstrated by an illustrative example.
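
    A toy version of the quadratic programme described here can be written down directly: maximize a concave quadratic objective in the well rates subject to linear constraints, for example a total-rate balance and per-well limits. The coefficients below are invented for illustration, and the Wolfe algorithm itself is replaced by a general-purpose solver.

        import numpy as np
        from scipy.optimize import minimize

        # Three producing wells; the water-free oil rate is approximated by a concave
        # quadratic in the well rates q (interference makes marginal gains diminish).
        a = np.array([10.0, 8.0, 6.0])                  # linear productivity terms
        B = np.array([[2.0, 0.5, 0.2],                  # symmetric positive definite
                      [0.5, 1.5, 0.3],                  # interference matrix
                      [0.2, 0.3, 1.0]])

        def neg_oil(q):        # minimize the negative of the quadratic objective
            return -(a @ q - 0.5 * q @ B @ q)

        cons = [{"type": "ineq", "fun": lambda q: 12.0 - q.sum()}]   # total-rate balance
        res = minimize(neg_oil, x0=np.ones(3), method="SLSQP",
                       bounds=[(0.0, 6.0)] * 3, constraints=cons)
        print("optimal rates:", np.round(res.x, 3), " oil objective:", -res.fun)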

  11. Probable maximum flood on the Ha Ha River

    International Nuclear Information System (INIS)

    Damov, D.; Masse, B.

    1997-01-01

    Results of a probable maximum flood (PMF) study conducted for various locations along the Ha Ha river, a tributary of the Saguenay River, were discussed. The study was undertaken for use in the design and construction of new hydraulic structures for water supply for a pulp and paper facility, following the Saguenay Flood in July 1996. Many different flood scenarios were considered, including combinations of snow-melt with rainfall. Using computer simulations, it was shown that the largest flood flows were generated by summer-fall PMF. 5 refs., 12 figs

  12. Load flow analysis using decoupled fuzzy load flow under critical ...

    African Journals Online (AJOL)

  13. Heat transfer and fluid flow in regular rod arrays with opposing flow

    International Nuclear Information System (INIS)

    Yang, J.W.

    1979-01-01

    The heat transfer and fluid flow problem of opposing flow in the fully developed laminar region has been solved analytically for regular rod arrays. The problem is governed by two parameters: the pitch-to-diameter ratio and the Grashof-to-Reynolds number ratio. The critical Gr/Re ratios for flow separation caused by the upward buoyancy force on the downward flow were evaluated for a large range of P/D ratios of the triangular array. Numerical results reveal that both the heat transfer and pressure loss are reduced by the buoyancy force. Applications to nuclear reactors are discussed

  14. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    Science.gov (United States)

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving the likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the methods' variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  15. Generalized network improvement and packing problems

    CERN Document Server

    Holzhauser, Michael

    2016-01-01

    Michael Holzhauser discusses generalizations of well-known network flow and packing problems by additional or modified side constraints. By exploiting the inherent connection between the two problem classes, the author investigates the complexity and approximability of several novel network flow and packing problems and presents combinatorial solution and approximation algorithms. Contents: Fractional Packing and Parametric Search Frameworks; Budget-Constrained Minimum Cost Flows: The Continuous Case; Budget-Constrained Minimum Cost Flows: The Discrete Case; Generalized Processing Networks; Convex Generalized Flows. Target groups: researchers and students in the fields of mathematics, computer science, and economics; practitioners in operations research and logistics. The author: Dr. Michael Holzhauser studied computer science at the University of Kaiserslautern and is now a research fellow in the Optimization Research Group at the Department of Mathematics of the University of Kaiserslautern.

  16. Adjustable focus laser sheet module for generating constant maximum width sheets for use in optical flow diagnostics

    International Nuclear Information System (INIS)

    Hult, J; Mayer, S

    2011-01-01

    A general design of a laser light sheet module with adjustable focus is presented, where the maximum sheet width is preserved over a fixed region. In contrast, conventional focusing designs are associated with a variation in maximum sheet width with focal position. A four-lens design is proposed here, where the first three lenses are employed for focusing, and the last for sheet expansion. A maximum sheet width of 1100 µm was maintained over a 50 mm long distance, for focal distances ranging from 75 to 500 mm, when a 532 nm laser beam with a beam quality factor M² = 29 was used for illumination

  17. Hydromagnetic natural convection flow between vertical parallel plates with time-periodic boundary conditions

    International Nuclear Information System (INIS)

    Adesanya, S.O.; Oluwadare, E.O.; Falade, J.A.; Makinde, O.D.

    2015-01-01

    In this paper, the free convective flow of a magnetohydrodynamic fluid through a channel with time-periodic boundary conditions is investigated, taking the effects of Joule dissipation into consideration. Based on simplifying assumptions, the coupled governing equations are reduced to a set of nonlinear boundary value problems. Approximate solutions are obtained by using the semi-analytical Adomian decomposition method. The effects of pertinent parameters on the fluid velocity, temperature distribution, Nusselt number and skin friction are presented graphically and discussed. The result of the computation shows that an increase in the magnetic field intensity has a significant influence on the fluid flow. - Highlights: • The influence of a magnetic field on the free convective fluid flow is considered. • The coupled equations are solved by using the Adomian decomposition method. • The Adomian series solution agreed with previously obtained results. • The magnetic field decreases the velocity maximum but enhances the temperature field

  18. RAPVOID, H2O Flow and Steam Flow in Pipe System with Phase Equilibrium

    International Nuclear Information System (INIS)

    Porter, W.H.L.

    1980-01-01

    enthalpy. If the choked mass flow is lower, then the code iterates to obtain the converged mass flow at the most upstream critical choke position that it discovers. Having converged into this solution, it then examines the conditions downstream of this choke point for the derived mass flow. The method of determining the critical flow for a given total pressure and enthalpy is to discover the static pressure which gives the maximum flow (AEEW M1364). 3 - Restrictions on the complexity of the problem: A practical difficulty arises in running the code if the pipework is evenly divided into mesh lengths as the static pressure gradient tends to infinity as critical conditions are approached. The practical method of overcoming this difficulty is to discover the incremental length of pipe responsible for a selected pressure change and progressively deducting these incremental lengths until the other end of the pipe is reached
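
    The "static pressure which gives the maximum flow" criterion can be illustrated for a single-phase ideal gas, where maximizing the isentropic mass flux over the static-to-stagnation pressure ratio recovers the familiar critical ratio; RAPVOID applies the same search to two-phase water/steam properties, which are not modelled in this sketch, and the stagnation conditions below are arbitrary.

        import math
        from scipy.optimize import minimize_scalar

        def mass_flux(r, p0=7.0e6, t0=500.0, k=1.4, R=287.0):
            """Isentropic ideal-gas mass flux (kg/m^2 s) at the pressure ratio r = p/p0."""
            if not 0.0 < r < 1.0:
                return 0.0
            return p0 * math.sqrt(2.0 * k / ((k - 1.0) * R * t0)) * \
                math.sqrt(max(r ** (2.0 / k) - r ** ((k + 1.0) / k), 0.0))

        # Critical (choked) flow: the static pressure ratio that maximizes the mass flux.
        res = minimize_scalar(lambda r: -mass_flux(r),
                              bounds=(1e-6, 1.0 - 1e-6), method="bounded")
        print(res.x, (2.0 / (1.4 + 1.0)) ** (1.4 / 0.4))   # both approximately 0.528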

  19. MPSalsa a finite element computer program for reacting flow problems. Part 2 - user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H. [and others]

    1996-09-01

    This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.

  20. Flow through a very porous obstacle in a shallow channel.

    Science.gov (United States)

    Creed, M J; Draper, S; Nishino, T; Borthwick, A G L

    2017-04-01

    A theoretical model, informed by numerical simulations based on the shallow water equations, is developed to predict the flow passing through and around a uniform porous obstacle in a shallow channel, where background friction is important. This problem is relevant to a number of practical situations, including flow through aquatic vegetation, the performance of arrays of turbines in tidal channels and hydrodynamic forces on offshore structures. To demonstrate this relevance, the theoretical model is used to (i) reinterpret core flow velocities in existing laboratory-based data for an array of emergent cylinders in shallow water emulating aquatic vegetation and (ii) reassess the optimum arrangement of tidal turbines to generate power in a tidal channel. Comparison with laboratory-based data indicates a maximum obstacle resistance (or minimum porosity) for which the present theoretical model is valid. When the obstacle resistance is above this threshold the shallow water equations do not provide an adequate representation of the flow, and the theoretical model over-predicts the core flow passing through the obstacle. The second application of the model confirms that natural bed resistance increases the power extraction potential for a partial tidal fence in a shallow channel and alters the optimum arrangement of turbines within the fence.

  1. Study on solid-liquid two-phase unsteady flow characteristics with different flow rates in screw centrifugal pump

    International Nuclear Information System (INIS)

    Li, R N; Wang, H Y; Han, W; Shen, Z J; Ma, W

    2013-01-01

    The screw centrifugal pump is taken as the study object, and unsteady numerical simulations of solid-liquid two-phase flow are carried out for different flow rate conditions over one impeller revolution, with a sand-water two-phase mixture as the working medium, using the software FLUENT based on the URANS equations, combined with the sliding mesh method, the Mixture multiphase flow model and the SIMPLE algorithm. The results show that, as the flow rate increases, the trend of the pressure at the volute outlet remains almost unchanged, the fluctuation trend of the impeller axial force changes only slightly, and the pressure and the axial force decrease on the whole; within one rotation cycle, the radial force gradually increases while the maximum impeller radius passes through the half of the revolution near the volute outlet, and gradually decreases while it passes through the other half. The distribution of the solid particles on the face is very uneven under the small flow rate condition. On the back, the solid particles are distributed more evenly under the large flow rate condition than under the small flow rate condition. The results provide a theoretical basis and reference for improving the working performance of the pump

  2. Flow characteristics of centrifugal gas-liquid separator. Investigation with air-water two-phase flow experiment

    International Nuclear Information System (INIS)

    Yoneda, Kimitoshi; Inada, Fumio

    2004-01-01

    Air-water two-phase flow experiments were conducted to examine the basic flow characteristics of a centrifugal gas-liquid separator. A vertical transparent test section, 4 m in height, was used to imitate the scale of a BWR separator. The flow rates of gas and liquid were fixed at 0.1 m³/s and 0.033 m³/s, respectively. Radial distributions of two-phase flow characteristics, such as void fraction, gas velocity and bubble chord length, were measured by traversing dual optical void probes horizontally in the test section. The flow in the standpipe reached a quasi-developed state within a height-to-diameter aspect ratio H/D=10, which in turn can be regarded as the maximum value for an ideal standpipe height design. The liquid film in the barrel showed a maximum thickness at 0.5 to 1 m in height from the swirler exit, which was a common result for three different standpipe length conditions, qualitatively and quantitatively. The empirical database obtained in this study would contribute practically to the validation of numerical analyses for an actual separator in a plant, and would also be academically useful for further investigations of two-phase flow in large-diameter pipes. (author)

  3. Modeling on bubbly to churn flow pattern transition in narrow rectangular channel

    International Nuclear Information System (INIS)

    Wang Yanlin; Chen Bingde; Huang Yanping; Wang Junfeng

    2012-01-01

    A theoretical model based on some reasonable physical concepts was developed to predict the bubbly-to-churn flow pattern transition in a vertical narrow rectangular channel under flow boiling conditions. The maximum size of an ideal bubble in a narrow rectangular channel was calculated based on the previous literature. The thermal-hydraulic boundary condition for the bubbly-to-churn flow pattern transition was derived from the Helmholtz instability and the maximum ideal bubble size. The theoretical model was validated against existing experimental data. (authors)

  4. Active-Set Reduced-Space Methods with Nonlinear Elimination for Two-Phase Flow Problems in Porous Media

    KAUST Repository

    Yang, Haijian; Yang, Chao; Sun, Shuyu

    2016-07-26

    Fully implicit methods are drawing more attention in scientific and engineering applications due to the allowance of large time steps in extreme-scale simulations. When using a fully implicit method to solve two-phase flow problems in porous media, one major challenge is the solution of the resultant nonlinear system at each time step. To solve such nonlinear systems, traditional nonlinear iterative methods, such as the class of the Newton methods, often fail to achieve the desired convergence rate due to the high nonlinearity of the system and/or the violation of the boundedness requirement of the saturation. In this paper, we reformulate the two-phase model as a variational inequality that naturally ensures the physical feasibility of the saturation variable. The variational inequality is then solved by an active-set reduced-space method with a nonlinear elimination preconditioner to remove the highly nonlinear components that often cause the nonlinear iteration to fail to converge. To validate the effectiveness of the proposed method, we compare it with the classical implicit pressure-explicit saturation method for two-phase flow problems with strong heterogeneity. The numerical results show that our nonlinear solver overcomes the often severe limits on the time step associated with existing methods, results in superior convergence performance, and achieves a reduction in the total computing time by more than one order of magnitude.

  6. Maximum Runoff of the Flood on Wadis of Northern Part of Algeria ...

    African Journals Online (AJOL)

    The wadis of Algeria are characterized by a very irregular hydrological regime, so estimating their maximum flood flows is a relevant problem. We propose in this paper a method based on an interpretation of how surface runoff is transformed into streamflow. The technique accounts for the maximal flood runoff for the rivers ...

  7. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine

    2012-11-01

    The present study developed a maximum-power-point tracking (MPPT) control technology for a solar heating system to minimize the pumping power consumption while maintaining optimal heat collection. The net solar energy gain Q_net (= Q_s − W_p/η_e) was experimentally found to be a cost function with a maximum point, suitable for MPPT. A feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter derived from the thermal analytical model of the solar heating system was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step-response test and used in the design of the tracking feedback control system. The PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller, and the test results show good tracking performance with small tracking errors. The average mass flow rate for the test periods on five different days is between 18.1 and 22.9 kg/min, with average pumping power between 77 and 140 W; this is greatly reduced compared with the standard flow rate of 31 kg/min and pumping power of 450 W, which are based on the flow rate of 0.02 kg/(s·m²) defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW depending on weather conditions. MPPT control of the solar heating system has been verified to minimize the pumping energy consumption while achieving optimal solar heat collection. © 2012 Elsevier Ltd.
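
    As a rough illustration of the feedback tracking loop described above, the sketch below implements a generic PI update that nudges the pump flow rate toward a target net gain Q_max(t). The gains, flow limits, target profile, and plant response are hypothetical placeholders, not values or models from the study.

```python
def pi_mppt_step(q_target, q_measured, flow, integral, kp=0.5, ki=0.05, dt=1.0,
                 flow_min=5.0, flow_max=31.0):
    """One PI update of the pump flow rate (kg/min) toward the target Q_max(t) (kW)."""
    error = q_target - q_measured
    integral += error * dt
    flow += kp * error + ki * integral
    return max(flow_min, min(flow, flow_max)), integral  # respect pump limits

# Hypothetical closed-loop run with a crude linear stand-in for the collector.
flow, integral = 20.0, 0.0
for t in range(10):
    q_target = 10.0 + 0.1 * t        # placeholder Q_max(t) profile, kW
    q_measured = 0.45 * flow         # placeholder plant: Q_net grows with flow rate
    flow, integral = pi_mppt_step(q_target, q_measured, flow, integral)
```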

  8. Benchmark problems for repository siting models

    International Nuclear Information System (INIS)

    Ross, B.; Mercer, J.W.; Thomas, S.D.; Lester, B.H.

    1982-12-01

    This report describes benchmark problems for testing computer codes used in siting nuclear waste repositories. Analytical solutions, field problems, and hypothetical problems are included, covering the following types of codes: ground-water flow in saturated porous media, heat transport in saturated media, ground-water flow in saturated fractured media, heat and solute transport in saturated porous media, solute transport in saturated porous media, solute transport in saturated fractured media, and solute transport in unsaturated porous media.

  9. Some free boundary problems in the potential flow regime using a level set based method

    Energy Technology Data Exchange (ETDEWEB)

    Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.

    2008-12-09

    Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique to follow free boundaries which undergo topological changes. A challenging class of problems in this context is that related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case of potential flow models with moving boundaries. Moreover, the fluid front will possibly carry some material substance which diffuses on the front and is advected by the front velocity, as in the use of surfactants to lower surface tension. We present a Level Set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of fluid mechanics, we present in this work one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and we compare the level set based algorithm with previous front tracking models.
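
    To give a flavor of the Eulerian front representation, the sketch below advances a level set function one step with a first-order upwind scheme under a prescribed velocity field. It is a generic textbook-style update with periodic boundaries via np.roll, not the authors' wave-breaking scheme or their embedded surface PDE solver.

```python
import numpy as np

def level_set_step(phi, u, v, dx, dt):
    """One first-order upwind update of phi_t + u*phi_x + v*phi_y = 0 on a periodic grid."""
    phi_xm = (phi - np.roll(phi, 1, axis=0)) / dx    # backward difference in x
    phi_xp = (np.roll(phi, -1, axis=0) - phi) / dx   # forward difference in x
    phi_ym = (phi - np.roll(phi, 1, axis=1)) / dx
    phi_yp = (np.roll(phi, -1, axis=1) - phi) / dx
    phi_x = np.where(u > 0, phi_xm, phi_xp)          # upwind selection
    phi_y = np.where(v > 0, phi_ym, phi_yp)
    return phi - dt * (u * phi_x + v * phi_y)
```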

  10. Simulation of Thermal Flow Problems via a Hybrid Immersed Boundary-Lattice Boltzmann Method

    Directory of Open Access Journals (Sweden)

    J. Wu

    2012-01-01

    A hybrid immersed boundary-lattice Boltzmann method (IB-LBM) is presented in this work to simulate thermal flow problems. In the current approach, the flow field is resolved using our recently developed boundary-condition-enforced IB-LBM (Wu and Shu, 2009), in which the no-slip boundary condition on the solid boundary is enforced in the simulation. At the same time, to capture the temperature development, the conventional energy equation is solved. To model the effect of the immersed boundary on the temperature field, a heat source term is introduced. Different from previous studies, the heat source term is treated as an unknown rather than being predetermined. Inspired by the idea in Wu and Shu (2009), the unknown is calculated in such a way that the temperature at the boundary, interpolated from the corrected temperature field, accurately satisfies the thermal boundary condition. In addition, based on the resolved temperature correction, an efficient way to compute the local and average Nusselt numbers is also proposed in this work. Compared with the traditional implementation, no approximation of temperature gradients is required. To validate the present method, numerical simulations of forced convection are carried out. The obtained results show good agreement with data in the literature.

  11. Unification of field theory and maximum entropy methods for learning probability densities

    OpenAIRE

    Kinney, Justin B.

    2014-01-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...

  12. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under a model of multiple independent reader sessions with detection errors due to unreliable radio ... The performance is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted-ALOHA-based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error.
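
    A bare-bones version of ML cardinality estimation, ignoring detection errors and treating slots as independent (simplifications the paper does not make), can be sketched as a grid search over the tag count given the number of empty slots observed in one framed-slotted ALOHA read. The frame size and reading below are illustrative.

```python
import numpy as np

def ml_tag_count(frame_size, empty_slots, n_max=1000):
    """Grid-search ML estimate of the tag count from the observed empty-slot count."""
    n_values = np.arange(1, n_max + 1)
    p_empty = (1.0 - 1.0 / frame_size) ** n_values   # P(a given slot stays empty)
    loglik = (empty_slots * np.log(p_empty)
              + (frame_size - empty_slots) * np.log(1.0 - p_empty))
    return int(n_values[np.argmax(loglik)])

print(ml_tag_count(frame_size=128, empty_slots=40))  # illustrative single reading
```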

  13. Solar-cycle Variations of Meridional Flows in the Solar Convection Zone Using Helioseismic Methods

    Science.gov (United States)

    Lin, Chia-Hsien; Chou, Dean-Yi

    2018-06-01

    The solar meridional flow is an axisymmetric flow in solar meridional planes, extending through the convection zone. Here we study its solar-cycle variations in the convection zone using SOHO/MDI helioseismic data from 1996 to 2010, including two solar minima and one maximum. The travel-time difference between northward and southward acoustic waves is related to the meridional flow along the wave path. Applying the ray approximation and the SOLA inversion method to the travel-time difference measured in a previous study, we obtain the meridional flow distributions in 0.67 ≤ r ≤ 0.96 R⊙ at the minimum and maximum. At the minimum, the flow has a three-layer structure: poleward in the upper convection zone, equatorward in the middle convection zone, and poleward again in the lower convection zone. The flow speed is close to zero within the error bar near the base of the convection zone. The flow distribution changes significantly from the minimum to the maximum. The change above 0.9 R⊙ shows two phenomena: first, the poleward flow speed is reduced at the maximum; second, an additional convergent flow centered at the active latitudes is generated at the maximum. These two phenomena are consistent with the surface meridional flow reported in previous studies. The change in flow extends all the way down to the base of the convection zone, and the pattern of the change below 0.9 R⊙ is more complicated. However, it is clear that the active latitudes play a role in the flow change: the changes in flow speed below and above the active latitudes have opposite signs. This suggests that magnetic fields could be responsible for the flow change.

  14. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large-margin methods such as Support Vector Machines (SVMs) have been regarded as the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large-margin algorithms to unsupervised learning. One of the recently proposed algorithms is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as a semi-definite program (SDP), which is computationally very expensive and can only handle small data sets. Moreover, most of these algorithms address two-class classification only, and so cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the proposed algorithm gives acceptable results for hyperspectral data clustering.
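
    The alternating-optimization idea can be sketched as follows: repeatedly train a linear SVM on the current label assignment and then reassign labels from the decision function, splitting at the median as a crude balance constraint to avoid the trivial one-cluster solution. This is a generic two-class illustration on made-up data, not the multi-class algorithm evaluated in the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

def mmc_alternating(X, n_iter=10, random_state=0):
    """Toy maximum margin clustering by alternating SVM training and relabeling."""
    rng = np.random.default_rng(random_state)
    y = rng.integers(0, 2, size=len(X))                   # random initial labels
    for _ in range(n_iter):
        svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
        scores = svm.decision_function(X)
        y_new = (scores > np.median(scores)).astype(int)  # balanced relabeling
        if np.array_equal(y_new, y):
            break
        y = y_new
    return y

# Hypothetical two-cluster data.
X = np.vstack([np.random.randn(50, 2) - 2.0, np.random.randn(50, 2) + 2.0])
labels = mmc_alternating(X)
```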

  15. Practical flow cytometry

    National Research Council Canada - National Science Library

    Shapiro, Howard M

    2003-01-01

    Table of contents (excerpt): Conflict: Resolution; 1.3 Problem Number One: Finding the Cell(s); Flow Cytometry: Quick on the Trigger; The Main Event; The Pulse Quickens, the Plot Thickens; 1.4 Flow Cytometry: ...

  16. Redox flow batteries with serpentine flow fields: Distributions of electrolyte flow reactant penetration into the porous carbon electrodes and effects on performance

    Science.gov (United States)

    Ke, Xinyou; Prahl, Joseph M.; Alexander, J. Iwan D.; Savinell, Robert F.

    2018-04-01

    Redox flow batteries with flow field designs have been demonstrated to boost the capacity to deliver high current density and power density in medium- and large-scale energy storage applications. Nevertheless, the fundamental mechanisms behind the improved current density in flow batteries with serpentine flow field designs have not been fully understood. Here we report a three-dimensional model of a serpentine flow field over a porous carbon electrode to examine the distributions of pressure-driven electrolyte flow penetration into the porous carbon electrodes. We also estimate the maximum current densities associated with the stoichiometric availability of the electrolyte reactant flow penetrating through the porous carbon electrodes. The results predict the observed experimental data reasonably well without using any adjustable parameters. This fundamental work on electrolyte flow distributions and limiting reactant availability will contribute to a better understanding of the limits on electrochemical performance in flow batteries with serpentine flow field designs and should be helpful for optimizing flow batteries.
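
    For orientation, the stoichiometric ceiling on current set by reactant delivery follows from Faraday's law, I_lim = n·F·c·Q_pen, where Q_pen is the electrolyte flow actually penetrating the electrode. The numbers below are hypothetical and only illustrate the order-of-magnitude arithmetic; they are not taken from the paper.

```python
F = 96485.0      # Faraday constant, C/mol
n = 1            # electrons transferred per active ion (illustrative)
c = 1500.0       # active-species concentration, mol/m^3 (1.5 M, illustrative)
q_pen = 2.0e-8   # electrolyte flow penetrating the electrode, m^3/s (illustrative)
area = 25.0e-4   # projected electrode area, m^2 (25 cm^2, illustrative)

i_lim = n * F * c * q_pen            # maximum stoichiometric current, A (about 2.9 A here)
print(i_lim, i_lim / area / 10.0)    # current, and current density in mA/cm^2 (about 116)
```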

  17. Spatial distribution of impacts to channel bed mobility due to flow regulation, Kootenai River, USA

    Science.gov (United States)

    Michael Burke; Klaus Jorde; John M. Buffington; Jeffrey H. Braatne; Rohan Benjakar

    2006-01-01

    The regulated hydrograph of the Kootenai River between Libby Dam and Kootenay Lake has altered the natural flow regime, resulting in a significant decrease in maximum flows (60% net reduction in median 1-day annual maximum, and 77%-84% net reductions in median monthly flows for the historic peak flow months of May and June, respectively). Other key hydrologic...

  18. A Maximum Power Transfer Tracking Method for WPT Systems with Coupling Coefficient Identification Considering Two-Value Problem

    Directory of Open Access Journals (Sweden)

    Xin Dai

    2017-10-01

    Maximum power transfer tracking (MPTT) is meant to track the maximum power point during the operation of wireless power transfer (WPT) systems. Traditionally, MPTT is achieved by impedance matching at the secondary side when the load resistance varies. However, because of the loose coupling characteristic, variation of the coupling coefficient will also affect the performance of impedance matching, and MPTT will fail accordingly. This paper presents an identification method for the coupling coefficient for MPTT in WPT systems. In particular, the two-value issue arising during the identification is considered. The identification approach is easy to implement because it does not require any additional circuitry. Furthermore, MPTT is easy to realize because only two easily measured DC parameters are needed. The detailed identification procedure corresponding to the two-value issue and the maximum power transfer tracking process are presented, and both simulation analysis and experimental results verify the identification method and MPTT.

  19. Statistical Mechanics of the Geometric Control of Flow Topology in Two-Dimensional Turbulence

    Science.gov (United States)

    Nadiga, Balasubramanya; Loxley, Peter

    2013-04-01

    We apply the principle of maximum entropy to two-dimensional turbulence in a new fashion to predict the effect of geometry on flow topology. We consider two prototypical regimes of turbulence that lead to frequently observed self-organized coherent structures. Our theory predicts bistable behavior that exhibits hysteresis and large abrupt changes in flow topology in one regime; the other regime is predicted to exhibit monostable behavior with a continuous change of flow topology. The predictions are confirmed in fully nonlinear numerical simulations of the two-dimensional Navier-Stokes equation. These results suggest an explanation of the low-frequency regime transitions that have been observed in the non-equilibrium setting of this problem. Following further development in the non-equilibrium context, we expect that insights developed in this problem should be useful for a better understanding of the phenomenon of low-frequency regime transitions, a pervasive feature of the weather and climate systems. Familiar occurrences of this phenomenon, wherein extreme and abrupt qualitative changes occur, seemingly randomly, after very long periods of apparent stability, include blocking in the extra-tropical winter atmosphere, the bimodality of the Kuroshio extension system, the Dansgaard-Oeschger events, and the glacial-interglacial transitions.

  20. Advancements in flow-induced vibration research and design criteria

    International Nuclear Information System (INIS)

    Pettigrew, M.J.

    2009-01-01

    Two-phase flow exists in many nuclear components and, in particular, steam generators. So far relatively little research work has been done on two-phase flow-induced vibration probably because it is difficult to do. Two-phase flows are not homogeneous and are governed by an additional parameter called void fraction. This can lead to different flow patterns or regimes that can change completely the vibration behaviour. Fluidelastic instability, random turbulence excitation and detailed flow characteristics are being investigated in tube bundles subjected to two-phase cross flow. Fluidelastic instability of a tube bundle preferentially flexible in the flow direction was observed probably for the first time. This is particularly relevant to the problem of in-plane vibration of nuclear steam generator U-tubes and has resulted in changes in our design criteria. Unexpected quasi-periodic excitation forces were also measured in the tube bundle. These are attributed to an alternating wake in the lift direction and to fluctuating momentum flux in the drag direction. Vibration damping due to two-phase flow is very dependent on void fraction and appears directly related to the interface surface area between phases. Maximum damping values correspond to the transitions between flow regimes. Fibre optic probes were developed to measure the characteristics of two-phase flows. These probes are used to take detailed measurements in a triangular array of tubes in cross flow. The results show that the flow tends to stream between the tubes. These studies have yielded interesting results but have raised more questions that could lead to improved design criteria. The more puzzling results will be discussed in this presentation. Some of the dynamic phenomena will be illustrated by animation. (author)

  1. 3D-CFD Simulation of Confined Cross-Flow Injection Process Using Single Piston Pump

    Directory of Open Access Journals (Sweden)

    M. Elashmawy

    2017-12-01

    The injection process into a confined cross flow is quite important for many applications, including chemical engineering and water desalination technology. The aim of this study is to investigate the performance of the injection process into a confined cross-flow in a round pipe using a single-piston injection pump. A computational fluid dynamics (CFD) analysis has been carried out to investigate the effect of the locations of the maximum velocity and minimum pressure on the confined cross-flow process. The jet trajectory is analyzed and related to the injection pump shaft angle of rotation during the injection duty cycle, focusing on the maximum instantaneous injection flow of the piston action. Results indicate that the jet trajectory has little effect within the range corresponding to the injection pump's operating conditions. A constant cross-flow was used, and the injection flow was altered to vary the jet-to-line flow ratio (QR). The maximum jet trajectory exhibits low penetration into the cross-flow. The results showed three flow-ratio effect zones with different behaviors. Results also showed that getting closer to the injection port causes a significant decrease in the locations of the maximum velocity and minimum pressure.

  2. The mean Evershed flow

    Science.gov (United States)

    Hu, W.-R.

    1984-09-01

    The paper gives a theoretical analysis of the overall characteristics of the Evershed flow (one of the main features of sunspots), with particular attention to its outward flow from the umbra in the photosphere, which reaches a maximum somewhere in the penumbra and decreases rapidly farther out, and to its inward flow of comparable magnitude in the chromosphere. Because the inertial force of the flow is small, the relevant dynamic process can be divided into a base state and a perturbation. The base-state solution yields the equilibrium relations between the pressure gradient, the Lorentz force, and gravity, as well as the flow law. The perturbation describes the force driving the Evershed flow. Since the pressure gradient in the base state is already in equilibrium with the Lorentz force and gravity, the driving force of the mean Evershed flow is small.

  3. Numerical Solution to Transient Heat Flow Problems

    Science.gov (United States)

    Kobiske, Ronald A.; Hock, Jeffrey L.

    1973-01-01

    Discusses the reduction of the one- and three-dimensional diffusion equation to the difference equation and its stability, convergence, and heat-flow applications under different boundary conditions. Indicates the usefulness of this presentation for beginning students of physics and engineering as well as college teachers. (CC)
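
    As a concrete example of reducing the diffusion equation to a difference equation, the sketch below marches the one-dimensional heat equation u_t = α u_xx with the explicit FTCS scheme, which is stable only when r = α·Δt/Δx² ≤ 1/2. The grid, diffusivity, and boundary temperatures are illustrative choices, not values from the article.

```python
import numpy as np

alpha, dx = 1.0e-4, 0.01      # diffusivity (m^2/s) and grid spacing (m), illustrative
dt = 0.4 * dx**2 / alpha      # keeps r = alpha*dt/dx^2 = 0.4 below the 0.5 stability limit
r = alpha * dt / dx**2

u = np.zeros(101)             # rod initially at 0, left end held at 100
u[0] = 100.0
for _ in range(500):          # explicit FTCS time marching
    u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0], u[-1] = 100.0, 0.0  # re-impose fixed-temperature boundary conditions
```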

  4. Demonstration of robust micromachined jet technology and its application to realistic flow control problems

    International Nuclear Information System (INIS)

    Chang, Sung Pil

    2006-01-01

    This paper describes the successful fabrication and initial characterization of micromachined pressure sensors and micromachined jets (microjets) fabricated for use in macro flow control and other applications. In this work, microfabrication technology was investigated to create a micromachined fluidic control system with the goal of application to practical fluids problems, such as UAV (Unmanned Aerial Vehicle)-scale aerodynamic control. The approaches of this work include: (1) the development of suitable micromachined synthetic jets (microjets) as actuators, which obviate the need to physically extend micromachined structures into an external flow; and (2) a non-silicon alternative micromachining fabrication technology based on metallic substrates and lamination (in addition to traditional MEMS technologies), which allows the realization of larger-scale, more robust structures and larger active array areas for fluidic systems. As an initial study, an array of MEMS pressure sensors and an array of MEMS modulators for orifice-based control of microjets have been fabricated and characterized. Both the pressure sensors and the modulators have been built using stainless steel as a substrate and a combination of lamination and traditional micromachining processes as fabrication technologies.

  5. Demonstration of robust micromachined jet technology and its application to realistic flow control problems

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Sung Pil [Inha University, Incheon (Korea, Republic of)

    2006-04-15

    This paper describes the successful fabrication and initial characterization of micromachined pressure sensors and micromachined jets (microjets) fabricated for use in macro flow control and other applications. In this work, microfabrication technology was investigated to create a micromachined fluidic control system with the goal of application to practical fluids problems, such as UAV (Unmanned Aerial Vehicle)-scale aerodynamic control. The approaches of this work include: (1) the development of suitable micromachined synthetic jets (microjets) as actuators, which obviate the need to physically extend micromachined structures into an external flow; and (2) a non-silicon alternative micromachining fabrication technology based on metallic substrates and lamination (in addition to traditional MEMS technologies), which allows the realization of larger-scale, more robust structures and larger active array areas for fluidic systems. As an initial study, an array of MEMS pressure sensors and an array of MEMS modulators for orifice-based control of microjets have been fabricated and characterized. Both the pressure sensors and the modulators have been built using stainless steel as a substrate and a combination of lamination and traditional micromachining processes as fabrication technologies.

  6. Determining the number of fingers in the lifting Hele-Shaw problem

    Science.gov (United States)

    Miranda, Jose; Dias, Eduardo

    2013-11-01

    The lifting Hele-Shaw cell flow is a variation of the celebrated radial viscous fingering problem for which the upper cell plate is lifted uniformly at a specified rate. This procedure causes the formation of intricate interfacial patterns. Most theoretical studies determine the total number of emerging fingers by maximizing the linear growth rate, but this generates discrepancies between theory and experiments. In this work, we tackle the number of fingers selection problem in the lifting Hele-Shaw cell by employing the recently proposed maximum-amplitude criterion. Our linear stability analysis accounts for the action of capillary, viscous normal stresses, and wetting effects, as well as the cell confinement. The comparison of our results with very precise laboratory measurements for the total number of fingers shows a significantly improved agreement between theoretical predictions and experimental data. We thank CNPq (Brazilian Sponsor) for financial support.

  7. Modeling on bubbly to churn flow pattern transition for vertical upward flows in narrow rectangular channel

    International Nuclear Information System (INIS)

    Wang Yanlin; Chen Bingde; Huang Yanping; Wang Junfeng

    2011-01-01

    A theoretical model was developed to predict the bubbly to churn flow pattern transition for vertical upward flows in a narrow rectangular channel. The model was developed based on the Helmholtz instability criterion and some reasonable assumptions. The maximum ideal bubble size in the narrow rectangular channel and the thermal-hydraulic boundary condition leading to the bubbly-to-churn flow pattern transition were calculated. The model was validated against experimental data from previous studies. Comparison between the predicted and experimental results shows reasonably good agreement. (author)

  8. Heat flow at the Platanares, Honduras, geothermal site

    Science.gov (United States)

    Meert, Joseph G.; Smith, Douglas L.

    1991-03-01

    Three boreholes, PLTG-1, PLTG-2 and PLTG-3, were drilled in the Platanares, Honduras geothermal system to evaluate the geothermal energy potential of the site. The maximum reservoir temperature was previously estimated at 225-240°C using various types of chemical and isotopic geothermometry. Geothermal gradients of 139-239°C/km, calculated from two segments of the temperature-depth profile for borehole PLTG-2, were used to project a minimum depth to the geothermal reservoir of 1.2-1.7 km. Borehole PLTG-1 exhibited an erratic temperature distribution attributed to fluid movement through a series of isolated horizontal and subhorizontal fractures. The maximum measured temperature in borehole PLTG-1 was 150.4°C, and in PLTG-2 the maximum measured temperature was 104.3°C. PLTG-3 was drilled after this study, and its maximum recorded temperature of 165°C is similar to the temperature encountered in PLTG-1. Heat flow values of 392 mW m⁻² and 266 mW m⁻² represent the first directly measured heat flow values for Honduras and northern Central America. Radioactive heat generation, based on gamma-ray analyses of uranium, thorium and potassium in five core samples, is less than 2.0 μW m⁻³ and does not appear to be a major source of the high heat flow. Several authors have proposed a variety of extensional tectonic environments for western Honduras, and these heat flow values, along with published estimates of heat flow, are supportive of this type of tectonic regime.
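
    As a quick sanity check on such numbers, conductive heat flow follows Fourier's law, q = k·(dT/dz). The thermal conductivity below is a generic assumed value for illustration, not one measured on the core samples, so the resulting figures differ from the reported 392 and 266 mW m⁻².

```python
k = 2.0                        # assumed thermal conductivity, W/(m*K) -- illustrative only
gradients = [139e-3, 239e-3]   # measured gradients, converted from deg C/km to K/m
for g in gradients:
    q = k * g                  # conductive heat flow, W/m^2
    print(round(q * 1000))     # -> 278 and 478 mW/m^2 for this assumed conductivity
```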

  9. Determination of maximum reactor power level consistent with the requirement that flow reversal occurs without fuel damage

    International Nuclear Information System (INIS)

    Rao, D.V.; Darby, J.L.; Ross, S.B.; Clark, R.A.

    1990-01-01

    The High Flux Beam Reactor (HFBR) operated by Brookhaven National Laboratory (BNL) employs forced downflow for heat removal during normal operation. In the event of a total loss of forced flow, the reactor will shut down and the flow reversal valves open. When the downward core flow becomes sufficiently small, the opposing thermal buoyancy induces flow reversal, leading to decay heat removal by natural convection. There is some uncertainty as to whether the natural circulation is adequate for decay heat removal after 60 MW operation. BNL staff carried out a series of calculations to establish the adequacy of flow reversal to remove decay heat. Their calculations are based on a natural-convective CHF model. The primary purpose of the present calculations is to review the accuracy and applicability of Fauske's CHF model for the HFBR, and the assumptions and methodology employed by BNL staff to determine the heat removal limit in the HFBR during a flow reversal and natural convection situation.

  10. Simulation of the WWER-440/213 maximum credible accident at the EhNITs stand

    International Nuclear Information System (INIS)

    Blinkov, V.N.; Melikhov, O.I.; Melikhov, V.I.; Davydov, M.V.; Sokolin, A.V.; Shchepetil'nikov, Eh.Yu.

    2000-01-01

    Calculations of thermohydraulic processes with the ATHLET code are presented for determining the optimal conditions for modeling, at the EhNITs test facility, the coolant leakage during the maximum credible accident at an NPP with a WWER-440/213 reactor. The parametric calculations determine the nozzle diameters at the facility for which the local criterion of agreement with the NPP data (the maximum flow) and the integral criterion of agreement (the mass and energy of the coolant discharged during 10 s) are satisfied.

  11. Bootstrap-based Support of HGT Inferred by Maximum Parsimony

    Directory of Open Access Journals (Sweden)

    Nakhleh Luay

    2010-05-01

    Background: Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. Results: In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. Conclusions: We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.

  12. Bootstrap-based support of HGT inferred by maximum parsimony.

    Science.gov (United States)

    Park, Hyun Jung; Jin, Guohua; Nakhleh, Luay

    2010-05-05

    Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
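
    The bootstrap procedure itself is simple to sketch: resample alignment columns with replacement, re-run the inference on each pseudo-alignment, and report how often each reticulation event reappears. In the sketch below, infer_reticulations is a placeholder for the much more involved maximum-parsimony network inference implemented in NEPAL.

```python
import random
from collections import Counter

def bootstrap_support(alignment, infer_reticulations, n_samples=100, seed=0):
    """Nonparametric bootstrap support for inferred reticulation events.

    `alignment` is a list of equal-length sequences; `infer_reticulations` is a
    placeholder callable returning a set of hashable event descriptors.
    """
    rng = random.Random(seed)
    n_cols = len(alignment[0])
    counts = Counter()
    for _ in range(n_samples):
        cols = [rng.randrange(n_cols) for _ in range(n_cols)]   # resample columns
        sample = ["".join(seq[c] for c in cols) for seq in alignment]
        counts.update(infer_reticulations(sample))
    return {event: counts[event] / n_samples for event in counts}
```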

  13. Variation of Probable Maximum Precipitation in Brazos River Basin, TX

    Science.gov (United States)

    Bhatia, N.; Singh, V. P.

    2017-12-01

    The Brazos River basin, the second-largest river basin by area in Texas, generates the highest annual flow volume of any river in Texas. With its headwaters located at the confluence of the Double Mountain and Salt forks in Stonewall County, the third-longest flowline of the Brazos River traverses narrow valleys in the rolling topography of west Texas and flows through rugged terrain in the mainly featureless plains of central Texas, before it reaches the Gulf of Mexico. Along its major flow network, the river basin covers six different climate regions characterized by the National Oceanic and Atmospheric Administration (NOAA) on the basis of similar attributes of vegetation, temperature, humidity, rainfall, and seasonal weather changes. Our previous research on Texas climatology illustrated intensified precipitation regimes, which tend to result in extreme flood events. Such events have caused huge losses of lives and infrastructure in the Brazos River basin. Therefore, a region-specific investigation is required for analyzing precipitation regimes along the geographically diverse river network. Owing to the topographical and hydroclimatological variations along the flow network, 24-hour Probable Maximum Precipitation (PMP) was estimated for different hydrologic units along the river network, using the revised Hershfield method devised by Lan et al. (2017). The method incorporates the use of a standardized variable describing the maximum deviation from the average of a sample, scaled by the standard deviation of the sample. The hydrometeorological literature identifies this method as more reasonable and consistent with the frequency equation. With respect to the calculation of the stable data size required for statistically reliable results, this study also quantified the uncertainty associated with PMP values in different hydrologic units. The corresponding range of return periods of PMPs in different hydrologic units was
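
    The Hershfield-type estimate underlying this analysis can be written as PMP = x̄ + K_m·s, where the frequency factor K_m is the largest observation's deviation from the mean of the remaining series, scaled by their standard deviation. The code below is a generic illustration with invented annual-maximum rainfall data, not the revised procedure of Lan et al. (2017).

```python
import numpy as np

def hershfield_pmp(annual_max):
    """Classical Hershfield-type PMP estimate from a series of annual maxima (mm)."""
    x = np.asarray(annual_max, dtype=float)
    rest = np.delete(x, x.argmax())                    # series with the largest value removed
    k_m = (x.max() - rest.mean()) / rest.std(ddof=1)   # frequency factor
    return x.mean() + k_m * x.std(ddof=1)              # PMP = mean + K_m * std

print(hershfield_pmp([112, 96, 140, 88, 230, 105, 99, 120, 133, 101]))  # mm, illustrative
```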

  14. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed, and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite-state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
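
    The conditional-ML route via iterative least squares with projection can be sketched generically: alternate a least-squares channel estimate with a projection of the detected symbols onto the finite alphabet. The flat-fading model below (Y = H·S + noise with BPSK symbols) is a simplified stand-in for the space-time coded setting of the paper, and blind estimation of this kind is only identifiable up to per-stream sign and permutation ambiguities.

```python
import numpy as np

def ilsp(Y, n_tx, n_iter=20, seed=0):
    """Iterative least squares with projection for Y = H @ S + noise, BPSK symbols."""
    rng = np.random.default_rng(seed)
    S = rng.choice([-1.0, 1.0], size=(n_tx, Y.shape[1]))   # random symbol initialization
    for _ in range(n_iter):
        H = Y @ np.linalg.pinv(S)            # least-squares channel estimate for fixed S
        S = np.sign(np.linalg.pinv(H) @ Y)   # re-detect symbols, project onto {-1, +1}
        S[S == 0] = 1.0                      # guard against exact zeros from sign()
    return H, S

# Hypothetical usage: 4 receive antennas, 2 transmit streams, 50 symbols.
H_true = np.random.randn(4, 2)
S_true = np.sign(np.random.randn(2, 50))
Y = H_true @ S_true + 0.05 * np.random.randn(4, 50)
H_est, S_est = ilsp(Y, n_tx=2)
```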

  15. Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon

    DEFF Research Database (Denmark)

    Fischer, Paul

    1997-01-01

    This paper investigates the problem where one is given a finite set of n points in the plane, each of which is labeled either 'positive' or 'negative'. We consider bounded convex polygons, the vertices of which are positive points and which do not contain any negative point. It is shown how such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time becomes O(M n³ log n). It is also shown how to find a maximum convex polygon which contains a given point in time O(n³ log n). Two parallel algorithms for the basic problem are also presented. The first one runs in time O(n log n) using O(n²) processors; the second one has polylogarithmic time but needs O...

  16. Unsteady separated stagnation-point flow and heat transfer of a viscous fluid over a moving flat surface

    Science.gov (United States)

    Dholey, S.

    2018-04-01

    In this paper, we have investigated numerically the laminar unsteady separated stagnation-point flow and heat transfer of a viscous fluid over a moving flat surface in the presence of a time-dependent free stream velocity, which causes the unsteadiness of this flow problem. The plate is assumed to move in the same or the opposite direction of the free stream velocity. The flow is therefore governed by the velocity ratio parameter λ (the ratio of the plate velocity to the free stream velocity) and the unsteadiness parameter β. When the plate surface moves in the same direction as the free stream velocity (i.e., when λ > 0), the solution of this flow problem continues for any given value of β. On the other hand, when they move in opposite directions (i.e., when λ < 0) ... A notable result of the heat transfer analysis is that for a given value of λ (= 0), the heat transfer rate first increases with increasing Prandtl number Pr and, after attaining a maximum value, decreases and finally tends to zero for large values of Pr, depending upon the value of β > 0. On the contrary, for a given value of β (≤ 0), the rate of heat transfer increases consistently with increasing Pr.

  17. Recent Results from Analysis of Flow Structures and Energy Modes Induced by Viscous Wave around a Surface-Piercing Cylinder

    Directory of Open Access Journals (Sweden)

    Giancarlo Alfonsi

    2017-01-01

    Due to its relevance to ocean engineering, the flow field generated by water waves around a vertical circular cylinder piercing the free surface has recently started to be considered by several research groups. In particular, we studied this problem starting from the velocity-potential framework, then the implementation of the numerical solution of the Euler equations in their velocity-pressure formulation, and finally the integration of the Navier-Stokes equations in primitive variables. We also developed and applied methods for extracting the coherent structures and most energetic modes of the flow. In this work, we present some new results of our research directed, in particular, toward clarifying the main nonintuitive character of the interaction between a wave and a surface-piercing cylinder, namely, the fact that the wave exerts its maximum force and exhibits its maximum run-up on the cylindrical obstacle at different instants. Understanding this phenomenon becomes crucially important from the perspective of controlling the magnitude of the wave run-up on the obstacle by means of wave-flow-control techniques.

  18. Mathematical models for two-phase stratified pipe flow

    Energy Technology Data Exchange (ETDEWEB)

    Biberg, Dag

    2005-06-01

    The simultaneous transport of oil, gas and water in a single multiphase flow pipe line has, for economical and practical reasons, become common practice in the gas and oil fields operated by the oil industry. The optimal design and safe operation of these pipe lines require reliable estimates of liquid inventory, pressure drop and flow regime. Computer simulations of multiphase pipe flow have thus become an important design tool for field developments. Computer simulations yielding on-line monitoring and look-ahead predictions are invaluable in day-to-day field management. Inaccurate predictions may have large consequences. The accuracy and reliability of multiphase pipe flow models are thus important issues. Simulating events in large pipelines or pipeline systems is relatively computer intensive. Pipelines carrying e.g. gas and liquefied gas (condensate) may cover distances of several hundred km in which transient phenomena may go on for months. The evaluation times associated with contemporary 3-D CFD models are thus not compatible with field applications. Multiphase flow lines are therefore normally simulated using specially dedicated 1-D models. The closure relations of multiphase pipe flow models are mainly based on lab data. The maximum pipe inner diameter, pressure and temperature in a multiphase pipe flow lab are limited to approximately 0.3 m, 90 bar and 60°C, respectively. The corresponding field values are, however, much higher, i.e. 1 m, 1000 bar and 200°C, respectively. Lab data thus do not cover the actual field conditions. Field predictions are consequently frequently based on model extrapolation. Applying field data or establishing more advanced labs will not solve this problem. It is in fact not practically possible to acquire sufficient data to cover all aspects of multiphase pipe flow. The parameter range involved is simply too large. Liquid levels and pressure drop in three-phase flow are e.g. determined by 13 dimensionless parameters.

  19. Maximum Safety Regenerative Power Tracking for DC Traction Power Systems

    Directory of Open Access Journals (Sweden)

    Guifu Du

    2017-02-01

    Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential known as “rail potential” is generated between the rails and ground. Currently, abnormal rises of rail potential occur in many railway lines during the operation of railway systems. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of the regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of the maximum absolute rail potential and the energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized based on an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and the energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can effectively reduce the maximum absolute rail potential and energy consumption while guaranteeing safe, energy-saving operation of DC traction power systems.

  20. Matrix interdiction problem

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Feng [Los Alamos National Laboratory; Kasiviswanathan, Shiva [Los Alamos National Laboratory

    2010-01-01

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove k columns such that the sum over all rows of the maximum entry in each row is minimized. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to prioritize border checkpoints in order to minimize the probability that an adversary can successfully cross the border. After introducing the matrix interdiction problem, we prove that the problem is NP-hard, and even NP-hard to approximate within an additive factor of n^γ for a fixed constant γ. We also present an algorithm for this problem that achieves an (n-k) multiplicative approximation ratio.
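
    For small instances the objective is easy to state in code: try every set of k columns to remove and keep the one that minimizes the sum of row maxima. The exhaustive sketch below only makes the objective concrete; it is exponential in k, unlike the approximation algorithm discussed in the abstract, and the matrix is invented.

```python
from itertools import combinations

def matrix_interdiction_bruteforce(matrix, k):
    """Return the k column indices whose removal minimizes the sum of row maxima."""
    n_cols = len(matrix[0])
    best_cols, best_val = None, float("inf")
    for removed in combinations(range(n_cols), k):
        keep = [j for j in range(n_cols) if j not in removed]
        val = sum(max(row[j] for j in keep) for row in matrix)
        if val < best_val:
            best_cols, best_val = removed, val
    return best_cols, best_val

M = [[3, 9, 1], [4, 2, 8], [7, 5, 6]]
print(matrix_interdiction_bruteforce(M, k=1))  # -> ((1,), 18): drop column 1
```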

  1. Mothers' Maximum Drinks Ever Consumed in 24 Hours Predicts Mental Health Problems in Adolescent Offspring

    Science.gov (United States)

    Malone, Stephen M.; McGue, Matt; Iacono, William G.

    2010-01-01

    Background: The maximum number of alcoholic drinks consumed in a single 24-hr period is an alcoholism-related phenotype with both face and empirical validity. It has been associated with severity of withdrawal symptoms and sensitivity to alcohol, genes implicated in alcohol metabolism, and amplitude of a measure of brain activity associated with…

  2. Robust and Optimal Control of Magnetic Microparticles inside Fluidic Channels with Time-Varying Flow Rates

    Directory of Open Access Journals (Sweden)

    Islam S.M. Khalil

    2016-06-01

    Targeted therapy using magnetic microparticles and nanoparticles has the potential to mitigate the negative side-effects associated with conventional medical treatment. Major technological challenges still need to be addressed in order to translate these particles into in vivo applications. For example, magnetic particles need to be navigated controllably in vessels against flowing streams of body fluid. This paper describes the motion control of paramagnetic microparticles in the flowing streams of fluidic channels with time-varying flow rates (maximum flow is 35 ml/hr). This control is designed using a magnetic-based proportional-derivative (PD) control system to compensate for the time-varying flow inside the channels (with width and depth of 2 mm and 1.5 mm, respectively). First, we achieve point-to-point motion control against and along flow rates of 4 ml/hr, 6 ml/hr, 17 ml/hr, and 35 ml/hr. The average speeds of a single microparticle (with average diameter of 100 μm) against flow rates of 6 ml/hr and 30 ml/hr are calculated to be 45 μm/s and 15 μm/s, respectively. Second, we implement PD control with disturbance estimation and compensation. This control decreases the steady-state error by 50%, 70%, 73%, and 78% at flow rates of 4 ml/hr, 6 ml/hr, 17 ml/hr, and 35 ml/hr, respectively. Finally, we consider the problem of finding the optimal path (minimal kinetic energy) between two points using the calculus of variations, against the mentioned flow rates. Not only do we find that an optimal path between two collinear points along the direction of maximum flow (the middle of the fluidic channel) decreases the rise time of the microparticles, but we also decrease the input current supplied to the electromagnetic coils by minimizing the kinetic energy of the microparticles, compared to PD control with disturbance compensation.

  3. Studies in boiling heat transfer in two phase flow through tube arrays: nucleate boiling heat transfer coefficient and maximum heat flux as a function of velocity and quality of Freon-113

    International Nuclear Information System (INIS)

    Rahmani, R.

    1983-01-01

    The nucleate boiling heat-transfer coefficient and the maximum heat flux were studied experimentally as functions of velocity, quality and heater diameter for single-phase flow and two-phase flow of Freon-113 (trichlorotrifluoroethane). Results show: (1) peak heat flux: over 300 measured peak-heat-flux data points from two 0.875-in. and four 0.625-in.-diameter heaters indicated that (a) for pool boiling, single-phase and two-phase forced convection boiling, the only parameter (among hysteresis, rate of power increase, aging, and the presence and proximity of unheated rods) that has a statistically significant effect on the peak heat flux is the velocity; and (b) in the velocity range (0 ... the 0° position, i.e., the point of impact of the incident fluid) and the top (180° position) of the test element, respectively

  4. Flow in data racks

    Directory of Open Access Journals (Sweden)

    Manoch Lukáš

    2014-03-01

    This paper deals with the flow in data racks. The aim of this work is to find a new arrangement of elements regulating the flow in the data rack so that the aerodynamic losses and the recirculation zones are minimized. The main reason for solving this problem is to reduce the cost of data rack cooling. Another problem to be solved is reverse flow through the servers, which are then not cooled; it occurs due to the underpressure in the recirculation zones. In order to solve the problem, experimental and numerical models of a 27U data rack fitted with 10 server models with a total input of 10 kW were created. Different configurations of the layout of elements affecting the flow in the inlet area of the data rack were compared. Based on the results achieved, design improvements to the existing solutions were adopted and verified by numerical simulations.

  5. An integrated approach to combating flow assurance problems

    Energy Technology Data Exchange (ETDEWEB)

    Abney, Laurence; Browne, Alan [Halliburton, Houston, TX (United States)

    2005-07-01

    Any upset to the internal pipe surface of a pipeline can significantly impact both pipeline throughput and the energy requirements for maintaining design flow rates. Inefficient flow through pipelines can have a significant negative impact on operating expense (Opex) and on the energy required to maintain pipeline throughput. Effective flow maintenance helps ensure that Opex remains within budget, that processing equipment life is extended and that excessive use of energy is minimized. A number of events can result in debris generation and deposition in a pipeline. Corrosion, hydrate formation, paraffin deposition, asphaltene deposition, development of 'black powder' and scale formation are the most common sources of pipeline debris. Generally, a combination of pigging and chemical treatments is used to remove debris; these two techniques are commonly used in isolation. Incorporation of specialized fluids with enhanced solid-transport capabilities, specialized dispersants, or specialized surfactants can improve the success of routine pigging operations. An array of alternative and often complementary remediation technologies can be used to effect the removal of deposits or even full restrictions from pipelines. These include the application of acids, specialized chemical products, and intrusive intervention techniques. This paper presents a review of methods of integrating existing technologies. (author)

  6. Method and software to solution of inverse and inverse design fluid flow and heat transfer problems is compatible with CFD-software

    Energy Technology Data Exchange (ETDEWEB)

    Krukovsky, P G [Institute of Engineering Thermophysics, National Academy of Sciences of Ukraine, Kiev (Ukraine)

    1998-12-31

    A description is given of the method and software FRIEND, which make it possible to solve inverse and inverse-design problems on the basis of existing (base) CFD software for the solution of direct problems (in particular, heat-transfer and fluid-flow problems using the PHOENICS software). FRIEND is an independent additional module that widens the operational capabilities of the base software unified with this module. This unification does not require any change or addition to the base software. Interfacing of FRIEND and the base software takes place through the input and output files of the base software. A brief description of the computational technique applied for the inverse problem solution, some detailed information on the interfacing of FRIEND and the CFD software, and solution results for test inverse and inverse-design problems, obtained using the tandem of the CFD software PHOENICS and FRIEND, are presented. (author) 9 refs.

  7. Method and software to solution of inverse and inverse design fluid flow and heat transfer problems is compatible with CFD-software

    Energy Technology Data Exchange (ETDEWEB)

    Krukovsky, P.G. [Institute of Engineering Thermophysics, National Academy of Sciences of Ukraine, Kiev (Ukraine)

    1997-12-31

    A description is given of the method and software FRIEND, which make it possible to solve inverse and inverse-design problems on the basis of existing (base) CFD software for the solution of direct problems (in particular, heat-transfer and fluid-flow problems using the PHOENICS software). FRIEND is an independent additional module that widens the operational capabilities of the base software unified with this module. This unification does not require any change or addition to the base software. Interfacing of FRIEND and the base software takes place through the input and output files of the base software. A brief description of the computational technique applied for the inverse problem solution, some detailed information on the interfacing of FRIEND and the CFD software, and solution results for test inverse and inverse-design problems, obtained using the tandem of the CFD software PHOENICS and FRIEND, are presented. (author) 9 refs.

  8. Numerical simulation of real-world flows

    Energy Technology Data Exchange (ETDEWEB)

    Hayase, Toshiyuki, E-mail: hayase@ifs.tohoku.ac.jp [Institute of Fluid Science, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai, 980-8577 (Japan)

    2015-10-15

    Obtaining real flow information is important in various fields, but is a difficult issue because measurement data are usually limited in time and space, and computational results usually do not represent the exact state of real flows. Problems inherent in the realization of numerical simulation of real-world flows include the difficulty in representing exact initial and boundary conditions and the difficulty in representing unstable flow characteristics. This article reviews studies dealing with these problems. First, an overview of basic flow measurement methodologies and measurement data interpolation/approximation techniques is presented. Then, studies on methods of integrating numerical simulation and measurement, namely, four-dimensional variational data assimilation (4D-Var), Kalman filters (KFs), state observers, etc are discussed. The first problem is properly solved by these integration methodologies. The second problem can be partially solved with 4D-Var in which only initial and boundary conditions are control parameters. If an appropriate control parameter capable of modifying the dynamical structure of the model is included in the formulation of 4D-Var, unstable modes are properly suppressed and the second problem is solved. The state observer and KFs also solve the second problem by modifying mathematical models to stabilize the unstable modes of the original dynamical system by applying feedback signals. These integration methodologies are now applied in simulation of real-world flows in a wide variety of research fields. Examples are presented for basic fluid dynamics and applications in meteorology, aerospace, medicine, etc. (topical review)
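
    As a minimal illustration of the measurement-integration idea, the sketch below runs a scalar Kalman filter that blends an imperfect model prediction with noisy observations; it is a toy stand-in (the review covers 4D-Var, state observers, and far richer KF formulations), and the dynamics, noise levels, and data are invented.

```python
import numpy as np

def kalman_1d(z_obs, a=0.95, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the model x_k = a*x_{k-1} + w, observation z_k = x_k + v."""
    x, p, estimates = x0, p0, []
    for z in z_obs:
        x, p = a * x, a * a * p + q           # predict with the (imperfect) model
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # correct with the measurement
        estimates.append(x)
    return np.array(estimates)

# Hypothetical noisy measurements of a decaying signal.
truth = 2.0 * 0.95 ** np.arange(50)
z = truth + 0.5 * np.random.randn(50)
x_hat = kalman_1d(z)
```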

  9. Maximum parsimony, substitution model, and probability phylogenetic trees.

    Science.gov (United States)

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method, the optimization criterion is the number of substitutions of nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting all the unobservable substitutions that really occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established; they are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
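
    The substitution count that classical MP optimizes can be illustrated with Fitch's small-parsimony algorithm on a fixed tree: pass up the tree, intersect the children's state sets when possible, otherwise take their union and add one substitution. The tiny tree and character below are invented for illustration; this is the classical count, not the probability-tree extension proposed by the authors.

```python
def fitch_score(tree, leaf_states):
    """Fitch parsimony count for one character on a rooted binary tree.

    `tree` is a nested tuple of leaf names, e.g. (("A", "B"), ("C", "D"));
    `leaf_states` maps each leaf name to its nucleotide.
    """
    def visit(node):
        if isinstance(node, str):                    # leaf node
            return {leaf_states[node]}, 0
        (lset, lcost), (rset, rcost) = visit(node[0]), visit(node[1])
        common = lset & rset
        if common:                                   # no substitution needed at this node
            return common, lcost + rcost
        return lset | rset, lcost + rcost + 1        # union costs one substitution

    return visit(tree)[1]

# One character on a four-taxon tree: the minimum is 2 substitutions.
print(fitch_score((("A", "B"), ("C", "D")), {"A": "G", "B": "T", "C": "G", "D": "A"}))
```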

  10. Filtering Undesirable Flows in Networks

    NARCIS (Netherlands)

    Polevoy, G.; Trajanovski, S.; Grosso, P.; de Laat, C.; Gao, X.; Du, H.; Han, M.

    2017-01-01

    We study the problem of fully mitigating the effects of denial of service by filtering the minimum necessary set of the undesirable flows. First, we model this problem and then we concentrate on a subproblem where every good flow has a bottleneck. We prove that unless P=NP, this subproblem is

  11. Secretary Problems: Weights and Discounts

    OpenAIRE

    Babaioff, M.; Dinitz, M.; Gupta, A.; Immorlica, Nicole Simone; Talwar, K.

    2009-01-01

    The classical secretary problem studies the problem of selecting online an element (a “secretary”) with maximum value in a randomly ordered sequence. The difficulty lies in the fact that an element must be either selected or discarded upon its arrival, and this decision is irrevocable. Constant-competitive algorithms are known for the classical secretary problems and several variants. We study the following two extensions of the secretary problem: In the discounted secretary probl...
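
    For context, a minimal sketch of the classical 1/e stopping rule that the weighted and discounted variants above extend is given below; the candidate values are random numbers generated purely for illustration.

```python
import math
import random

def secretary_choice(values):
    """Observe the first n/e candidates, then accept the first later candidate
    that beats everything seen so far (or settle for the last one)."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]

random.seed(0)
trials = 10_000
hits = 0
for _ in range(trials):
    vals = [random.random() for _ in range(50)]
    hits += secretary_choice(vals) == max(vals)
print(f"picked the overall best candidate in {hits / trials:.1%} of trials")  # roughly 37%
```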

  12. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
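
    To make the first (deterministic) approach concrete, the sketch below evaluates the McGarr-style bound quoted above, i.e. an upper limit on seismic moment equal to the shear modulus times the net injected volume, and converts it to a moment magnitude; the shear modulus and injected volume are assumed, typical-order values, not data from the abstract.

```python
import math

def mcgarr_max_moment(shear_modulus_pa, injected_volume_m3):
    """Deterministic upper bound on seismic moment: M0_max = G * dV (N*m)."""
    return shear_modulus_pa * injected_volume_m3

def moment_magnitude(m0_newton_metre):
    """Standard conversion Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metre) - 9.1)

G = 3.0e10    # Pa, assumed crustal shear modulus
dV = 1.0e5    # m^3, assumed net injected fluid volume
m0 = mcgarr_max_moment(G, dV)
print(f"M0_max = {m0:.2e} N*m  ->  Mw_max = {moment_magnitude(m0):.2f}")
```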

  13. Secretary Problems: Weights and Discounts

    NARCIS (Netherlands)

    M. Babaioff; M. Dinitz; A. Gupta; N.S. Immorlica (Nicole Simone); K. Talwar

    2009-01-01

    The classical secretary problem studies the problem of selecting online an element (a “secretary”) with maximum value in a randomly ordered sequence. The difficulty lies in the fact that an element must be either selected or discarded upon its arrival, and this decision is irrevocable.

  14. An extended heterogeneous car-following model accounting for anticipation driving behavior and mixed maximum speeds

    Science.gov (United States)

    Sun, Fengxin; Wang, Jufeng; Cheng, Rongjun; Ge, Hongxia

    2018-02-01

    The optimal driving speeds of different vehicles may differ for the same headway. In the optimal velocity function of the optimal velocity (OV) model, the maximum speed vmax is an important parameter determining the optimal driving speed. A vehicle with a higher maximum speed is more willing to drive faster than one with a lower maximum speed in a similar situation. By incorporating the anticipation driving behavior of relative velocity and mixed maximum speeds of different percentages into the optimal velocity function, an extended heterogeneous car-following model is presented in this paper. The analytical linear stability condition for this extended heterogeneous traffic model is obtained using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from the interplay between anticipation driving behavior and heterogeneous maximum speeds in the optimal velocity function. The analytical and numerical results both demonstrate that strengthening the driver's anticipation effect can improve the stability of heterogeneous traffic flow, and that increasing the lowest value among the mixed maximum speeds results in more instability, whereas increasing the value or the proportion of the vehicles that already have the higher maximum speed affects stability differently at high and low traffic densities.
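
    A minimal numerical sketch of the heterogeneous-maximum-speed idea is given below. The optimal velocity function is the widely used tanh form of Bando et al., and the anticipation term is reduced to a simple relative-velocity contribution; the functional form and all parameter values are assumptions for illustration, not the exact model of the paper.

```python
import math

def ov_function(headway, v_max, safe_headway=4.0):
    """Optimal velocity for a given headway; scales with the vehicle's v_max."""
    return 0.5 * v_max * (math.tanh(headway - safe_headway) + math.tanh(safe_headway))

def acceleration(headway, rel_velocity, velocity, v_max,
                 sensitivity=1.0, anticipation=0.3):
    """Relaxation toward the optimal speed plus an anticipation (relative-velocity) term."""
    return (sensitivity * (ov_function(headway, v_max) - velocity)
            + anticipation * rel_velocity)

# Two vehicle classes with different maximum speeds see different optimal speeds
# at the same headway of 6 (arbitrary units):
for v_max in (2.0, 4.0):
    print(f"v_max = {v_max}:  optimal speed = {ov_function(6.0, v_max):.3f}")
```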

  15. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    Science.gov (United States)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Due to limited hydrogeological observation data and the high level of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements from groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the standard Kalman filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which takes the uncertainty of the data into account during parameter estimation. With these two methods, parameters can be estimated from hard data (certain) and soft data (uncertain) at the same time. In this study, Python and QGIS are used with the groundwater model (MODFLOW), and the Extended Kalman Filter and Bayesian Maximum Entropy Filtering are implemented in Python for parameter estimation; this provides a conventional filtering method while also accounting for the uncertainty of the data. The study was conducted through numerical model experiments that combine the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model, using virtual observation wells to observe the simulated groundwater system periodically. The results show that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimates.
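
    For readers unfamiliar with the filtering step mentioned above, the sketch below shows a generic one-dimensional Kalman update that fuses a prior parameter estimate with a well observation; it is not the paper's MODFLOW/Bayesian-maximum-entropy implementation, and all numbers are invented for the example.

```python
def kalman_update(x_prior, p_prior, z, r):
    """Combine a prior estimate (mean x_prior, variance p_prior) with a
    measurement z of variance r; return the posterior mean and variance."""
    k = p_prior / (p_prior + r)          # Kalman gain
    x_post = x_prior + k * (z - x_prior)
    p_post = (1.0 - k) * p_prior
    return x_post, p_post

x, p = 10.0, 4.0                         # prior parameter guess and its variance
for z in (12.1, 11.7, 11.9):             # synthetic monitoring-well observations (variance 1.0)
    x, p = kalman_update(x, p, z, r=1.0)
    print(f"estimate = {x:.3f}, variance = {p:.3f}")
```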

  16. Sequential flow membraneless microfluidic fuel cell with porous electrodes

    Energy Technology Data Exchange (ETDEWEB)

    Salloum, Kamil S.; Posner, Jonathan D. [Department of Mechanical and Aerospace Engineering, Arizona State University, Tempe, AZ 85287-6106 (United States); Hayes, Joel R.; Friesen, Cody A. [School of Materials, Arizona State University, Tempe, AZ 85287-8706 (United States)

    2008-05-15

    A novel convective flow membraneless microfluidic fuel cell with porous disk electrodes is described. In this fuel cell design, the fuel flows radially outward through a thin disk shaped anode and across a gap to a ring shaped cathode. An oxidant is introduced into the gap between anode and cathode and advects radially outward to the cathode. This fuel cell differs from previous membraneless designs in that the fuel and the oxidant flow in series, rather than in parallel, enabling independent control over the fuel and oxidant flow rate and the electrode areas. The cell uses formic acid as a fuel and potassium permanganate as the oxidant, both contained in a sulfuric acid electrolyte. The flow velocity field is examined using microscale particle image velocimetry and shown to be nearly axisymmetric and steady. The results show that increasing the electrolyte concentration reduces the cell Ohmic resistance, resulting in larger maximum currents and peak power densities. Increasing the flow rate delays the onset of mass transport limitations and reduces Ohmic losses, resulting in larger maximum currents and peak power densities. An average open circuit potential of 1.2 V is obtained with maximum current and power densities of 5.35 mA cm⁻² and 2.8 mW cm⁻², respectively (cell electrode area of 4.3 cm²). At a flow rate of 100 μL min⁻¹ a fuel utilization of 58% is obtained. (author)

  17. Gas/liquid flow configurations

    International Nuclear Information System (INIS)

    Bonin, Jacques; Fitremann, J.-M.

    1978-01-01

    Prediction of flow configurations (morphology) for gas/liquid or liquid/vapour mixtures is an important industrial problem which is not yet fully understood. The "Flow Configurations" Seminar of the Societe Hydrotechnique de France has framed recommendations for the investigation of potential industrial applications of flow configurations. (in French)

  18. On the complexity of the balanced vertex ordering problem

    Directory of Open Access Journals (Sweden)

    Jan Kara

    2007-01-01

    We consider the problem of finding a balanced ordering of the vertices of a graph. More precisely, we want to minimise the sum, taken over all vertices v, of the difference between the number of neighbours to the left and right of v. This problem, which has applications in graph drawing, was recently introduced by Biedl et al. [Discrete Applied Math. 148:27-48, 2005]. They proved that the problem is solvable in polynomial time for graphs with maximum degree three, but NP-hard for graphs with maximum degree six. One of our main results is to close the gap in these results, by proving NP-hardness for graphs with maximum degree four. Furthermore, we prove that the problem remains NP-hard for planar graphs with maximum degree four and for 5-regular graphs. On the other hand, we introduce a polynomial time algorithm that determines whether there is a vertex ordering with total imbalance smaller than a fixed constant, and a polynomial time algorithm that determines whether a given multigraph with even degrees has an 'almost balanced' ordering.
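
    The objective being minimized is easy to state in code: for a given ordering, sum over all vertices the absolute difference between the numbers of neighbours placed to the left and to the right of each vertex. The sketch below evaluates it for a small, made-up path graph.

```python
def total_imbalance(adjacency, ordering):
    """Sum over vertices of |#neighbours to the left - #neighbours to the right|."""
    position = {v: i for i, v in enumerate(ordering)}
    total = 0
    for v, neighbours in adjacency.items():
        left = sum(1 for u in neighbours if position[u] < position[v])
        right = len(neighbours) - left
        total += abs(left - right)
    return total

# Path graph a - b - c - d (illustration data only)
adjacency = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(total_imbalance(adjacency, ["a", "b", "c", "d"]))  # 2: well balanced
print(total_imbalance(adjacency, ["a", "c", "b", "d"]))  # 6: poorly balanced
```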

  19. Flow shop scheduling with heterogeneous workers

    OpenAIRE

    Benavides, Alexander J.; Ritt, Marcus; Miralles Insa, Cristóbal Javier

    2014-01-01

    We propose an extension to the flow shop scheduling problem named Heterogeneous Flow Shop Scheduling Problem (Het-FSSP), where two simultaneous issues have to be resolved: finding the best worker assignment to the workstations, and solving the corresponding scheduling problem. This problem is motivated by Sheltered Work centers for Disabled, whose main objective is the labor integration of persons with disabilities, an important aim not only for these centers but for any company d...

  20. Modelling of natural convection flows with large temperature differences: a benchmark problem for low Mach number solvers. Part. 1 reference solutions

    International Nuclear Information System (INIS)

    Le Quere, P.; Weisman, C.; Paillere, H.; Vierendeels, J.; Dick, E.; Becker, R.; Braack, M.; Locke, J.

    2005-01-01

    Heat transfer by natural convection and conduction in enclosures occurs in numerous practical situations including the cooling of nuclear reactors. For large temperature differences, the flow becomes compressible, with a strong coupling between the continuity, momentum and energy equations through the equation of state, and the fluid properties (viscosity, heat conductivity) also vary with temperature, making the Boussinesq flow approximation inappropriate and inaccurate. There are very few reference solutions in the literature on non-Boussinesq natural convection flows. We propose here a test case problem which extends the well-known De Vahl Davis differentially heated square cavity problem to the case of large temperature differences for which the Boussinesq approximation is no longer valid. The paper is split into two parts: in this first part, we propose as yet unpublished reference solutions for cases characterized by a non-dimensional temperature difference of 0.6, Ra = 10^6 (constant-property and variable-property cases) and Ra = 10^7 (variable-property case). These reference solutions were produced after a first international workshop organized by CEA and LIMSI in January 2000, in which the above authors volunteered to produce accurate numerical solutions from which the present reference solutions could be established. (authors)

  1. Unification of field theory and maximum entropy methods for learning probability densities

    Science.gov (United States)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
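
    A toy version of the maximum entropy side of this unification is sketched below: among all densities on a grid that reproduce a measured mean, the maxent solution has the exponential (Gibbs) form p(x) proportional to exp(lambda*x), and the Lagrange multiplier is found by a one-dimensional root search. The grid, the target mean and the use of scipy's brentq solver are choices made for this illustration only; the paper's field-theory machinery is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

x = np.linspace(0.0, 1.0, 201)            # support grid
dx = x[1] - x[0]
target_mean = 0.3                         # "measured" moment constraint (assumed)

def maxent_density(lam):
    w = np.exp(lam * x)
    return w / (w.sum() * dx)             # normalized exponential-family density

def mean_gap(lam):
    p = maxent_density(lam)
    return (x * p).sum() * dx - target_mean

lam_star = brentq(mean_gap, -50.0, 50.0)  # solve for the Lagrange multiplier
p = maxent_density(lam_star)
print(f"lambda = {lam_star:.3f}, reproduced mean = {(x * p).sum() * dx:.3f}")
```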

  2. Unification of field theory and maximum entropy methods for learning probability densities.

    Science.gov (United States)

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  3. On Euler's problem

    International Nuclear Information System (INIS)

    Egorov, Yurii V

    2013-01-01

    We consider the classical problem on the tallest column which was posed by Euler in 1757. Bernoulli-Euler theory serves today as the basis for the design of high buildings. This problem is reduced to the problem of finding the potential for the Sturm-Liouville equation corresponding to the maximum of the first eigenvalue. The problem has been studied by many mathematicians but we give the first rigorous proof of the existence and uniqueness of the optimal column and we give new formulae which let us find it. Our method is based on a new approach consisting in the study of critical points of a related nonlinear functional. Bibliography: 6 titles.

  4. A finite-element model for moving contact line problems in immiscible two-phase flow

    Science.gov (United States)

    Kucala, Alec

    2017-11-01

    Accurate modeling of moving contact line (MCL) problems is imperative in predicting capillary pressure vs. saturation curves, permeability, and preferential flow paths for a variety of applications, including geological carbon storage (GCS) and enhanced oil recovery (EOR). The macroscale movement of the contact line is dependent on the molecular interactions occurring at the three-phase interface, however most MCL problems require resolution at the meso- and macro-scale. A phenomenological model must be developed to account for the microscale interactions, as resolving both the macro- and micro-scale would render most problems computationally intractable. Here, a model for the moving contact line is presented as a weak forcing term in the Navier-Stokes equation and applied directly at the location of the three-phase interface point. The moving interface is tracked with the level set method and discretized using the conformal decomposition finite element method (CDFEM), allowing for the surface tension and the wetting model to be computed at the exact interface location. A variety of verification test cases for simple two- and three-dimensional geometries are presented to validate the current MCL model, which can exhibit grid independence when a proper scaling for the slip length is chosen. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.

  5. Industrial aspects of gas-liquid two-phase flow

    International Nuclear Information System (INIS)

    Hewitt, G.F.

    1977-01-01

    The lecture begins by reviewing the various types of plant in which two phase flow occurs. Specifically, boiling plant, condensing plant and pipelines are reviewed, and the various two phase flow problems occurring in them are described. Of course, many other kinds of chemical engineering plant involve two phase flow, but are somewhat outside the scope of this lecture. This would include distillation columns, vapor-liquid separators, absorption towers etc. Other areas of industrial two phase flow which have been omitted for space reasons from this lecture are those concerned with gas/solids, liquid/solid and liquid/liquid flows. There then follows a description of some of the two phase flow processes which are relevant in industrial equipment and where special problems occur. The topics chosen are as follows: (1) pressure drop; (2) horizontal tubes - separation effects, non-uniformities in heat transfer coefficient, effect of bends on dryout; (3) multicomponent mixtures - effects in pool boiling, mass transfer effects in condensation and Marangoni effects; (4) flow distribution - manifold problems in single phase flow, separation effects at a single T-junction in two phase flow and distribution in manifolds in two phase flow; (5) instability - oscillatory instability, special forms of instability in cryogenic systems; (6) nucleate boiling - effect of variability of surface, unresolved problems in forced convective nucleate boiling; and (7) shell side flows - flow patterns, cross flow boiling, condensation in cross flow.

  6. A General Stochastic Maximum Principle for SDEs of Mean-field Type

    International Nuclear Information System (INIS)

    Buckdahn, Rainer; Djehiche, Boualem; Li Juan

    2011-01-01

    We study the optimal control for stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first order adjoint equation turns out to be a linear mean-field backward SDE, while the second order adjoint equation remains the same as in Peng's stochastic maximum principle.

  7. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  8. A network-flow based valve-switching aware binding algorithm for flow-based microfluidic biochips

    DEFF Research Database (Denmark)

    Tseng, Kai-Han; You, Sheng-Chi; Minhass, Wajid Hassan

    2013-01-01

    Designs of flow-based microfluidic biochips are receiving much attention recently because they replace the conventional biological automation paradigm and are able to integrate different biochemical analysis functions on a chip. However, as the design complexity increases, a flow-based microfluidic biochip needs more chip-integrated micro-valves, i.e., the basic units of fluid-handling functionality, to manipulate the fluid flow for biochemical applications. Moreover, frequent switching of micro-valves results in decreased reliability. To minimize the valve-switching activities, we develop a network-flow based resource binding algorithm based on breadth-first search (BFS) and minimum cost maximum flow (MCMF) in architectural-level synthesis. The experimental results show that our methodology not only makes a significant reduction of valve-switching activities but also diminishes the application completion time.
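
    The MCMF step named above is a standard graph computation; a compact, generic example with networkx is sketched below. The tiny "operation -> resource" graph and its costs are invented stand-ins for the much larger binding network of the paper.

```python
import networkx as nx

G = nx.DiGraph()
# (u, v, capacity, unit cost): illustrative operation-to-resource arcs
edges = [("s", "op1", 1, 0), ("s", "op2", 1, 0),
         ("op1", "mixer1", 1, 2), ("op1", "mixer2", 1, 5),
         ("op2", "mixer1", 1, 4), ("op2", "mixer2", 1, 1),
         ("mixer1", "t", 1, 0), ("mixer2", "t", 1, 0)]
for u, v, cap, cost in edges:
    G.add_edge(u, v, capacity=cap, weight=cost)

flow = nx.max_flow_min_cost(G, "s", "t")        # bind every operation at minimum total cost
print("total cost:", nx.cost_of_flow(G, flow))  # -> 3 (op1->mixer1, op2->mixer2)
for u in flow:
    for v, f in flow[u].items():
        if f:
            print(f"{u} -> {v}: {f}")
```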

  9. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  10. The limit distribution of the maximum increment of a random walk with regularly varying jump size distribution

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Rackauskas, Alfredas

    2010-01-01

    In this paper, we deal with the asymptotic distribution of the maximum increment of a random walk with a regularly varying jump size distribution. This problem is motivated by a long-standing problem on change point detection for epidemic alternatives. It turns out that the limit distribution of the maximum increment of the random walk is one of the classical extreme value distributions, the Fréchet distribution. We prove the results in the general framework of point processes and for jump sizes taking values in a separable Banach space.

  11. Maximum principle for a stochastic delayed system involving terminal state constraints.

    Science.gov (United States)

    Wen, Jiaqiang; Shi, Yufeng

    2017-01-01

    We investigate a stochastic optimal control problem where the controlled system is depicted as a stochastic differential delayed equation; however, at the terminal time, the state is constrained in a convex set. We firstly introduce an equivalent backward delayed system depicted as a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main obtained result.

  12. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast

  13. Twenty-five years of maximum-entropy principle

    Science.gov (United States)

    Kapur, J. N.

    1983-04-01

    The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.

  14. Numerical investigation of flow instability in parallel channels with supercritical water

    International Nuclear Information System (INIS)

    Shitsi, Edward; Debrah, Seth Kofi; Agbodemegbe, Vincent Yao; Ampomah-Amoako, Emmanuel

    2017-01-01

    Highlights: •Supercritical flow instability in parallel channels is investigated. •Flow dynamics and heat transfer characteristics are analyzed. •Mass flow rate, pressure, heating power, and axial power shape have significant effects on flow instability. •Numerical results are validated with experimental results. -- Abstract: SCWR is one of the selected Gen IV reactors intended for electricity generation in the near future. It is a promising technology with higher efficiency than current LWRs, but it is not without heat transfer challenges and the associated flow instability. Supercritical flow instability is mainly caused by the sharp change in coolant properties around the pseudo-critical point of the working fluid, and research into this phenomenon is needed to address concerns of flow instability at supercritical pressures. Flow instability in parallel channels at supercritical pressures is investigated in this paper using a three dimensional (3D) numerical tool (STAR-CCM+). The dynamic characteristics, such as the amplitude and period of the out-of-phase mass flow oscillation at the heated channel inlet, and heat transfer characteristics, such as the maximum temperature of the heated channel outlet temperature oscillation, are discussed. Influences of system parameters such as axial power shape, pressure, mass flow rate, and gravity are discussed based on the obtained mass flow and temperature oscillations. The results show that the system parameters have a significant effect on the amplitude of the mass flow oscillation and the maximum temperature of the heated channel outlet temperature oscillation, but little effect on the period of the mass flow oscillation. The amplitude of the mass flow oscillation and the maximum temperature of the heated channel outlet temperature oscillation increase with heating power. The numerical results when compared to experiment data show that the 3D numerical tool (STAR-CCM+) could capture the dynamics and heat transfer characteristics of

  15. Mathematical modeling of swirled flows in industrial applications

    Science.gov (United States)

    Dekterev, A. A.; Gavrilov, A. A.; Sentyabov, A. V.

    2018-03-01

    Swirled flows are widely used in technological devices. Swirling flows are characterized by a wide range of flow regimes. 3D mathematical modeling of flows is widely used in research and design. For correct mathematical modeling of such a flow, it is necessary to use turbulence models, which take into account important features of the flow. Based on the experience of computational modeling of a wide class of problems with swirling flows, recommendations on the use of turbulence models for calculating the applied problems are proposed.

  16. Separation of flow

    CERN Document Server

    Chang, Paul K

    2014-01-01

    Interdisciplinary and Advanced Topics in Science and Engineering, Volume 3: Separation of Flow presents the problem of the separation of fluid flow. This book provides information covering the fields of basic physical processes, analyses, and experiments concerning flow separation. Organized into 12 chapters, this volume begins with an overview of the flow separation on the body surface as discussed in various classical examples. This text then examines the analytical and experimental results of the laminar boundary layer of steady, two-dimensional flows in the subsonic speed range. Other chapt

  17. Energy-Efficient Algorithm for Sensor Networks with Non-Uniform Maximum Transmission Range

    Directory of Open Access Journals (Sweden)

    Yimin Yu

    2011-06-01

    In wireless sensor networks (WSNs), the energy hole problem is a key factor affecting the network lifetime. In a circular multi-hop sensor network (modeled as concentric coronas), the optimal transmission ranges of all coronas can effectively improve network lifetime. In this paper, we investigate WSNs with non-uniform maximum transmission ranges, where sensor nodes deployed in different regions may differ in their maximum transmission range. Then, we propose an Energy-efficient algorithm for Non-uniform Maximum Transmission range (ENMT), which can search for approximately optimal transmission ranges of all coronas in order to prolong network lifetime. Furthermore, the simulation results indicate that ENMT performs better than other algorithms.

  18. The use of wavelet transforms in the solution of two-phase flow problems

    International Nuclear Information System (INIS)

    Moridis, G.J.; Nikolaou, M.; You, Yong

    1994-10-01

    In this paper we present the use of wavelets to solve the nonlinear Partial Differential Equation (PDE) of two-phase flow in one dimension. The wavelet transforms allow a drastically different approach to the discretization of space. In contrast to the traditional trigonometric basis functions, wavelets approximate a function not by cancellation but by placement of wavelets at appropriate locations. When an abrupt change, such as a shock wave or a spike, occurs in a function, only local coefficients in a wavelet approximation will be affected. The unique feature of wavelets is their Multi-Resolution Analysis (MRA) property, which allows seamless investigation at any spatial resolution. The use of wavelets is tested in the solution of the one-dimensional Buckley-Leverett problem against analytical solutions and solutions obtained from standard numerical models. Two classes of wavelet bases (Daubechies and Chui-Wang) and two methods (Galerkin and collocation) are investigated. We determine that the Chui-Wang wavelets and a collocation method provide the optimum wavelet solution for this type of problem. Increasing the resolution level improves the accuracy of the solution, but the order of the basis function seems to be far less important. Our results indicate that wavelet transforms are an effective and accurate method which does not suffer from oscillations or numerical smearing in the presence of steep fronts.
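
    The localization property that makes wavelets attractive for steep fronts can be seen with a few lines of PyWavelets: decomposing a step-like saturation profile leaves only the detail coefficients near the front non-zero. This uses a Daubechies basis for convenience and is only a qualitative illustration, not the Chui-Wang collocation solver of the paper.

```python
import numpy as np
import pywt

x = np.linspace(0.0, 1.0, 256)
saturation = np.where(x < 0.4, 1.0, 0.1)      # idealized Buckley-Leverett-like front

coeffs = pywt.wavedec(saturation, "db4", level=4)
details = coeffs[-1]                          # finest-level detail coefficients
large = np.flatnonzero(np.abs(details) > 1e-3)
print(f"{large.size} of {details.size} fine-scale coefficients are non-negligible")
print("indices of those coefficients (clustered around the front):", large)
```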

  19. Design and optimization of automotive thermoelectric generators for maximum fuel efficiency improvement

    International Nuclear Information System (INIS)

    Kempf, Nicholas; Zhang, Yanliang

    2016-01-01

    Highlights: • A three-dimensional automotive thermoelectric generator (TEG) model is developed. • Heat exchanger design and TEG configuration are optimized for maximum fuel efficiency increase. • Heat exchanger conductivity has a strong influence on maximum fuel efficiency increase. • TEG aspect ratio and fin height increase with heat exchanger thermal conductivity. • A 2.5% fuel efficiency increase is attainable with nanostructured half-Heusler modules. - Abstract: Automotive fuel efficiency can be increased by thermoelectric power generation using exhaust waste heat. A high-temperature thermoelectric generator (TEG) that converts engine exhaust waste heat into electricity is simulated based on a light-duty passenger vehicle with a 4-cylinder gasoline engine. Strategies to optimize TEG configuration and heat exchanger design for maximum fuel efficiency improvement are provided. Through comparison of stainless steel and silicon carbide heat exchangers, it is found that both the optimal TEG design and the maximum fuel efficiency increase are highly dependent on the thermal conductivity of the heat exchanger material. Significantly higher fuel efficiency increase can be obtained using silicon carbide heat exchangers at taller fins and a longer TEG along the exhaust flow direction when compared to stainless steel heat exchangers. Accounting for major parasitic losses, a maximum fuel efficiency increase of 2.5% is achievable using newly developed nanostructured bulk half-Heusler thermoelectric modules.

  20. Liquid velocity in upward and downward air-water flows

    International Nuclear Information System (INIS)

    Sun Xiaodong; Paranjape, Sidharth; Kim, Seungjin; Ozar, Basar; Ishii, Mamoru

    2004-01-01

    Local characteristics of the liquid phase in upward and downward air-water two-phase flows were experimentally investigated in a 50.8-mm inner-diameter round pipe. An integral laser Doppler anemometry (LDA) system was used to measure the axial liquid velocity and its fluctuations. No effect of the flow direction on the liquid velocity radial profile was observed in single-phase liquid benchmark experiments. Local multi-sensor conductivity probes were used to measure the radial profiles of the bubble velocity and the void fraction. The measurement results in the upward and downward two-phase flows are compared and discussed. The results in the downward flow demonstrated that the presence of the bubbles tended to flatten the liquid velocity radial profile, and the maximum liquid velocity could occur off the pipe centerline, in particular at relatively low flow rates. However, the maximum liquid velocity always occurred at the pipe center in the upward flow. Also, noticeable turbulence enhancement due to the bubbles in the two-phase flows was observed in the current experimental flow conditions. Furthermore, the distribution parameter and the void-weighted area-averaged drift velocity were obtained based on the definitions

  1. Structural state diagram of concentrated suspensions of jammed soft particles in oscillatory shear flow

    Science.gov (United States)

    Khabaz, Fardin; Cloitre, Michel; Bonnecaze, Roger T.

    2018-03-01

    In a recent study [Khabaz et al., Phys. Rev. Fluids 2, 093301 (2017), 10.1103/PhysRevFluids.2.093301], we showed that jammed soft particle glasses (SPGs) crystallize and order in steady shear flow. Here we investigate the rheology and microstructures of these suspensions in oscillatory shear flow using particle-dynamics simulations. The microstructures in both types of flows are similar, but their evolutions are very different. In both cases the monodisperse and polydisperse suspensions form crystalline and layered structures, respectively, at high shear rates. The crystals obtained in the oscillatory shear flow show fewer defects compared to those in the steady shear. SPGs remain glassy for maximum oscillatory strains less than about the yield strain of the material. For maximum strains greater than the yield strain, microstructural and rheological transitions occur for SPGs. Polydisperse SPGs rearrange into a layered structure parallel to the flow-vorticity plane for sufficiently high maximum shear rates and maximum strains about 10 times greater than the yield strain. Monodisperse suspensions form a face-centered cubic (FCC) structure when the maximum shear rate is low and hexagonal close-packed (HCP) structure when the maximum shear rate is high. In steady shear, the transition from a glassy state to a layered one for polydisperse suspensions included a significant induction strain before the transformation. In oscillatory shear, the transformation begins to occur immediately and with different microstructural changes. A state diagram for suspensions in large amplitude oscillatory shear flow is found to be in close but not exact agreement with the state diagram for steady shear flow. For more modest amplitudes of around one to five times the yield strain, there is a transition from a glassy structure to FCC and HCP crystals, at low and high frequencies, respectively, for monodisperse suspensions. At moderate frequencies, the transition is from glassy to HCP via

  2. Mathematical simulation of fluid flow and analysis of flow pattern in the flow path of low-head Kaplan turbine

    Directory of Open Access Journals (Sweden)

    A. V. Rusanov

    2016-12-01

    The results of a numerical investigation of the spatial flow of a viscous incompressible fluid in the flow path of the Kaplan turbine PL20 of the Kremenchug HPP, at the optimum runner blade setting angle φb = 15° and at the maximum setting angle φb = 35°, are shown. The flow simulation has been carried out on the basis of numerical integration of the Reynolds equations with an additional term containing artificial compressibility. The differential two-parameter model of Menter (SST) has been applied to take turbulent effects into account. Numerical integration of the equations is carried out using an implicit quasi-monotone Godunov-type scheme of second-order accuracy in space and time. The calculations have been conducted with the help of the software system IPMFlow. The analysis of the fluid flow in the flow path elements is presented, and the values of the hydraulic losses and the local cavitation coefficient have been obtained. Calculated and experimental results have been compared.

  3. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.

  4. On the quirks of maximum parsimony and likelihood on phylogenetic networks.

    Science.gov (United States)

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2017-03-21

    Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are becoming of more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Optimal operating conditions for maximum biogas production in anaerobic bioreactors

    International Nuclear Information System (INIS)

    Balmant, W.; Oliveira, B.H.; Mitchell, D.A.; Vargas, J.V.C.; Ordonez, J.C.

    2014-01-01

    The objective of this paper is to demonstrate the existence of an optimal residence time and substrate inlet mass flow rate for maximum methane production, through numerical simulations performed with a general transient mathematical model of an anaerobic biodigester introduced in this study. A simplified model is suggested herein, with only the most important reaction steps, each carried out by a single type of microorganism following Monod kinetics. The mathematical model was developed for a well mixed reactor (CSTR - Continuous Stirred-Tank Reactor), considering three main reaction steps: acidogenesis, with a μmax of 8.64 day⁻¹ and a KS of 250 mg/L; acetogenesis, with a μmax of 2.64 day⁻¹ and a KS of 32 mg/L; and methanogenesis, with a μmax of 1.392 day⁻¹ and a KS of 100 mg/L. The yield coefficients were 0.1 g-dry-cells/g-polymeric-compound for acidogenesis, 0.1 g-dry-cells/g-propionic-acid and 0.1 g-dry-cells/g-butyric-acid for acetogenesis, and 0.1 g-dry-cells/g-acetic-acid for methanogenesis. The model describes both the transient and the steady-state regime for several different biodigester designs and operating conditions. After experimental validation of the model, a parametric analysis was performed. It was found that biogas production is strongly dependent on the input polymeric substrate and fermentable monomer concentrations, but fairly independent of the input propionic, acetic and butyric acid concentrations. An optimisation study was then conducted, and optimal residence time and substrate inlet mass flow rate were found for maximum methane production. The optima found were very sharp, showing a sudden drop of the methane mass flow rate from the observed maximum to zero within a 20% range around the optimal operating parameters, which stresses the importance of their identification, no matter how complex the actual bioreactor design may be. The model is therefore expected to be a useful tool for simulation, design, control and
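
    A quick check of the Monod kinetics quoted above, mu = mu_max * S / (K_S + S), using the abstract's kinetic parameters; the substrate concentration of 100 mg/L is an arbitrary illustration value.

```python
def monod(mu_max, k_s, substrate):
    """Monod specific growth rate: mu = mu_max * S / (K_S + S)."""
    return mu_max * substrate / (k_s + substrate)

# (mu_max in 1/day, K_S in mg/L), taken from the abstract
steps = {
    "acidogenesis":   (8.64,  250.0),
    "acetogenesis":   (2.64,   32.0),
    "methanogenesis": (1.392, 100.0),
}
for name, (mu_max, k_s) in steps.items():
    print(f"{name}: mu = {monod(mu_max, k_s, substrate=100.0):.3f} 1/day at S = 100 mg/L")
```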

  6. Maximum entropy method approach to the θ term

    International Nuclear Information System (INIS)

    Imachi, Masahiro; Shinno, Yasuhiko; Yoneyama, Hiroshi

    2004-01-01

    In Monte Carlo simulations of lattice field theory with a θ term, one confronts the complex weight problem, or the sign problem. This is circumvented by performing the Fourier transform of the topological charge distribution P(Q). This procedure, however, causes a flattening phenomenon of the free energy f(θ), which makes the study of the phase structure unfeasible. In order to treat this problem, we apply the maximum entropy method (MEM) to a Gaussian form of P(Q), which serves as a good example to test whether the MEM can be applied effectively to the θ term. We study the case with flattening as well as that without flattening. In the latter case, the results of the MEM agree with those obtained from the direct application of the Fourier transform. For the former, the MEM gives a smoother f(θ) than that of the Fourier transform. Among the various default models investigated, the images which yield the least error do not show flattening, although some others cannot be excluded given the uncertainty related to the statistical error. (author)
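
    A sketch of the Fourier-transform step described above: for a normalized Gaussian topological charge distribution P(Q), the theta-dependent partition function is Z(theta) = sum_Q P(Q) exp(i*theta*Q) and the free energy density is f(theta) = -ln Z(theta) / V. The "volume" and susceptibility below are arbitrary; the point is only to show how Z becomes tiny as theta approaches pi, which is where the flattening problem of noisy data enters.

```python
import numpy as np

V = 50.0                                   # assumed lattice volume
chi = 0.1                                  # assumed topological susceptibility
Q = np.arange(-200, 201)                   # integer topological charges
P = np.exp(-Q**2 / (2.0 * chi * V))
P /= P.sum()                               # normalized Gaussian P(Q)

for theta in (0.5, 1.5, 3.0):
    Z = np.sum(P * np.exp(1j * theta * Q)).real
    print(f"theta = {theta:.1f}:  Z = {Z:.3e},  f = {-np.log(Z) / V:.4f}")
```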

  7. Flow modelling of plant processes for fault diagnosis

    International Nuclear Information System (INIS)

    Praetorius, N.; Duncan, K.D.

    1989-01-01

    Flow and its interruption or degradation is seen by many people in industry to be the essential problem in fault diagnosis. It is this observation which has motivated the representation of a complex simulation of a process plant presented here. The display system we have developed represents the mass and energy flow functions of the plant and the relationships between such flow functions. In this report we shall mainly discuss how such a representation seems to provide opportunities to design alarm systems as an integral part of the flow function representation itself and to solve two of the most intricate problems in diagnosis, namely the problem of symptom referral and the problem of confusable faults. (author)

  8. Method of straight lines for a Bingham problem as a model for the flow of waxy crude oils

    Directory of Open Access Journals (Sweden)

    German Ariel Torres

    2005-11-01

    In this work, we develop a method of straight lines for solving a Bingham problem that models the flow of waxy crude oils. The model describes the flow of mineral oils with a high content of paraffin at temperatures below the cloud point (i.e. the crystallization temperature of paraffin) and, more specifically, below the pour point at which the crystals aggregate and the oil takes on a gel-like structure. From the rheological point of view such a system can be modelled as a Bingham fluid whose parameters evolve according to the volume fraction of crystallized paraffin and the aggregation degree of the crystals. We prove that the method is well defined for all times, establish a monotonicity property and the qualitative behaviour of the solution, and give a convergence theorem. The results are compared with numerical experiments at the end of this article.

  9. A non-traditional fluid problem: transition between theoretical models from Stokes’ to turbulent flow

    Science.gov (United States)

    Salomone, Horacio D.; Olivieri, Néstor A.; Véliz, Maximiliano E.; Raviola, Lisandro A.

    2018-05-01

    In the context of fluid mechanics courses, it is customary to consider the problem of a sphere falling under the action of gravity inside a viscous fluid. Under suitable assumptions, this phenomenon can be modelled using Stokes' law and is routinely reproduced in teaching laboratories to determine terminal velocities and fluid viscosities. In many cases, however, the measured physical quantities show important deviations with respect to the predictions deduced from the simple Stokes' model, and the causes of these apparent 'anomalies' (for example, whether the flow is laminar or turbulent) are seldom discussed in the classroom. On the other hand, there are various variable-mass problems that students tackle during elementary mechanics courses and which are discussed in many textbooks. In this work, we combine both kinds of problems and analyse, both theoretically and experimentally, the evolution of a system composed of a sphere pulled by a chain of variable length inside a tube filled with water. We investigate the effects of different forces acting on the system such as weight, buoyancy, viscous friction and drag force. By means of a sequence of mathematical models of increasing complexity, we obtain a progressive fit that accounts for the experimental data. The contrast between the various models exposes the strengths and weaknesses of each one. The proposed experience can be useful for integrating concepts of elementary mechanics and fluids, and is suitable as laboratory practice, stressing the importance of the experimental validation of theoretical models and showing the model-building processes in a didactic framework.
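
    The baseline model the article starts from is Stokes' law, with terminal velocity v_t = 2 r^2 g (rho_s - rho_f) / (9 mu). The short check below, for an assumed small steel sphere falling in water, also evaluates the Reynolds number and shows why the plain Stokes prediction can fail badly, which is exactly the kind of deviation the article discusses.

```python
def stokes_terminal_velocity(radius, rho_sphere, rho_fluid, viscosity, g=9.81):
    """Terminal velocity from Stokes' drag: v_t = 2 r^2 g (rho_s - rho_f) / (9 mu)."""
    return 2.0 * radius**2 * g * (rho_sphere - rho_fluid) / (9.0 * viscosity)

def reynolds_number(velocity, diameter, rho_fluid, viscosity):
    """Check whether the creeping-flow assumption (Re << 1) actually holds."""
    return rho_fluid * velocity * diameter / viscosity

r = 0.5e-3                                   # m, assumed sphere radius
v = stokes_terminal_velocity(r, rho_sphere=7800.0, rho_fluid=1000.0, viscosity=1.0e-3)
re = reynolds_number(v, 2 * r, rho_fluid=1000.0, viscosity=1.0e-3)
print(f"Stokes terminal velocity = {v:.2f} m/s, Re = {re:.0f}  (far from Re << 1)")
```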

  10. Relationship between visual prostate score (VPSS) and maximum flow rate (Qmax) in men with urinary tract symptoms

    Directory of Open Access Journals (Sweden)

    Mazhar A. Memon

    2016-04-01

    ABSTRACT Objective: To evaluate the correlation between the visual prostate score (VPSS) and maximum flow rate (Qmax) in men with lower urinary tract symptoms. Material and Methods: This is a cross-sectional study conducted at a university hospital. Sixty-seven adult male patients >50 years of age were enrolled in the study after signing an informed consent. Qmax and voided volume were recorded from the uroflowmetry graph and, at the same time, VPSS was assessed. The education level was assessed in various defined groups. The Pearson correlation coefficient was computed for VPSS and Qmax. Results: Mean age was 66.1±10.1 years (median 68). The mean voided volume on uroflowmetry was 268±160 mL (median 208) and the mean Qmax was 9.6±4.96 mL/s (median 9.0). The mean VPSS score was 11.4±2.72 (median 11.0). In the univariate linear regression analysis there was a strong negative (Pearson) correlation between VPSS and Qmax (r=-0.848, p<0.001). In the multiple linear regression analysis there was a significant correlation between VPSS and Qmax after adjusting for the effect of age, voided volume (V.V) and level of education. Multiple linear regression analysis performed for the independent variables showed no significant correlation between VPSS and the independent factors, including age (p=0.27), LOE (p=0.941) and V.V (p=0.082). Conclusion: There is a significant negative correlation between VPSS and Qmax. The VPSS can be used in lieu of the IPSS score. Men even with a limited educational background can complete the VPSS without assistance.
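
    The key statistic in the abstract is a Pearson correlation between VPSS and Qmax. A minimal sketch of that computation with scipy is given below; the paired values are synthetic and chosen only to mimic a strong negative correlation, they are not the study's data.

```python
from scipy.stats import pearsonr

vpss = [8, 9, 10, 11, 12, 13, 14, 15]                  # hypothetical VPSS scores
qmax = [18.0, 16.5, 14.0, 12.5, 10.0, 8.5, 7.0, 5.5]   # hypothetical Qmax values (mL/s)

r, p_value = pearsonr(vpss, qmax)
print(f"r = {r:.3f}, p = {p_value:.4f}")               # strongly negative correlation
```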

  11. Linear Time Local Approximation Algorithm for Maximum Stable Marriage

    Directory of Open Access Journals (Sweden)

    Zoltán Király

    2013-08-01

    We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own list, and some information asked from members of these lists (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.
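
    For context, a compact version of the Gale-Shapley deferred-acceptance algorithm referenced above is sketched below, for the easy setting of complete, strictly ordered lists. The paper's approximation algorithm targets the harder setting with ties and incomplete lists; the preference data here are illustrative only.

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Return a stable matching as {proposer: reviewer} for complete strict lists."""
    free = list(proposer_prefs)                      # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}     # index of the next reviewer to try
    match = {}                                       # reviewer -> proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:         # r prefers the new proposer
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)                           # rejected, try the next reviewer later
    return {p: r for r, p in match.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))                      # {'m2': 'w1', 'm1': 'w2'}
```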

  12. Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs

    Directory of Open Access Journals (Sweden)

    Long Wan

    2015-01-01

    This paper investigates a single-machine two-agent scheduling problem to minimize the maximum costs with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of a job is characterized by a variable function dependent on the position of the job in the sequence. Each agent wants to fulfil the objective of minimizing the maximum cost of its own jobs. We develop a feasible method to achieve all the Pareto optimal points in polynomial time.

  13. Numerical studies of transverse curvature effects on transonic flow stability

    Science.gov (United States)

    Macaraeg, M. G.; Daudpota, Q. I.

    1992-01-01

    A numerical study of transverse curvature effects on compressible flow temporal stability for transonic to low supersonic Mach numbers is presented for axisymmetric modes. The mean flows studied include a similar boundary-layer profile and a nonsimilar axisymmetric boundary-layer solution. The effect of neglecting curvature in the mean flow produces only small quantitative changes in the disturbance growth rate. For transonic Mach numbers (1-1.4) and aerodynamically relevant Reynolds numbers (5000-10,000 based on displacement thickness), the maximum growth rate is found to increase with curvature - the maximum occurring at a nondimensional radius (based on displacement thickness) between 30 and 100.

  14. Maximum allowable heat flux for a submerged horizontal tube bundle

    International Nuclear Information System (INIS)

    McEligot, D.M.

    1995-01-01

    For application to industrial heating of large pools by immersed heat exchangers, the so-called maximum allowable (or "critical") heat flux is studied for unconfined tube bundles aligned horizontally in a pool without forced flow. In general, we are considering boiling after the pool reaches its saturation temperature rather than sub-cooled pool boiling, which should occur during early stages of transient operation. A combination of literature review and simple approximate analysis has been used. To date our main conclusion is that estimates of q″chf are highly uncertain for this configuration.

  15. ARBITRARY INTERACTION OF PLANE SUPERSONIC FLOWS

    Directory of Open Access Journals (Sweden)

    P. V. Bulat

    2015-11-01

    Subject of study. We consider the Riemann problem for the parameters at the collision of two plane flows at a certain angle. The problem is solved in the exact statement. Most cases of interference of both stationary and non-stationary gas-dynamic discontinuities followed by supersonic flows can be reduced to the problem of the arbitrary interaction of two supersonic flows. Depending on the ratio of the parameters in the flows, the outgoing discontinuities turn out to be shock waves or rarefaction waves. In some cases, there is no solution at all. It is important to know how to find the domains of existence of the relevant solutions, as the type of shock-wave structure in these domains is known in advance. The Riemann problem is used in numerical methods such as the method of Godunov. As a rule, an approximate solution is used, known as the Osher solution, but for a number of problems requiring high precision, this problem needs to be solved in the exact statement. Main results. Domains of existence for solutions with different types of shock-wave structure have been considered. The boundaries of existence of solutions with two outgoing shock waves are defined analytically, as well as those with an outgoing shock wave and a rarefaction wave. We identify the range of Mach numbers and interaction angles for which there is no solution. Specific flows with two outgoing rarefaction waves are not considered. Practical significance. The results supplement the interference theory of stationary gas-dynamic discontinuities and can be used to develop new methods of numerical calculation with extraction of discontinuities.

  16. Perspectives on Inmate Communication and Interpersonal Relations in the Maximum Security Prison.

    Science.gov (United States)

    Van Voorhis, Patricia; Meussling, Vonne

    In recent years, scholarly and applied inquiry has addressed the importance of interpersonal communication patterns and problems in maximum security institutions for males. As a result of this research, the number of programs designed to improve the interpersonal effectiveness of prison inmates has increased dramatically. Research suggests that…

  17. Scaling-Laws of Flow Entropy with Topological Metrics of Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Giovanni Francesco Santonastaso

    2018-01-01

    Full Text Available The robustness of water distribution networks is related to their connectivity and topological structure, which also affect their reliability. Flow entropy, based on Shannon's informational entropy, has been proposed as a measure of network redundancy and adopted as a proxy of reliability in optimal network design procedures. In this paper, the scaling properties of the flow entropy of water distribution networks with their size and other topological metrics are studied. To this aim, flow entropy, maximum flow entropy, link density and average path length have been evaluated for a set of 22 networks, both real and synthetic, with different size and topology. The results led to the identification of suitable scaling laws of flow entropy and maximum flow entropy with network size, in the form of power laws. The obtained relationships allow comparing the flow entropy of water distribution networks of different size, and provide an easy tool to define the maximum achievable entropy of a specific water distribution network. An example of application of the obtained relationships to the design of a water distribution network is provided, showing how, with a constrained multi-objective optimization procedure, a tradeoff between network cost and robustness is easily identified.
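    As a rough illustration of the flow-entropy idea, the sketch below computes a demand-weighted Shannon entropy of the pipe-flow splits at each node; this is a simplified proxy and not necessarily the exact formulation used by the authors, and the flows and demands are illustrative assumptions.

```python
import math

def node_entropy(outflows):
    """Shannon entropy of the flow split at one node (outflows in consistent units)."""
    total = sum(outflows)
    if total == 0:
        return 0.0
    fractions = [q / total for q in outflows if q > 0]
    return -sum(f * math.log(f) for f in fractions)

def network_flow_entropy(node_outflows, node_demands):
    """Demand-weighted sum of nodal entropies: a simplified flow-entropy proxy."""
    total_demand = sum(node_demands.values())
    return sum((node_demands[n] / total_demand) * node_entropy(q)
               for n, q in node_outflows.items())

# Illustrative 3-node example (hypothetical outgoing pipe flows and nodal demands).
outflows = {"n1": [40.0, 60.0], "n2": [30.0], "n3": [10.0, 10.0, 10.0]}
demands = {"n1": 50.0, "n2": 30.0, "n3": 20.0}
print(round(network_flow_entropy(outflows, demands), 3))
```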

  18. On Howard's conjecture in heterogeneous shear flow problem

    Indian Academy of Sciences (India)


    Department of Mathematics, H.P. University, Shimla 171 005, India; Sidharth Govt. Degree College, Nadaun, Dist. Hamirpur 177 033 ... in proving it in the case of the Garcia-type [3] flows wherein the basic velocity distribution has a point of ...

  19. Maximum entropy technique in the doublet structure analysis

    International Nuclear Information System (INIS)

    Belashev, B.Z.; Panebrattsev, Yu.A.; Shakhaliev, Eh.I.; Soroko, L.M.

    1998-01-01

    The Maximum Entropy Technique (MENT) for the solution of inverse problems is explained. An effective computer program for solving the system of nonlinear equations encountered in the MENT has been developed and tested. The possibilities of the MENT are demonstrated with an example of doublet structure analysis of noisy experimental data. A comparison of the MENT results with results of the Fourier algorithm technique without regularization is presented. The tolerable noise level is 30% for the MENT and only 0.1% for the Fourier algorithm.

  20. Development of evaluation method on flow-induced vibration and corrosion of components in two-phase flow by coupled analysis. 1. Evaluation of effects of flow-induced vibration on structural material integrity

    International Nuclear Information System (INIS)

    Naitoh, Masanori; Uchida, Shunsuke; Koshizuka, Seiichi; Ninokata, Hisashi; Anahara, Naoki; Dosaki, Koji; Katono, Kenichi; Akiyama, Minoru; Saitoh, Hiroaki

    2007-01-01

    Problems in major components and structural materials in nuclear power plants have often been caused by flow-induced vibration, corrosion and their overlapping effects. In order to establish safe and reliable plant operation, it is necessary to predict future problems for structural materials based on combined analyses of flow dynamics and corrosion, and to mitigate them before they become serious issues for plant operation. An innovative method for evaluating flow-induced vibration of structures in two-phase flow by coupled analyses of three-dimensional flow dynamics and structural response is introduced. (author)

  1. Numerical method for two-phase flow discontinuity propagation calculation

    International Nuclear Information System (INIS)

    Toumi, I.; Raymond, P.

    1989-01-01

    In this paper, we present a class of numerical shock-capturing schemes for hyperbolic systems of conservation laws modelling two-phase flow. First, we solve the Riemann problem for a two-phase flow with unequal velocities. Then, we construct two approximate Riemann solvers: a one-intermediate-state Riemann solver and a generalized Roe-type approximate Riemann solver. We give some numerical results for one-dimensional shock-tube problems and for a standard two-phase flow heat-addition problem involving two-phase flow instabilities.

  2. Parallel genetic algorithms with migration for the hybrid flow shop scheduling problem

    Directory of Open Access Journals (Sweden)

    K. Belkadi

    2006-01-01

    Full Text Available This paper addresses scheduling problems in hybrid flow shop-like systems with a migration parallel genetic algorithm (PGA_MIG). This parallel genetic algorithm model allows genetic diversity by the application of selection and reproduction mechanisms nearer to nature. The spatial structure of the population is modified by dividing it into disjoint subpopulations. From time to time, individuals are exchanged between the different subpopulations (migration). The influence of the parameters and of dedicated strategies is studied. These parameters are the number of independent subpopulations, the interconnection topology between subpopulations, the choice/replacement strategy for the migrant individuals, and the migration frequency. A comparison between the sequential and parallel versions of the genetic algorithm (GA) is provided. This comparison relates to the quality of the solution and the execution time of the two versions. The efficiency of the parallel model highly depends on the parameters, and especially on the migration frequency. In the same way, this parallel model gives a significant improvement in computational time if it is implemented on a parallel architecture offering an acceptable number of processors (as many processors as subpopulations).
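    A minimal sketch of the island-model idea (independent subpopulations with periodic ring-topology migration), written against a toy one-max objective; the population sizes, migration interval and operators are illustrative assumptions, not the PGA_MIG settings.

```python
import random

def evolve(pop, fitness, mut_rate=0.05):
    """One generation of tournament selection, uniform crossover and bit-flip mutation."""
    new_pop = []
    for _ in range(len(pop)):
        a = max(random.sample(pop, 2), key=fitness)
        b = max(random.sample(pop, 2), key=fitness)
        child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
        child = [1 - g if random.random() < mut_rate else g for g in child]
        new_pop.append(child)
    return new_pop

def island_ga(n_islands=4, pop_size=20, genome=30, generations=100, migrate_every=10):
    fitness = sum  # toy "one-max" objective: maximize the number of ones
    islands = [[[random.randint(0, 1) for _ in range(genome)] for _ in range(pop_size)]
               for _ in range(n_islands)]
    for gen in range(1, generations + 1):
        islands = [evolve(pop, fitness) for pop in islands]
        if gen % migrate_every == 0:           # periodic migration around a ring
            migrants = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                worst = min(range(len(pop)), key=lambda k: fitness(pop[k]))
                pop[worst] = migrants[(i - 1) % n_islands]
    return max((ind for pop in islands for ind in pop), key=fitness)

best = island_ga()
print(sum(best))
```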

  3. Air Flow and Gassing Potential in Micro-injection Moulding

    DEFF Research Database (Denmark)

    Griffithsa, C.A.; Dimova, S.S.; Scholz, S.

    2011-01-01

    valuable information about the process dynamics and also about the filling of a cavity by a polymer melt. In this paper, a novel experimental set-up is proposed to monitor maximum air flow and air flow work as an integral of the air flow over time by employing a MEMS gas sensor mounted inside the mould...

  4. A hybrid genetic algorithm for the distributed permutation flowshop scheduling problem

    Directory of Open Access Journals (Sweden)

    Jian Gao

    2011-08-01

    Full Text Available The Distributed Permutation Flowshop Scheduling Problem (DPFSP) is a recently proposed scheduling problem that generalizes the classical permutation flow shop scheduling problem. The DPFSP is NP-hard in general, and studies on algorithms for solving it are at an early stage. In this paper, we propose a GA-based algorithm, denoted GA_LS, for solving this problem with the objective of minimizing the maximum completion time. In the proposed GA_LS, crossover and mutation operators are designed to suit the representation of DPFSP solutions, in which a set of partial job sequences is employed. Furthermore, GA_LS utilizes an efficient local search method to explore neighboring solutions; the local search uses three proposed rules that move jobs within a factory or between two factories. Intensive experiments on benchmark instances extended from the Taillard instances are carried out. The results indicate that the proposed hybrid genetic algorithm obtains better solutions than the existing algorithms for the DPFSP: it achieves a better relative percentage deviation, and the differences in the results are statistically significant. Best-known solutions for most instances are also updated by our algorithm. Moreover, we show the efficiency of GA_LS by comparing it with similar genetic algorithms using the existing local search methods.
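    For reference, evaluating any DPFSP solution reduces to computing a permutation flow shop makespan in each factory and taking the maximum; the sketch below shows only that evaluation step (illustrative job data and factory assignment, no claim about GA_LS internals).

```python
def flowshop_makespan(perm, proc):
    """Makespan of a permutation flow shop; proc[j][m] = time of job j on machine m."""
    n_machines = len(next(iter(proc.values())))
    completion = [0] * n_machines          # completion times of the previous job
    for job in perm:
        for m in range(n_machines):
            ready = completion[m - 1] if m > 0 else 0
            completion[m] = max(completion[m], ready) + proc[job][m]
    return completion[-1]

def dpfsp_makespan(factory_sequences, proc):
    """DPFSP objective: the largest makespan over all factories."""
    return max(flowshop_makespan(seq, proc) for seq in factory_sequences)

# Illustrative instance: 4 jobs, 3 machines, 2 factories.
proc = {1: [3, 2, 4], 2: [2, 5, 1], 3: [4, 1, 3], 4: [1, 3, 2]}
print(dpfsp_makespan([[1, 3], [2, 4]], proc))
```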

  5. Directional Overcurrent Relays Coordination Problems in Distributed Generation Systems

    Directory of Open Access Journals (Sweden)

    Jakub Ehrenberger

    2017-09-01

    Full Text Available This paper proposes a new approach to distributed generation system protection coordination based on directional overcurrent protections with inverse-time characteristics. The key question of protection coordination is the determination of correct values of all inverse-time characteristic coefficients. The coefficients must be chosen considering both sufficiently short tripping times and sufficiently long selectivity times. In the paper a new approach to protection coordination is designed, in which not just some, but all the required types of short-circuit contributions are taken into account. In radial systems, if the pickup currents are correctly chosen, coordination for maximum contributions is enough to ensure selectivity times for all the required short-circuit types. In distributed generation systems, due to the different contributions flowing through the primary and selective protections, coordination for maximum contributions is not enough; all the short-circuit types must be taken into account, and protection coordination becomes a complex problem. A possible solution, based on an appropriately designed optimization, is proposed in the paper. By repeating a simple optimization considering only one short-circuit type, protection coordination considering all the required short-circuit types has been achieved. To show the importance of considering all the types of short-circuit contributions, setting optimizations with one (the highest) and with all the types of short-circuit contributions have been performed. Finally, selectivity time values are explored throughout the entire protected section, and the two settings are compared.
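    A minimal sketch of the kind of constraint involved, using the IEC standard-inverse characteristic t = TMS · 0.14 / ((I/I_pickup)^0.02 − 1) and a fixed grading margin; the relay settings, fault-current pairs and margin below are illustrative assumptions, not the paper's optimization.

```python
def iec_standard_inverse(tms, i_fault, i_pickup, k=0.14, alpha=0.02):
    """Operating time of an IEC standard-inverse overcurrent relay."""
    return tms * k / ((i_fault / i_pickup) ** alpha - 1.0)

def selective(primary, backup, fault_currents, margin=0.3):
    """Check that the backup relay operates at least `margin` seconds after the
    primary relay for every considered short-circuit contribution."""
    for i_primary, i_backup in fault_currents:
        t_p = iec_standard_inverse(primary["tms"], i_primary, primary["pickup"])
        t_b = iec_standard_inverse(backup["tms"], i_backup, backup["pickup"])
        if t_b - t_p < margin:
            return False
    return True

# Illustrative settings and (primary, backup) contributions for several fault types.
primary = {"tms": 0.10, "pickup": 200.0}
backup = {"tms": 0.30, "pickup": 250.0}
faults = [(2000.0, 1500.0), (1200.0, 900.0), (800.0, 700.0)]
print(selective(primary, backup, faults))
```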

  6. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    Science.gov (United States)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using ``snapshots'' from the parameter reduced posterior, and the discrete
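    A minimal sketch of the snapshot-POD step mentioned above, built from an SVD of a snapshot matrix; the data are synthetic and the truncation rule is a generic energy criterion, not the authors' ice-sheet implementation.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Return a reduced basis capturing the requested fraction of snapshot energy.

    snapshots: (n_dof, n_snapshots) array whose columns are state snapshots.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r], mean

# Synthetic snapshots: a few smooth modes plus small noise on a 1-D grid.
x = np.linspace(0.0, 1.0, 200)
snaps = np.column_stack([np.sin(2 * np.pi * k * x) + 0.01 * np.random.randn(x.size)
                         for k in (1, 2, 3) for _ in range(10)])
basis, mean = pod_basis(snaps)
print(basis.shape)   # (200, r) with r much smaller than the 30 snapshots
```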

  7. 非对称和不定椭圆问题的有限体积元方法的最大模估计%Maximum Norm Estimates for Finite Volume Element Method for Non-selfadjoint and Indefinite Elliptic Problems

    Institute of Scientific and Technical Information of China (English)

    毕春加

    2005-01-01

    In this paper, we establish the maximum norm estimates of the solutions of the finite volume element method (FVE) based on the P1 conforming element for the non-selfadjoint and indefinite elliptic problems.

  8. End-of-life flows of multiple cycle consumer products

    International Nuclear Information System (INIS)

    Tsiliyannis, C.A.

    2011-01-01

    Explicit expressions for the end-of-life (EOL) flows of single and multiple cycle products (MCPs) are presented, including deterministic and stochastic EOL exit. The expressions are given in terms of the physical parameters (maximum lifetime, T, annual cycling frequency, f, number of cycles, N, and early discard or usage loss). EOL flows are also obtained for hi-tech products, which are rapidly renewed and thus may not attain steady state (e.g. electronic products, passenger cars). A ten-step recursive procedure for obtaining the dynamic EOL flow evolution is proposed. Applications of the EOL expressions and the ten-step procedure are given for electric household appliances, industrial machinery, tyres, vehicles and buildings, for both deterministic and stochastic EOL exit (normal, Weibull and uniform exit distributions). The effect of the physical parameters and the stochastic characteristics on the EOL flow is investigated in the examples: it is shown that the EOL flow profile is determined primarily by the early discard dynamics; it also depends strongly on longevity and cycling frequency: higher lifetime or early discard/loss implies lower dynamic and steady state EOL flows. The stochastic exit shapes the overall EOL dynamic profile: under a symmetric EOL exit distribution, as the variance of the distribution increases (uniform to normal to deterministic) the initial EOL flow rise becomes steeper but the steady state or maximum EOL flow level is lower. The steepest EOL flow profile, featuring the highest steady state or maximum level as well, corresponds to a skew, earlier-shifted EOL exit (e.g. Weibull). Since the EOL flow of returned products constitutes the sink of the reuse/remanufacturing cycle (sink to recycle), the results may be used in closed-loop product lifecycle management operations for scheduling and sizing reverse manufacturing and for planning recycling logistics. Decoupling and quantification of both the full-age EOL and of the early discard flows is

  9. Flow rate analysis of wastewater inside reactor tanks on tofu wastewater treatment plant

    Science.gov (United States)

    Mamat; Sintawardani, N.; Astuti, J. T.; Nilawati, D.; Wulan, D. R.; Muchlis; Sriwuryandari, L.; Sembiring, T.; Jern, N. W.

    2017-03-01

    The research aimed to analyse the flow rate of wastewater inside reactor tanks in which a number of bamboo cuttings were placed, so that resistance to the wastewater flow inside the reactor tanks would not occur and biogas fuel would be produced optimally. Wastewater from eleven tofu factories was treated by a multi-stage anaerobic process to reduce its organic pollutant load and produce biogas. The biogas plant has six reactor tanks, each with a capacity of 18 m3 for wastewater and 4.5 m3 for the gas dome. Wastewater was pumped from collecting ponds to the reactors either in series or in parallel. The maximum pump capacity, head, and electrical motor power were 5 m3/h, 50 m, and 0.75 HP, respectively. The maximum pressure of biogas inside the reactor tanks was 55 mbar above atmospheric pressure. In each reactor tank, 1,400 pieces of cut bamboo of 50-60 mm diameter and 100 mm length were used as bacterial growth media, covering around 14,287 m2 of bamboo area; the cross-sectional area of the inner reactor was 4.9 m2. In each reactor, a 6-inch PVC pipe was installed vertically as a channel. When the channels inside the reactors were opened, the wastewater flow rate was 6×10^-1 L/s. Conversely, when the channels were closed at the upper part, the wastewater flow inside the first reactor affected and raised the gas dome. Initially, wastewater flowed into each reactor by gravity, with a head difference between the second and third reactors of 15×10^-2 m. However, the head loss at the second reactor was equal to that of the third reactor, 8.422×10^-4 m. As a result, the wastewater flow at the second and third reactors was stagnant. To overcome this problem, the pumps for each reactor should be installed in series, so that the output from the first reactor and the others becomes equal and the biogas space is not filled by wastewater; biogas production will then be optimum.

  10. Autogenic dynamics of debris-flow fans

    Science.gov (United States)

    van den Berg, Wilco; de Haas, Tjalling; Braat, Lisanne; Kleinhans, Maarten

    2015-04-01

    Alluvial fans develop their semi-conical shape by cyclic avulsion of their geomorphologically active sector from a fixed fan apex. These cyclic avulsions have been attributed to both allogenic and autogenic forcings and processes. Autogenic dynamics have been extensively studied on fluvial fans through physical scale experiments, and are governed by cyclic alternations of aggradation by unconfined sheet flow, fanhead incision leading to channelized flow, channel backfilling and avulsion. On debris-flow fans, however, autogenic dynamics have not yet been directly observed. We experimentally created debris-flow fans under constant extrinsic forcings, and show that autogenic dynamics are a fundamental intrinsic process on debris-flow fans. We found that autogenic cycles on debris-flow fans are driven by sequences of backfilling, avulsion and channelization, similar to the cycles on fluvial fans. However, the processes that govern these sequences are unique for debris-flow fans, and differ fundamentally from the processes that govern autogenic dynamics on fluvial fans. We experimentally observed that backfilling commenced after the debris flows reached their maximum possible extent. The next debris flows then progressively became shorter, driven by feedbacks on fan morphology and flow-dynamics. The progressively decreasing debris-flow length caused in-channel sedimentation, which led to increasing channel overflow and wider debris flows. This reduced the impulse of the liquefied flow body to the flow front, which then further reduced flow velocity and runout length, and induced further in-channel sedimentation. This commenced a positive feedback wherein debris flows became increasingly short and wide, until the channel was completely filled and the apex cross-profile was plano-convex. At this point, there was no preferential transport direction by channelization, and the debris flows progressively avulsed towards the steepest, preferential, flow path. Simultaneously

  11. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
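    A minimal sketch of the optimum described above: the supply function E(ψ_leaf) = K(ψ)·(ψ_soil − ψ_leaf) is maximized over leaf water potential, with a Weibull-type vulnerability curve for the hydraulic conductivity. The parameter values and the use of a mean xylem potential are illustrative assumptions, not values from the authors' database.

```python
import numpy as np

def conductivity(psi, k_max=2.0, psi_50=-2.5, shape=3.0):
    """Weibull-type xylem vulnerability curve (psi in MPa; K halves at psi_50)."""
    return k_max * np.exp(-np.log(2.0) * (psi / psi_50) ** shape)

def supply(psi_leaf, psi_soil=-0.5):
    """Steady-state water supply for a given leaf water potential."""
    psi_mean = 0.5 * (psi_soil + psi_leaf)   # crude average potential along the xylem
    return conductivity(psi_mean) * (psi_soil - psi_leaf)

# Scan leaf water potentials to locate the maximum sustainable transpiration.
psi_leaf = np.linspace(-8.0, -0.5, 500)
E = supply(psi_leaf)
i_max = int(np.argmax(E))
print(f"E_max = {E[i_max]:.3f} at psi_leaf = {psi_leaf[i_max]:.2f} MPa")
```

The negative feedback described in the abstract is visible here: a more negative leaf potential increases the driving force but reduces the conductivity, so the product passes through a maximum.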

  12. Measurements of local two-phase flow parameters in a boiling flow channel

    International Nuclear Information System (INIS)

    Yun, Byong Jo; Park, Goon-CherI; Chung, Moon Ki; Song, Chul Hwa

    1998-01-01

    Local two-phase flow parameters were measured to investigate the internal flow structure of steam-water boiling flow in an annulus channel. Two kinds of measuring methods for local two-phase flow parameters were investigated: a two-conductivity probe for local vapor parameters and a Pitot tube for local liquid parameters. Using these probes, the local distributions of phasic velocities, interfacial area concentration (IAC) and void fraction were measured. In this study, the maximum local void fraction under subcooled boiling conditions is observed around the heating rod, and the local void fraction decreases smoothly from the surface of the heating rod to the channel center without any wall void peaking, which was observed in air-water experiments. The distributions of local IAC and bubble frequency coincide with those of local void fraction for a given area-averaged void fraction. (author)

  13. Gaseous slip flow analysis of a micromachined flow sensor for ultra small flow applications

    Science.gov (United States)

    Jang, Jaesung; Wereley, Steven T.

    2007-02-01

    The velocity slip of a fluid at a wall is one of the most typical phenomena in microscale gas flows. This paper presents a flow analysis considering the velocity slip in a capacitive micro gas flow sensor based on pressure difference measurements along a microchannel. The tangential momentum accommodation coefficient (TMAC) measurements of a particular channel wall in planar microchannels will be presented while the previous micro gas flow studies have been based on the same TMACs on both walls. The sensors consist of a pair of capacitive pressure sensors, inlet/outlet and a microchannel. The main microchannel is 128.0 µm wide, 4.64 µm deep and 5680 µm long, and operated under nearly atmospheric conditions where the outlet Knudsen number is 0.0137. The sensor was fabricated using silicon wet etching, ultrasonic drilling, deep reactive ion etching (DRIE) and anodic bonding. The capacitance change of the sensor and the mass flow rate of nitrogen were measured as the inlet-to-outlet pressure ratio was varied from 1.00 to 1.24. The measured maximum mass flow rate was 3.86 × 10^-10 kg s^-1 (0.019 sccm) at the highest pressure ratio tested. As the pressure difference increased, both the capacitance of the differential pressure sensor and the flow rate through the main microchannel increased. The laminar friction constant f·Re, an important consideration in sensor design, varied from the incompressible no-slip case and the mass sensitivity and resolution of this sensor were discussed. Using the current slip flow formulae, a microchannel with much smaller mass flow rates can be designed at the same pressure ratios.
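    For context, the first-order slip-flow result commonly used for long, shallow channels (isothermal flow of an ideal gas between parallel plates) is quoted below as a standard reference expression, not as a result taken from this paper:

```latex
\dot{m} \;=\; \frac{h^{3} w\, p_o^{2}}{24\,\mu L R T}
\left[\, \mathcal{P}^{2} - 1 \;+\; 12\,\frac{2-\sigma_v}{\sigma_v}\,
\mathrm{Kn}_o\,\bigl(\mathcal{P}-1\bigr) \right],
\qquad \mathcal{P} = \frac{p_i}{p_o}
```

Here h, w and L are the channel depth, width and length, μ the viscosity, Kn_o the outlet Knudsen number and σ_v the TMAC; fitting measured mass flow against an expression of this kind is a common way of extracting σ_v from microchannel experiments.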

  14. PACTOLUS, Nuclear Power Plant Cost and Economics by Discounted Cash Flow Method. CLOTHO, Mass Flow Data Calculation for Program PACTOLUS

    International Nuclear Information System (INIS)

    Haffner, D.R.

    1976-01-01

    1 - Description of problem or function: PACTOLUS is a code for computing nuclear power costs using the discounted cash flow method. The cash flows are generated from input unit costs, time schedules and burnup data. CLOTHO calculates and communicates to PACTOLUS mass flow data to match a specified load factor history. 2 - Method of solution: Plant lifetime power costs are calculated using the discounted cash flow method. 3 - Restrictions on the complexity of the problem - Maxima of: 40 annual time periods into which all costs and mass flows are accumulated, 20 isotopic mass flows charged into and discharged from the reactor model
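    A minimal sketch of the discounted cash flow idea behind such codes (a levelized cost computed as discounted costs over discounted energy); the cash flows, discount rate and generation figures are illustrative assumptions, not PACTOLUS input data.

```python
def levelized_cost(costs, energy, rate):
    """Levelized cost = sum of discounted annual costs / sum of discounted energy.

    costs: annual costs per period; energy: annual energy delivered per period.
    """
    disc_costs = sum(c / (1.0 + rate) ** t for t, c in enumerate(costs, start=1))
    disc_energy = sum(e / (1.0 + rate) ** t for t, e in enumerate(energy, start=1))
    return disc_costs / disc_energy

# Illustrative 5-period example (costs in M$, energy in GWh, 8% discount rate).
annual_costs = [120.0, 40.0, 40.0, 45.0, 45.0]   # first period includes capital outlay
annual_energy = [0.0, 800.0, 800.0, 780.0, 780.0]
print(f"{levelized_cost(annual_costs, annual_energy, 0.08):.4f} M$/GWh")
```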

  15. Maximum entropy restoration of laser fusion target x-ray photographs

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.

    1976-01-01

    Maximum entropy principles were used to analyze the microdensitometer traces of a laser-fusion target photograph. The object is a glowing laser-fusion target microsphere 0.95 cm from a pinhole of radius 2 × 10^-4 cm, the image is 7.2 cm from the pinhole, and the photon wavelength is likely to be 6.2 × 10^-8 cm. Some computational aspects of the problem are also considered.

  16. Singularities in Free Surface Flows

    Science.gov (United States)

    Thete, Sumeet Suresh

    Free surface flows, where the shape of the interface separating two or more phases or liquids is unknown a priori, are commonplace in industrial applications and nature. Distribution of drop sizes, coalescence rate of drops, and the behavior of thin liquid films are crucial to understanding and enhancing industrial practices such as ink-jet printing, spraying, separations of chemicals, and coating flows. When a contiguous mass of liquid such as a drop, filament or a film undergoes breakup to give rise to multiple masses, the topological transition is accompanied by a finite-time singularity. Such a singularity also arises when two or more masses of liquid merge into each other, or coalesce. Thus the dynamics close to the singularity determine the fate of about-to-form drops or films and the applications they are involved in, and therefore need to be analyzed precisely. The primary goal of this thesis is to resolve and analyze the dynamics close to singularity when free surface flows experience a topological transition, using a combination of theory, experiments, and numerical simulations. The first problem under consideration focuses on the dynamics following flow shut-off in bottle filling applications that are relevant to the pharmaceutical and consumer products industry, using numerical techniques based on the Galerkin Finite Element Method (GFEM). The second problem addresses the dual flow behavior of aqueous foams that are observed in oil and gas fields, and estimates the relevant parameters that describe such flows through a series of experiments. The third problem aims at understanding the drop formation of Newtonian and Carreau fluids, computationally using GFEM. The drops are formed as a result of imposed flow rates or expanding bubbles, similar to those of piezo-actuated and thermal ink-jet nozzles. The focus of the fourth problem is on the evolution of thinning threads of Newtonian fluids and suspensions towards singularity, using computations based on GFEM and experimental

  17. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

    In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which is originally designed for the compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including the Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture enormous details and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between water shock wave and air bubble, between air shock wave and water column(s), and underwater explosion

  18. What is the relationship between free flow and pressure flow studies in women?

    Science.gov (United States)

    Duckett, Jonathan; Cheema, Katherine; Patil, Avanti; Basu, Maya; Beale, Sian; Wise, Brian

    2013-03-01

    The relationship between free flow (FFS) and pressure flow (PFS) voiding studies remains uncertain, and the effect of a urethral catheter on flow rates has not been determined. The relationship between residuals obtained at FFS and PFS has yet to be established. This was a prospective cohort study based on 474 consecutive women undergoing cystometry using different sized urethral catheters at different centres. FFS and PFS data were compared for different conditions and the relationship of residuals analysed for FFS and PFS. The null hypothesis was that urethral catheters do not produce an alteration in maximum flow rates for PFS and FFS studies. Urethral catheterisation results in lower flow rates (p < ...), even when flows are corrected for voided volume (p < ...); maximum flow rates are lower in women with DO than USI (p < ...) ... flow rates and vice versa. There was no significant difference between the mean residuals of the two groups (FFS vs PFS; two-tailed t = 0.54, p = 0.59). Positive residuals in FFS showed a good association with positive residuals in the PFS (r = 0.53, p < ...) ... flow rates. The relationship can be compared mathematically. The null hypothesis can be rejected.

  19. An efficient genetic algorithm for a hybrid flow shop scheduling problem with time lags and sequence-dependent setup time

    Directory of Open Access Journals (Sweden)

    Farahmand-Mehr Mohammad

    2014-01-01

    Full Text Available In this paper, a hybrid flow shop scheduling problem with a new approach considering time lags and sequence-dependent setup times in realistic situations is presented. Since few works have been carried out in this field, the necessity of finding better solutions is a motivation to extend heuristic or meta-heuristic algorithms. This type of production system is found in industries such as food processing, chemical, textile, metallurgical, printed circuit board, and automobile manufacturing. A mixed integer linear programming (MILP) model is proposed to minimize the makespan. Since this problem is known to be NP-hard, a meta-heuristic algorithm, namely a Genetic Algorithm (GA), and three heuristic algorithms (Johnson, SPTCH and Palmer) are proposed. Numerical experiments of different sizes are implemented to evaluate the performance of the presented mathematical programming model and the designed GA in comparison to the heuristic algorithms and a benchmark algorithm. Computational results indicate that the designed GA can produce near-optimal solutions in a short computational time for problems of different sizes.
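    Of the heuristics mentioned, Johnson's rule is the simplest to state: for a two-machine flow shop, jobs whose first-machine time is smaller than their second-machine time go first in increasing order of the first-machine time, and the remaining jobs go last in decreasing order of the second-machine time. A minimal sketch follows (two machines only, ignoring the time lags and setups treated in the paper; job data are illustrative).

```python
def johnsons_rule(jobs):
    """jobs: dict name -> (p1, p2). Returns a two-machine makespan-minimizing sequence."""
    front = sorted((j for j, (p1, p2) in jobs.items() if p1 < p2),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j, (p1, p2) in jobs.items() if p1 >= p2),
                  key=lambda j: jobs[j][1], reverse=True)
    return front + back

jobs = {"J1": (3, 6), "J2": (5, 2), "J3": (1, 2), "J4": (6, 6), "J5": (7, 5)}
print(johnsons_rule(jobs))   # ['J3', 'J1', 'J4', 'J5', 'J2']
```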

  20. A Singular Perturbation Problem for Steady State Conversion of Methane Oxidation in Reverse Flow Reactor

    Directory of Open Access Journals (Sweden)

    Aang Nuryaman

    2012-11-01

    Full Text Available The governing equations describing the methane oxidation process in a reverse flow reactor are given by a set of convection-diffusion equations with a nonlinear reaction term, where temperature and methane conversion are the dependent variables. In this study, the process is assumed to follow a one-dimensional pseudo-homogeneous model and to take place with a reaction rate for which the whole reactor process is still workable; thus, the reaction rate can proceed at a fixed temperature. Under this condition, we restrict ourselves to solving the equation for the conversion only. From the available data, it turns out that the ratio of the diffusion term to the reaction term is small. Hence, this ratio is treated as a small parameter in our model, and this leads to a singular perturbation problem. With a small parameter in front of the highest-order term, numerical difficulties arise. Here, we present an analytical solution by means of matched asymptotic expansions. The result shows that, up to and including the first order of approximation, the solution is in agreement with the exact and numerical solutions of the boundary value problem.
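    To illustrate the matched-asymptotics idea in the simplest setting (a textbook linear model problem, not the reactor equations of the paper), consider the boundary value problem ε y'' + y' + y = 0 on [0,1] with y(0) = 0, y(1) = 1 and 0 < ε ≪ 1:

```latex
\text{outer: } y_{\mathrm{out}}(x) = e^{\,1-x}, \qquad
\text{inner (layer at } x=0,\; X = x/\varepsilon\text{): } Y(X) = e\bigl(1 - e^{-X}\bigr),
\qquad
\text{composite: } y(x) \;\approx\; e^{\,1-x} - e^{\,1 - x/\varepsilon}.
```

The composite expansion is built as outer plus inner minus their common limit, satisfies both boundary conditions, and is accurate to leading order both inside and outside the boundary layer; the paper applies the same construction to the conversion equation with the diffusion-to-reaction ratio playing the role of ε.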