WorldWideScience

Sample records for sample problems flow

  1. Sample problem calculations related to two-phase flow transients in a PWR relief-piping network

    International Nuclear Information System (INIS)

    Shin, Y.W.; Wiedermann, A.H.

    1981-03-01

    Two sample problems related to the fast transients of water/steam flow in the relief line of a PWR pressurizer were calculated with a network-flow analysis computer code, STAC (System Transient-Flow Analysis Code). The sample problems were supplied by EPRI and are designed to test whether computer codes or computational methods have the basic capability to handle the important flow features present in a typical relief line of a PWR pressurizer. It was found necessary to implement a number of additional boundary conditions into the STAC code in order to calculate the sample problems. These include the dynamics of the fluid interface, which is treated as a moving boundary. This report describes the methodologies adopted for handling the newly implemented boundary conditions and the computational results of the two sample problems. In order to demonstrate the accuracy achieved by the STAC code, analytical solutions are also obtained and used as a basis for comparison.

  2. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
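
    A minimal Python sketch of the nested-sampling loop described above, with a simple constrained random walk standing in for the HMC/SEM machinery; the toy Gaussian likelihood and all names are illustrative assumptions, not the authors' code.

      import numpy as np

      rng = np.random.default_rng(0)

      def log_likelihood(theta):
          # toy Gaussian likelihood centred at 0.5 (placeholder for a flow-simulator misfit)
          return -0.5 * np.sum((theta - 0.5) ** 2) / 0.01

      def constrained_step(theta, logl_min, n_steps=20, scale=0.05):
          # stand-in for the HMC step: random walk inside the prior [0, 1]^d,
          # accepting only moves whose likelihood exceeds the current threshold
          for _ in range(n_steps):
              prop = theta + scale * rng.standard_normal(theta.shape)
              if np.all((prop >= 0) & (prop <= 1)) and log_likelihood(prop) > logl_min:
                  theta = prop
          return theta

      def nested_sampling(ndim=2, n_live=100, n_iter=1000):
          live = rng.uniform(size=(n_live, ndim))            # live points drawn from the prior
          live_logl = np.array([log_likelihood(t) for t in live])
          log_z = -np.inf                                     # log-evidence accumulator
          log_x = 0.0                                         # log prior mass remaining
          for i in range(n_iter):
              worst = np.argmin(live_logl)
              log_x_new = -(i + 1) / n_live                   # expected NS prior-mass shrinkage
              log_w = np.log(np.exp(log_x) - np.exp(log_x_new))
              log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
              log_x = log_x_new
              # replace the worst live point by a new prior sample with L > L*
              survivors = np.delete(np.arange(n_live), worst)
              seed = live[rng.choice(survivors)].copy()
              live[worst] = constrained_step(seed, live_logl[worst])
              live_logl[worst] = log_likelihood(live[worst])
          return log_z

      print("estimated log-evidence:", nested_sampling())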

  3. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  4. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.

  5. ITOUGH2 sample problems

    International Nuclear Information System (INIS)

    Finsterle, S.

    1997-11-01

    This report contains a collection of ITOUGH2 sample problems. It complements the ITOUGH2 User's Guide [Finsterle, 1997a] and the ITOUGH2 Command Reference [Finsterle, 1997b]. ITOUGH2 is a program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis. It is based on the TOUGH2 simulator for non-isothermal multiphase flow in fractured and porous media [Pruess, 1987, 1991a]. The ITOUGH2 User's Guide [Finsterle, 1997a] describes the inverse modeling framework and provides the theoretical background. The ITOUGH2 Command Reference [Finsterle, 1997b] contains the syntax of all ITOUGH2 commands. This report describes a variety of sample problems solved by ITOUGH2. Table 1.1 contains a short description of the seven sample problems discussed in this report; the TOUGH2 equation-of-state (EOS) module that needs to be linked to ITOUGH2 is also indicated. Each sample problem focuses on a few selected issues shown in Table 1.2. ITOUGH2 input features and the usage of program options are described. Furthermore, interpretations of selected inverse modeling results are given. Problem 1 is a multipart tutorial describing basic ITOUGH2 input files for the main ITOUGH2 application modes; no interpretation of results is given. Problem 2 focuses on non-uniqueness, residual analysis, and correlation structure. Problem 3 illustrates a variety of parameter and observation types, and describes parameter selection strategies. Problem 4 compares the performance of minimization algorithms and discusses model identification. Problem 5 explains how to set up a combined inversion of steady-state and transient data. Problem 6 provides a detailed residual and error analysis. Finally, Problem 7 illustrates how the estimation of model-related parameters may help compensate for errors in that model.
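
    At its core, the parameter-estimation mode described above minimizes weighted residuals between simulated and observed data. A hedged sketch of that generic inversion, with a toy analytic forward model standing in for a TOUGH2 run; the parameter names and the model are assumptions, not ITOUGH2 input syntax.

      import numpy as np
      from scipy.optimize import least_squares

      # toy "forward model": pressure decline as a function of two parameters
      # (log-permeability and porosity); a stand-in for a full simulator run
      rng = np.random.default_rng(1)
      t_obs = np.linspace(1.0, 10.0, 20)
      sigma = 0.01                                     # assumed observation standard deviation

      def forward(params, t):
          log_k, phi = params
          return np.exp(log_k) * np.exp(-t / (10.0 * phi))

      true_params = np.array([1.2, 0.3])
      obs = forward(true_params, t_obs) + sigma * rng.standard_normal(t_obs.size)

      def weighted_residuals(params):
          # (model - data) / standard deviation: the objective such inverse codes minimize
          return (forward(params, t_obs) - obs) / sigma

      fit = least_squares(weighted_residuals, x0=[0.0, 0.5],
                          bounds=([-5.0, 0.05], [5.0, 1.0]))
      print("estimated parameters:", fit.x)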

  6. Calculation of sample problems related to two-phase flow blowdown transients in pressure relief piping of a PWR pressurizer

    International Nuclear Information System (INIS)

    Shin, Y.W.; Wiedermann, A.H.

    1984-02-01

    A method was published, based on the integral method of characteristics, by which the junction and boundary conditions needed in computation of a flow in a piping network can be accurately formulated. The method for formulating the junction and boundary conditions, together with the two-step Lax-Wendroff scheme, is used in a computer program; the program, in turn, is used here in calculating sample problems related to the blowdown transient of a two-phase flow in the piping network downstream of a PWR pressurizer. Independent, nearly exact analytical solutions are also obtained for the sample problems. Comparison of the results obtained by the hybrid numerical technique with the analytical solutions showed generally good agreement. The good numerical accuracy shown by the results of our scheme suggests that the hybrid numerical technique is suitable for both benchmark and design calculations of PWR pressurizer blowdown transients.
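
    The two-step (Richtmyer) Lax-Wendroff scheme mentioned above is easiest to see on a scalar conservation law. A sketch assuming linear advection with periodic boundaries, not the two-phase network equations of the report.

      import numpy as np

      # Richtmyer two-step Lax-Wendroff for u_t + f(u)_x = 0, here f(u) = a*u (linear advection)
      a = 1.0
      f = lambda u: a * u

      nx, L, cfl = 200, 1.0, 0.8
      dx = L / nx
      dt = cfl * dx / abs(a)
      x = np.linspace(0.0, L, nx, endpoint=False)
      u = np.exp(-200.0 * (x - 0.3) ** 2)        # smooth initial pulse

      for _ in range(100):
          up = np.roll(u, -1)                     # u_{i+1} (periodic boundaries for brevity)
          # step 1: provisional values at cell interfaces and the half time step
          u_half = 0.5 * (u + up) - 0.5 * dt / dx * (f(up) - f(u))
          # step 2: conservative update using the half-step fluxes
          u = u - dt / dx * (f(u_half) - f(np.roll(u_half, 1)))

      print("peak value after 100 steps:", u.max())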

  7. Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems

    Science.gov (United States)

    Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros

    2015-04-01

    In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields; hence, it can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the
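
    The contrast between simple random and Latin hypercube sampling can be illustrated in one dimension by mapping stratified uniform samples through the inverse CDF of a lognormal conductivity; a hedged univariate sketch only, not the correlated random-field simulation discussed above.

      import numpy as np
      from scipy.stats import norm, qmc

      # target: lognormal hydraulic conductivity, ln K ~ N(mu, sigma^2) (illustrative values)
      mu, sigma, n = -5.0, 1.0, 50
      rng = np.random.default_rng(0)

      # simple random (SR) sampling
      k_sr = np.exp(mu + sigma * rng.standard_normal(n))

      # Latin hypercube (LH) sampling: one stratum per realization, mapped through the inverse CDF
      u_lh = qmc.LatinHypercube(d=1, seed=0).random(n).ravel()
      k_lh = np.exp(mu + sigma * norm.ppf(u_lh))

      # LH covers the distribution more evenly, so sample statistics stabilize with fewer realizations
      print("SR mean of ln K:", np.log(k_sr).mean(), " LH mean of ln K:", np.log(k_lh).mean())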

  8. 40 CFR 1065.245 - Sample flow meter for batch sampling.

    Science.gov (United States)

    2010-07-01

    40 CFR 1065.245 - Sample flow meter for batch sampling. (a) Application. Use a sample flow meter to determine sample flow ... difference between a diluted exhaust sample flow meter and a dilution air meter to calculate raw exhaust flow ...

  9. Topology optimization of flow problems

    DEFF Research Database (Denmark)

    Gersborg, Allan Roulund

    2007-01-01

    This thesis investigates how to apply topology optimization using the material distribution technique to steady-state viscous incompressible flow problems. The target design applications are fluid devices that are optimized with respect to minimizing the energy loss, characteristic properties...... transport in 2D Stokes flow. Using Stokes flow limits the range of applications; nonetheless, the thesis gives a proof-of-concept for the application of the method within fluid dynamic problems and it remains of interest for the design of microfluidic devices. Furthermore, the thesis contributes...... at the Technical University of Denmark. Large topology optimization problems with 2D and 3D Stokes flow modeling are solved with direct and iterative strategies employing the parallelized Sun Performance Library and the OpenMP parallelization technique, respectively....

  10. Topology optimization of Channel flow problems

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Sigmund, Ole; Haber, R. B.

    2005-01-01

    This paper describes a topology design method for simple two-dimensional flow problems. We consider steady, incompressible laminar viscous flows at low to moderate Reynolds numbers. This makes the flow problem non-linear and hence a non-trivial extension of the work of [Borrvall & Petersson 2002 ...]. Further, the inclusion of inertia effects significantly alters the physics, enabling solutions of new classes of optimization problems, such as velocity-driven switches, that are not addressed by the earlier method. Specifically, we determine optimal layouts of channel flows that extremize a cost function which measures either some local aspect of the velocity field or a global quantity, such as the rate of energy dissipation. We use the finite element method to model the flow, and we solve the optimization problem with a gradient-based math-programming algorithm that is driven by analytical...

  11. A Hybrid Metaheuristic Approach for Minimizing the Total Flow Time in A Flow Shop Sequence Dependent Group Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Antonio Costa

    2014-07-01

    Full Text Available Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure integrating features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of completion times of all jobs. A well-known benchmark of test cases, entailing problems with two, three, and six machines, is employed both for tuning the relevant parameters of the developed procedure and for assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.
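
    The objective minimized above, total flow time, follows from the standard completion-time recursion of a permutation flow shop. A sketch of that computation which ignores the sequence-dependent group setups of the FSDGS problem.

      import numpy as np

      def total_flow_time(perm, p):
          """Sum of completion times of all jobs in a permutation flow shop.
          p[j, m] = processing time of job j on machine m; perm = job order."""
          n_jobs, n_mach = p.shape
          C = np.zeros((len(perm), n_mach))
          for i, job in enumerate(perm):
              for m in range(n_mach):
                  prev_job = C[i - 1, m] if i > 0 else 0.0    # machine m becomes free
                  prev_mach = C[i, m - 1] if m > 0 else 0.0   # job leaves machine m-1
                  C[i, m] = max(prev_job, prev_mach) + p[job, m]
          return C[:, -1].sum()

      p = np.array([[3, 2], [1, 4], [2, 2]])                  # 3 jobs, 2 machines (made-up data)
      print(total_flow_time([0, 1, 2], p), total_flow_time([1, 2, 0], p))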

  12. Inverse feasibility problems of the inverse maximum flow problems

    Indian Academy of Sciences (India)

    Inverse feasibility problems of the inverse maximum flow problems. ADRIAN DEACONU and ELEONOR CIUREA. Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Brasov, Iuliu Maniu st. 50, Brasov, Romania.

  13. Approximation algorithms for the parallel flow shop problem

    NARCIS (Netherlands)

    X. Zhang (Xiandong); S.L. van de Velde (Steef)

    2012-01-01

    We consider the NP-hard problem of scheduling n jobs in m two-stage parallel flow shops so as to minimize the makespan. This problem decomposes into two subproblems: assigning the jobs to parallel flow shops; and scheduling the jobs assigned to the same flow shop by use of Johnson's

  14. 3D Topology optimization of Stokes flow problems

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Dammann, Bernd

    The present talk is concerned with the application of topology optimization to creeping flow problems in 3D. This research is driven by the fact that topology optimization has proven very successful as a tool in academic and industrial design problems. Success stories are reported from such diverse ... of energy efficient devices for 2D Stokes flow. Creeping flow problems are described by the Stokes equations, which model very viscous fluids at macro scales or ordinary fluids at very small scales. The latter gives the motivation for topology optimization problems based on the Stokes equations being a model...

  15. Problems in fluid flow

    International Nuclear Information System (INIS)

    Brasch, D.J.

    1986-01-01

    Chemical and mineral engineering students require texts which give guidance to problem solving to complement their main theoretical texts. This book has a broad coverage of the fluid flow problems which these students may encounter. The fundamental concepts and the application of the behaviour of liquids and gases in unit operation are dealt with. The book is intended to give numerical practice; development of theory is undertaken only when elaboration of treatments available in theoretical texts is absolutely necessary

  16. Amphiphilic mediated sample preparation for micro-flow cytometry

    Science.gov (United States)

    Clague, David S [Livermore, CA; Wheeler, Elizabeth K [Livermore, CA; Lee, Abraham P [Irvine, CA

    2009-03-17

    A flow cytometer includes a flow cell for detecting the sample, an oil phase in the flow cell, a water phase in the flow cell, an oil-water interface between the oil phase and the water phase, a detector for detecting the sample at the oil-water interface, and a hydrophobic unit operatively connected to the sample. The hydrophobic unit is attached to the sample. The sample and the hydrophobic unit are placed in an oil and water combination. The sample is detected at the interface between the oil phase and the water phase.

  17. High throughput analysis of samples in flowing liquid

    Energy Technology Data Exchange (ETDEWEB)

    Ambrose, W. Patrick (Los Alamos, NM); Grace, W. Kevin (Los Alamos, NM); Goodwin, Peter M. (Los Alamos, NM); Jett, James H. (Los Alamos, NM); Orden, Alan Van (Fort Collins, CO); Keller, Richard A. (White Rock, NM)

    2001-01-01

    Apparatus and method enable imaging multiple fluorescent sample particles in a single flow channel. A flow channel defines a flow direction for samples in a flow stream and has a viewing plane perpendicular to the flow direction. A laser beam is formed as a ribbon having a width effective to cover the viewing plane. Imaging optics are arranged to view the viewing plane to form an image of the fluorescent sample particles in the flow stream, and a camera records the image formed by the imaging optics.

  18. Sampling device for withdrawing a representative sample from single and multi-phase flows

    Science.gov (United States)

    Apley, Walter J.; Cliff, William C.; Creer, James M.

    1984-01-01

    A fluid stream sampling device has been developed for the purpose of obtaining a representative sample from a single or multi-phase fluid flow. This objective is carried out by means of a probe which may be inserted into the fluid stream. Individual samples are withdrawn from the fluid flow by sampling ports with particular spacings, and the sampling ports are coupled to various analytical systems for characterization of the physical, thermal, and chemical properties of the fluid flow as a whole and also individually.

  19. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    Science.gov (United States)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs Polynomial Chaos Expansion (PCE) to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on unsaturated flow numerical cases. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
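
    For contrast with the PCKF variants above, the stochastic EnKF analysis step they are benchmarked against fits in a few lines. A schematic sketch with a linear observation operator and made-up numbers, not the unsaturated-flow model of the abstract.

      import numpy as np

      def enkf_update(ensemble, obs, obs_std, H):
          """Stochastic EnKF analysis step.
          ensemble: (n_ens, n_state) prior states; H: (n_obs, n_state) observation operator."""
          rng = np.random.default_rng(0)
          n_ens = ensemble.shape[0]
          Y = ensemble @ H.T                                    # predicted observations
          A = ensemble - ensemble.mean(axis=0)                  # state anomalies
          D = Y - Y.mean(axis=0)                                # observation anomalies
          P_yy = D.T @ D / (n_ens - 1) + np.diag(obs_std**2)    # innovation covariance
          P_xy = A.T @ D / (n_ens - 1)                          # state-observation covariance
          K = P_xy @ np.linalg.inv(P_yy)                        # Kalman gain
          perturbed = obs + obs_std * rng.standard_normal((n_ens, obs.size))
          return ensemble + (perturbed - Y) @ K.T               # updated ensemble

      ens = np.random.default_rng(1).standard_normal((100, 3))  # toy prior ensemble
      H = np.array([[1.0, 0.0, 0.0]])                           # observe the first state only
      print(enkf_update(ens, np.array([0.8]), np.array([0.1]), H).mean(axis=0))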

  20. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to different models of different levels of complexity. In this work, we report the first successful application of nested sampling for calibration of several nonlinear subsurface flow problems. The estimated Bayesian evidence from the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
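
    Once nested sampling has produced a log-evidence for each candidate parameterization, the prior-model weighting described above is a one-line normalization; the evidence values below are made up for illustration.

      import numpy as np

      # log-evidences from nested sampling runs of two model parameterizations (made-up values)
      log_z = np.array([-142.3, -139.8])           # model A (simpler), model B (more parameters)
      prior = np.array([0.5, 0.5])                 # equal prior model probabilities

      log_post = np.log(prior) + log_z
      weights = np.exp(log_post - np.logaddexp.reduce(log_post))
      print("posterior model weights:", weights)   # Occam's razor acts through the evidence itself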

  1. A note on Fenchel cuts for the single-node flow problem

    DEFF Research Database (Denmark)

    Klose, Andreas

    The single-node flow problem, which is also known as the single-sink fixed-charge transportation problem, consists in finding a minimum cost flow from a number of nodes to a single sink. The flow cost comprises an amount proportional to the quantity shipped as well as a fixed charge. In this note, some structural properties of Fenchel cutting planes for this problem are described. Such cuts might then be applied for solving, e.g., fixed-charge transportation problems and more general fixed-charge network flow problems.

  2. Dynamic Flow Management Problems in Air Transportation

    Science.gov (United States)

    Patterson, Sarah Stock

    1997-01-01

    In 1995, over six hundred thousand licensed pilots flew nearly thirty-five million flights into over eighteen thousand U.S. airports, logging more than 519 billion passenger miles. Since demand for air travel has increased by more than 50% in the last decade while capacity has stagnated, congestion is a problem of undeniable practical significance. In this thesis, we will develop optimization techniques that reduce the impact of congestion on the national airspace. We start by determining the optimal release times for flights into the airspace and the optimal speed adjustment while airborne, taking into account the capacitated airspace. This is called the Air Traffic Flow Management Problem (TFMP). We address the complexity, showing that it is NP-hard. We build an integer programming formulation that is quite strong, as some of the proposed inequalities are facet defining for the convex hull of solutions. For practical problems, the solutions of the LP relaxation of the TFMP are very often integral. In essence, we reduce the problem to efficiently solving large scale linear programming problems. Thus, the computation times are reasonably small for large scale, practical problems involving thousands of flights. Next, we address the problem of determining how to reroute aircraft in the airspace system when faced with dynamically changing weather conditions. This is called the Air Traffic Flow Management Rerouting Problem (TFMRP). We present an integrated mathematical programming approach for the TFMRP, which utilizes several methodologies, in order to minimize delay costs. In order to address the high dimensionality, we present an aggregate model, in which we formulate the TFMRP as a multicommodity, integer, dynamic network flow problem with certain side constraints. Using Lagrangian relaxation, we generate aggregate flows that are decomposed into a collection of flight paths using a randomized rounding heuristic. This collection of paths is used in a packing integer

  3. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1.125.000 elements in 2D and 128.000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  4. Advances in multiphase flow and related problems

    International Nuclear Information System (INIS)

    Papanicolaou, G.

    1986-01-01

    Proceedings of a workshop in multiphase flow held at Leesburg, Va. in June 1986 representing a cross-disciplinary approach to theoretical as well as computational problems in multiphase flow. Topics include composites, phase transitions, fluid-particle systems, and bubbly liquids

  5. Generalized Riemann problem for reactive flows

    International Nuclear Information System (INIS)

    Ben-Artzi, M.

    1989-01-01

    A generalized Riemann problem is introduced for the equations of reactive non-viscous compressible flow in one space dimension. Initial data are assumed to be linearly distributed on both sides of a jump discontinuity. The resolution of the singularity is studied and the first-order variation (in time) of flow variables is given in exact form. copyright 1989 Academic Press, Inc

  6. Heuristics for no-wait flow shop scheduling problem

    Directory of Open Access Journals (Sweden)

    Kewal Krishan Nailwal

    2016-09-01

    Full Text Available No-wait flow shop scheduling refers to the continuous flow of jobs through different machines. A job, once started, must be processed on consecutive machines without waiting; this situation occurs when there is no intermediate storage between the processing of jobs on two consecutive machines. The no-wait problem with the objective of minimizing makespan in flow shop scheduling is NP-hard; therefore, heuristic algorithms are key to obtaining optimal or near-optimal solutions in a simple manner. The paper describes two heuristics, one constructive and an improvement heuristic obtained by modifying the constructive one, for sequencing n jobs through m machines in a flow shop under the no-wait constraint with the objective of minimizing makespan. The efficiency of the proposed heuristic algorithms is tested on 120 Taillard benchmark problems from the literature against the NEH heuristic under no-wait and the MNEH heuristic for the no-wait flow shop problem. The improvement heuristic outperforms all heuristics on the Taillard instances, improving the results of NEH by 27.85%, MNEH by 22.56% and those of the proposed constructive heuristic by 24.68%. To explain the computational process of the proposed algorithm, numerical illustrations are also given in the paper. Statistical tests of significance are done in order to draw conclusions.
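
    The NEH heuristic referred to above builds a sequence by inserting jobs, ordered by decreasing total processing time, into the best position of the partial schedule. A sketch for the standard permutation flow shop makespan; the no-wait variants in the paper change the objective evaluation, not the insertion logic.

      import numpy as np

      def makespan(perm, p):
          # completion-time recursion for a permutation flow shop
          C = np.zeros(p.shape[1])
          for job in perm:
              for m in range(p.shape[1]):
                  C[m] = max(C[m], C[m - 1] if m > 0 else 0.0) + p[job, m]
          return C[-1]

      def neh(p):
          # NEH: order jobs by decreasing total processing time, then insert each job
          # into the position of the partial sequence that gives the smallest makespan
          order = np.argsort(-p.sum(axis=1))
          seq = []
          for job in order:
              candidates = [seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)]
              seq = min(candidates, key=lambda s: makespan(s, p))
          return seq

      p = np.array([[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]])   # 4 jobs, 3 machines (made-up)
      best = neh(p)
      print(best, makespan(best, p))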

  7. Scalable Newton-Krylov solver for very large power flow problems

    NARCIS (Netherlands)

    Idema, R.; Lahaye, D.J.P.; Vuik, C.; Van der Sluis, L.

    2010-01-01

    The power flow problem is generally solved by the Newton-Raphson method with a sparse direct solver for the linear system of equations in each iteration. While this works fine for small power flow problems, we will show that for very large problems the direct solver is very slow and we present
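
    The Newton-Krylov alternative discussed above replaces the sparse direct solve in each Newton iteration with an iterative Krylov solve. A hedged sketch on a toy two-equation mismatch system using SciPy's Jacobian-free newton_krylov; the actual power flow equations are not reproduced here.

      import numpy as np
      from scipy.optimize import newton_krylov

      # toy nonlinear "mismatch" system F(x) = 0 standing in for the power flow equations;
      # a Jacobian-free Newton-Krylov method solves the linearized system with GMRES internally
      def mismatch(x):
          return np.array([
              x[0] ** 2 + x[1] - 1.2,
              x[0] - x[1] ** 2 + 0.3,
          ])

      solution = newton_krylov(mismatch, np.array([1.0, 1.0]), method="lgmres", f_tol=1e-8)
      print("solution:", solution, "residual:", mismatch(solution))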

  8. Sampling problems for randomly broken sticks

    Energy Technology Data Exchange (ETDEWEB)

    Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)

    2003-04-11

    Consider the random partitioning model of a population (represented by a stick of length 1) into n species (fragments) with identically distributed random weights (sizes). Upon ranking the fragments' weights according to ascending sizes, let S_{m:n} be the size of the mth smallest fragment. Assume that some observer is sampling such populations as follows: drop at random k points (the sample size) onto this stick and record the corresponding numbers of visited fragments. We shall investigate the following sampling problems: (1) What is the sample size if the sampling is carried out until the first visit of the smallest fragment (size S_{1:n})? (2) For a given sample size, have all the fragments of the stick been visited at least once or not? This question is related to Feller's random coupon collector problem. (3) In what order are new fragments being discovered, and what is the random number of samples separating the discovery of consecutive new fragments until exhaustion of the list? For this problem, the distribution of the size-biased permutation of the species' weights, i.e., the sequence of their weights in their order of appearance, is needed and studied.
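
    Question (2) above is easy to explore by simulation. A sketch for one common instance of the model, a uniform Dirichlet partition obtained from n-1 uniform cut points; this choice of fragment-weight distribution is an assumption made only for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      def break_stick(n):
          # partition [0, 1] into n fragments using n-1 uniform random cut points
          cuts = np.sort(rng.uniform(size=n - 1))
          return np.diff(np.concatenate(([0.0], cuts, [1.0])))

      def all_fragments_visited(n, k):
          # for sample size k, have all n fragments been seen at least once?
          sizes = break_stick(n)
          edges = np.concatenate(([0.0], np.cumsum(sizes)))
          hits = np.searchsorted(edges, rng.uniform(size=k), side="right") - 1
          return np.unique(hits).size == n

      prob = np.mean([all_fragments_visited(10, 60) for _ in range(2000)])
      print("P(all 10 fragments visited with 60 samples) ~", prob)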

  9. On-line sample processing methods in flow analysis

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald

    2008-01-01

    In this chapter, the state of the art of flow injection and related approaches for automation and miniaturization of sample processing, regardless of the aggregate state of the sample medium, is overviewed. The potential of the various generations of flow injection for implementation of in

  10. Biased sampling, over-identified parameter problems and beyond

    CERN Document Server

    Qin, Jing

    2017-01-01

    This book is devoted to biased sampling problems (also called choice-based sampling in Econometrics parlance) and over-identified parameter estimation problems. Biased sampling problems appear in many areas of research, including Medicine, Epidemiology and Public Health, the Social Sciences and Economics. The book addresses a range of important topics, including case and control studies, causal inference, missing data problems, meta-analysis, renewal process and length biased sampling problems, capture and recapture problems, case cohort studies, exponential tilting genetic mixture models, etc. The goal of this book is to make it easier for Ph.D. students and new researchers to get started in this research area. It will be of interest to all those who work in the health, biological, social and physical sciences, as well as those who are interested in survey methodology and other areas of statistical science, among others.

  11. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    Science.gov (United States)

    Nawalany, Marek; Sinicyn, Grzegorz

    2015-09-01

    An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
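
    For the sample-scale to block-scale transition discussed above, the classical Wiener bounds give a quick numerical feel: the harmonic mean of the sample-scale conductivities bounds the block value from below, the arithmetic mean from above, and the geometric mean is a common estimate for two-dimensional isotropic media. A sketch with made-up values.

      import numpy as np

      # sample-scale hydraulic conductivities within one numerical block (made-up values, m/s)
      k = np.array([1e-6, 5e-6, 2e-5, 8e-6, 3e-6])

      k_harmonic = k.size / np.sum(1.0 / k)      # lower bound; exact for flow normal to layers
      k_arithmetic = k.mean()                    # upper bound; exact for flow parallel to layers
      k_geometric = np.exp(np.log(k).mean())     # common block-scale estimate for 2-D isotropic media

      print(f"harmonic {k_harmonic:.2e} <= block K <= arithmetic {k_arithmetic:.2e}; "
            f"geometric {k_geometric:.2e}")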

  12. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    Directory of Open Access Journals (Sweden)

    Nawalany Marek

    2015-09-01

    Full Text Available An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale – scale of pores, meso-scale – scale of laboratory sample, macro-scale – scale of typical blocks in numerical models of groundwater flow, local-scale – scale of an aquifer/aquitard and regional-scale – scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.

  13. Lightweight link dimensioning using sFlow sampling

    DEFF Research Database (Denmark)

    de Oliviera Schmidt, Ricardo; Sadre, Ramin; Sperotto, Anna

    2013-01-01

    ... not be trivial in high-speed links. Aiming at scalability, operators often deploy packet sampling for monitoring, but little is known about how it affects link dimensioning. In this paper we assess the feasibility of lightweight link dimensioning using sFlow, which is a widely-deployed traffic monitoring tool. We implement the sFlow sampling algorithm and use a previously proposed and validated dimensioning formula that needs traffic variance. We validate our approach using packet captures from real networks. Results show that the proposed procedure is successful for a range of sampling rates and that, due to the randomness of the sampling algorithm, the error introduced by scaling the traffic variance yields more conservative results that cope with short-term traffic fluctuations.

  14. Characteristics-based modelling of flow problems

    International Nuclear Information System (INIS)

    Saarinen, M.

    1994-02-01

    The method of characteristics is an exact way to proceed to the solution of hyperbolic partial differential equations. The numerical solutions, however, are obtained in the fixed computational grid where interpolations of values between the mesh points cause numerical errors. The Piecewise Linear Interpolation Method, PLIM, the utilization of which is based on the method of characteristics, has been developed to overcome these deficiencies. The thesis concentrates on the computer simulation of the two-phase flow. The main topics studied are: (1) the PLIM method has been applied to study the validity of the numerical scheme through solving various flow problems to achieve knowledge for the further development of the method, (2) the mathematical and physical validity and applicability of the two-phase flow equations based on the SFAV (Separation of the two-phase Flow According to Velocities) approach has been studied, and (3) The SFAV approach has been further developed for particular cases such as stratified horizontal two-phase flow. (63 refs., 4 figs.)

  15. Using a genetic algorithm to solve fluid-flow problems

    International Nuclear Information System (INIS)

    Pryor, R.J.

    1990-01-01

    Genetic algorithms are based on the mechanics of the natural selection and natural genetics processes. These algorithms are finding increasing application to a wide variety of engineering optimization and machine learning problems. In this paper, the authors demonstrate the use of a genetic algorithm to solve fluid flow problems. Specifically, the authors use the algorithm to solve the one-dimensional flow equations for a pipe
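
    As a concrete illustration of the idea, a small genetic algorithm can drive the residual of a simple flow relation to zero. The sketch below finds the Darcy friction factor from the Colebrook equation; this is not the authors' one-dimensional pipe-flow equations, but it exercises the same selection, crossover and mutation mechanics.

      import numpy as np

      rng = np.random.default_rng(0)

      Re, rel_rough = 1e5, 1e-4     # Reynolds number and relative roughness e/D (illustrative)

      def residual(f):
          # Colebrook-White residual; the GA searches for f that drives this to zero
          return 1.0 / np.sqrt(f) + 2.0 * np.log10(rel_rough / 3.7 + 2.51 / (Re * np.sqrt(f)))

      def genetic_search(pop_size=40, generations=60, lo=0.005, hi=0.1):
          pop = rng.uniform(lo, hi, pop_size)
          for _ in range(generations):
              fitness = -np.abs(residual(pop))                        # higher is better
              parents = pop[np.argsort(fitness)][-pop_size // 2:]     # selection: keep best half
              kids = []
              while len(kids) < pop_size - parents.size:
                  a, b = rng.choice(parents, 2)
                  child = 0.5 * (a + b)                               # crossover: blend parents
                  child += 0.002 * rng.standard_normal()              # mutation
                  kids.append(np.clip(child, lo, hi))
              pop = np.concatenate([parents, kids])
          return pop[np.argmin(np.abs(residual(pop)))]

      f = genetic_search()
      print("friction factor:", f, "residual:", residual(f))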

  16. Isospectral Flows for the Inhomogeneous String Density Problem

    Science.gov (United States)

    Górski, Andrzej Z.; Szmigielski, Jacek

    2018-02-01

    We derive isospectral flows of the mass density in the string boundary value problem corresponding to general boundary conditions. In particular, we show that certain class of rational flows produces in a suitable limit all flows generated by polynomials in negative powers of the spectral parameter. We illustrate the theory with concrete examples of isospectral flows of discrete mass densities which we prove to be Hamiltonian and for which we provide explicit solutions of equations of motion in terms of Stieltjes continued fractions and Hankel determinants.

  17. Sample handling for kinetics and molecular assembly in flow cytometry

    Energy Technology Data Exchange (ETDEWEB)

    Sklar, L.A. [Los Alamos National Lab., NM (United States). National Flow Cytometry Resource; Univ. of New Mexico, Albuquerque, NM (United States). School of Medicine]; Seamer, L.C.; Kuckuck, F.; Prossnitz, E.; Edwards, B. [Univ. of New Mexico, Albuquerque, NM (United States). School of Medicine]; Posner, G. [Northern Arizona Univ., Flagstaff, AZ (United States). Dept. of Chemistry]

    1998-07-01

    Flow cytometry discriminates particle associated fluorescence from the fluorescence of the surrounding medium. It permits assemblies of macromolecular complexes on beads or cells to be detected in real-time with precision and specificity. The authors have investigated two types of robust sample handling systems which provide sub-second resolution and high throughput: (1) mixers which use stepper-motor driven syringes to initiate chemical reactions in msec time frames; and (2) flow injection controllers with valves and automated syringes used in chemical process control. In the former system, the authors used fast valves to overcome the disparity between mixing 100 µl of sample in 100 msecs and delivering sample to a flow cytometer at 1 µl/sec. Particles were detected within 100 msec after mixing, but turbulence was created which lasted for 1 sec after injection of the sample into the flow cytometer. They used optical criteria to discriminate particles which were out of alignment due to the turbulent flow. Complex sample handling protocols involving multiple mixing steps and sample dilution have also been achieved. With the latter system they were able to automate sample handling and delivery with intervals of a few seconds. The authors used a fluidic approach to defeat turbulence caused by sample introduction. By controlling both sheath and sample with individual syringes, the period of turbulence was reduced to approximately 200 msecs. Automated sample handling and sub-second resolution should permit broad analytical and diagnostic applications of flow cytometry.

  18. A finite element method for flow problems in blast loading

    International Nuclear Information System (INIS)

    Forestier, A.; Lepareux, M.

    1984-06-01

    This paper presents a numerical method for fast dynamic problems in flow transient situations such as those in nuclear plants. A finite element formulation has been chosen; it is described by a preprocessor in the CASTEM system: the GIBI code. For these typical flow problems, an A.L.E. formulation of the physical equations is used. Several applications are presented: the well known shock tube problem, the same problem in the 2D case, and a final application to hydrogen detonation.

  19. Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control

    Science.gov (United States)

    Cary, Robert E.

    2015-12-08

    Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.

  20. Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control

    Energy Technology Data Exchange (ETDEWEB)

    Cary, Robert B.

    2018-04-17

    Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.

  1. Problems of mixed convection flow regime map in a vertical cylinder

    International Nuclear Information System (INIS)

    Kang, Gyeong Uk; Chung, Bum Jin

    2012-01-01

    One of the technical issues in the development of the VHTR is mixed convection, the regime of heat transfer that occurs when the driving forces of both forced and natural convection are of comparable orders of magnitude. In vertical internal flows, the buoyancy force acts upward only, but forced flows can move either upward or downward. Thus, there are two types of mixed convection flows, depending on the direction of the forced flow. When the directions of the forced flow and buoyancy are the same, the flow is a buoyancy-aided flow; when they are opposite, the flow is a buoyancy-opposed flow. In laminar flows, buoyancy-aided flow shows enhanced heat transfer compared to pure forced convection, and buoyancy-opposed flow shows impaired heat transfer, due to the flow velocity being affected by the buoyancy forces. In turbulent flows, however, buoyancy-opposed flow shows enhanced heat transfer due to increased turbulence production, whereas buoyancy-aided flow shows impaired heat transfer at low buoyancy forces; as the buoyancy increases, the heat transfer recovers, and at further increases of the buoyancy forces the heat transfer is enhanced. It is of primary interest to classify which convection regime is dominant. The method most used to distinguish between forced, mixed and natural convection has been to refer to the classical flow regime map suggested by Metais and Eckert. During the course of fundamental literature studies on this topic, it was found that there are some problems with the flow regime map in a vertical cylinder. This paper discusses problems identified through reviewing the papers behind the classical flow regime map. We have tried to reproduce the flow regime map independently using the data obtained from the literature, compared it with the classical flow regime map and, finally, discussed the problems on this topic.

  2. Reference Priors For Non-Normal Two-Sample Problems

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo, 1992) is applied to location-scale models with any regular sampling density. A number of two-sample problems are analyzed in this general context, extending the difference, ratio and product of Normal means problems outside Normality, while explicitly

  3. Two- and three-index formulations of the minimum cost multicommodity k-splittable flow problem

    DEFF Research Database (Denmark)

    Gamst, Mette; Jensen, Peter Neergaard; Pisinger, David

    2010-01-01

    The multicommodity flow problem (MCFP) considers the efficient routing of commodities from their origins to their destinations subject to capacity restrictions and edge costs. Baier et al. [G. Baier, E. Köhler, M. Skutella, On the k-splittable flow problem, in: 10th Annual European Symposium on Algorithms, 2002, 101–113] introduced the maximum flow multicommodity k-splittable flow problem (MCkFP), where each commodity may use at most k paths between its origin and its destination. This paper studies the NP-hard minimum cost multicommodity k-splittable flow problem (MCMCkFP), in which a given flow of commodities has to be satisfied at the lowest possible cost. The problem has applications in transportation problems where a number of commodities must be routed, using a limited number of distinct transportation units for each commodity. Based on a three-index formulation by Truffot et al. [J. Truffot, C...

  4. Numerical solution of pipe flow problems for generalized Newtonian fluids

    International Nuclear Information System (INIS)

    Samuelsson, K.

    1993-01-01

    In this work we study the stationary laminar flow of incompressible generalized Newtonian fluids in a pipe with constant arbitrary cross-section. The resulting nonlinear boundary value problems can be written in a variational formulation and solved using finite elements and the augmented Lagrangian method. The solution of the boundary value problem is obtained by finding a saddle point of the augmented Lagrangian. In the algorithm the nonlinear part of the equations is treated locally and the solution is obtained by iteration between this nonlinear problem and a global linear problem. For the solution of the linear problem we use the SSOR preconditioned conjugate gradient method. The approximating problem is solved on a sequence of adaptively refined grids. A scheme for adjusting the value of the crucial penalization parameter of the augmented Lagrangian is proposed. Applications to pipe flow and a problem from the theory of capacities are given. (author) (34 refs.)

  5. Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling

    Science.gov (United States)

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the present discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220

  6. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    Science.gov (United States)

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the present discrete bat algorithm for the optimal permutation flow shop scheduling problem.

  7. Marriage in Honey Bees Optimization Algorithm for Flow-shop Problems

    Directory of Open Access Journals (Sweden)

    Pedro PALOMINOS

    2012-01-01

    Full Text Available The objective of this work is to make a comparative study of the Marriage in Honeybees Optimization (MBO) metaheuristic for flow-shop scheduling problems. This paper is focused on the design possibilities of the mating flight space shared by queens and drones. The proposed algorithm uses a 2-dimensional torus as an explicit mating space instead of the simulated annealing one in the original MBO. After testing different alternatives with benchmark datasets, the results show that the modeled and implemented metaheuristic is effective to solve flow-shop type problems, providing a new approach to solve other NP-Hard problems.

  8. Solving Minimum Cost Multi-Commodity Network Flow Problem ...

    African Journals Online (AJOL)

    ADOWIE PERE

    2018-03-23

    Mar 23, 2018 ... network-based modeling framework for integrated fixed and mobile ... Minimum Cost Network Flow Problem (MCNFP) and some ..... Unmanned Aerial Vehicle Routing in Traffic. Incident ... Ph.D. Thesis, Dept. of Surveying &.

  9. Wood flow problems in the Swedish forestry

    Energy Technology Data Exchange (ETDEWEB)

    Carlsson, Dick [Forestry Research Inst. of Sweden, Uppsala (Sweden); Roennqvist, M. [Linkoeping Univ. (Sweden). Dept. of Mathematics

    1998-12-31

    In this paper we give an overview of the wood flow in Sweden, including a description of its organization and planning. Based on that, we describe a number of applications or problem areas in the wood-flow chain that are currently considered by the Swedish forest companies to be important and to have potential for improving overall operations. We have focused on applications in short-term or operative planning. We do not give any final results, as much of the development is currently ongoing or still in a planning phase. Instead we describe what kind of models and decision support systems could be applied in order to improve co-operation within, and integration of, the wood-flow chain. 13 refs, 20 figs, 1 tab

  10. Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems

    International Nuclear Information System (INIS)

    BAER, THOMAS A.; SACKINGER, PHILIP A.; SUBIA, SAMUEL R.

    1999-01-01

    Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a "pseudo-solid" mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Also discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to equally distribute computational work among the processors of an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations will be demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three-dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speedups for fixed problem size, a class of problems of immediate practical importance.

  11. Literature Review on the Hybrid Flow Shop Scheduling Problem with Unrelated Parallel Machines

    Directory of Open Access Journals (Sweden)

    Eliana Marcela Peña Tibaduiza

    2017-01-01

    Full Text Available Context: The hybrid flow shop problem with unrelated parallel machines has been less studied in academia compared to the hybrid flow shop with identical processors. For this reason, there are few reports about applications of this problem in industry. Method: A literature review of the state of the art on the flow-shop scheduling problem was conducted by collecting and analyzing academic papers from several scientific databases. To this end, a search query was constructed using keywords defining the problem and checking the inclusion of unrelated parallel machines in such definitions; as a result, 50 papers were finally selected for this study. Results: A classification of the problem according to the characteristics of the production system was performed; commonly used solution methods, constraints and objective functions are also presented. Conclusions: An increasing trend is observed in studies of flow shops with multiple stages, but few are based on industrial case studies.

  12. Simulating quantum correlations as a distributed sampling problem

    International Nuclear Information System (INIS)

    Degorre, Julien; Laplante, Sophie; Roland, Jeremie

    2005-01-01

    It is known that quantum correlations exhibited by a maximally entangled qubit pair can be simulated with the help of shared randomness, supplemented with additional resources, such as communication, postselection or nonlocal boxes. For instance, in the case of projective measurements, it is possible to solve this problem with protocols using one bit of communication or making one use of a nonlocal box. We show that this problem reduces to a distributed sampling problem. We give a new method to obtain samples from a biased distribution, starting with shared random variables following a uniform distribution, and use it to build distributed sampling protocols. This approach allows us to derive, in a simpler and unified way, many existing protocols for projective measurements, and to extend them to positive operator valued measurements. Moreover, this approach naturally leads to a local hidden variable model for Werner states.

  13. Strongly coupled single-phase flow problems: Effects of density variation, hydrodynamic dispersion, and first order decay

    Energy Technology Data Exchange (ETDEWEB)

    Oldenburg, C.M.; Pruess, K. [Lawrence Berkeley Laboratory, Berkeley, CA (United States)

    1995-03-01

    We have developed TOUGH2 modules for strongly coupled flow and transport that include full hydrodynamic dispersion. T2DM models two-dimensional flow and transport in systems with variable salinity, while T2DMR includes radionuclide transport with first-order decay of a parent-daughter chain of radionuclide components in variable salinity systems. T2DM has been applied to a variety of coupled flow problems, including the pure solutal convection problem of Elder and the mixed free and forced convection salt-dome flow problem. In the Elder and salt-dome flow problems, density changes of up to 20% caused by brine concentration variations lead to strong coupling between the velocity and brine concentration fields. T2DM efficiently calculates flow and transport for these problems. We have applied T2DMR to the dispersive transport and decay of radionuclide tracers in flow fields with permeability heterogeneities and recirculating flows. Coupling in these problems occurs through velocity-dependent hydrodynamic dispersion. Our results show that the maximum daughter species concentration may occur fully within a recirculating or low-velocity region. In all of the problems, we observe very efficient handling of the strongly coupled flow and transport processes.

  14. Dual plane problems for creeping flow of power-law incompressible medium

    Directory of Open Access Journals (Sweden)

    Dmitriy S. Petukhov

    2016-09-01

    Full Text Available In this paper, we consider the class of solutions for a creeping plane flow of an incompressible medium with power-law rheology, which are written in the form of the product of an arbitrary power of the radial coordinate and an arbitrary function of the angular coordinate of the polar coordinate system covering the plane. This class of solutions represents the asymptotics of fields in the vicinity of singular points in the domain occupied by the examined medium. We have ascertained the duality of two problems for a plane with a wedge-shaped notch, at whose boundaries in one of the problems the components of the surface force vector vanish, while in the other the vanishing components are those of the velocity vector. We have investigated the asymptotics and eigensolutions of the dual nonlinear eigenvalue problems in relation to the rheological exponent and the opening angle of the notch, for the branch associated with the eigenvalue of the Hutchinson–Rice–Rosengren problem known from the problem of stress distribution over a notched plane for a power-law medium. In the context of the dual problem, we have determined the velocity distribution in the flow of a power-law medium at the vertex of a rigid wedge. We have also found another two eigenvalues, one of which was determined by V. V. Sokolovsky for the problem of power-law fluid flow in a convergent channel.

  15. Distribution-Preserving Stratified Sampling for Learning Problems.

    Science.gov (United States)

    Cervellera, Cristiano; Maccio, Danilo

    2017-06-09

    The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
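
    A minimal sketch of the underlying idea, stratified sampling over a recursive binary partition of the input space, is shown below. The fixed partition depth and the proportional allocation rule are simplifications for illustration; they are not the exact algorithm or error bounds of the paper.

    # Stratified sampling over a recursive binary partition of one input dimension.
    # Simplified sketch of the idea, not the algorithm proposed in the paper.
    import random

    def stratified_sample(data, n_out, depth=3, dim=0):
        """Recursively split 'data' at the median of coordinate 'dim' and draw
        from each stratum proportionally to its size."""
        if n_out <= 0:
            return []
        if depth == 0 or len(data) <= 1:
            return random.sample(data, min(n_out, len(data)))
        data = sorted(data, key=lambda p: p[dim])
        mid = len(data) // 2
        left, right = data[:mid], data[mid:]
        n_left = round(n_out * len(left) / len(data))
        return (stratified_sample(left, n_left, depth - 1, dim)
                + stratified_sample(right, n_out - n_left, depth - 1, dim))

    points = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(1000)]
    subset = stratified_sample(points, 50)
    print(len(subset), subset[:3])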

  16. Contribution of Fuzzy Minimal Cost Flow Problem by Possibility Programming

    OpenAIRE

    S. Fanati Rashidi; A. A. Noora

    2010-01-01

    Using the concept of possibility proposed by Zadeh, Luhandjula ([4,8]) and Buckley ([1]) have proposed possibility programming. The formulation of Buckley results in nonlinear programming problems. Negi [6] re-formulated the approach of Buckley by the use of trapezoidal fuzzy numbers and reduced the problem to a fuzzy linear programming problem. Shih and Lee ([7]) used the Negi approach to solve a minimum cost flow problem with fuzzy costs and upper and lower bounds. ...

  17. A point implicit time integration technique for slow transient flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Kadioglu, Samet Y., E-mail: kadioglu@yildiz.edu.tr [Department of Mathematical Engineering, Yildiz Technical University, 34210 Davutpasa-Esenler, Istanbul (Turkey); Berry, Ray A., E-mail: ray.berry@inl.gov [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States); Martineau, Richard C. [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States)

    2015-05-15

    Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, and the rest of the information, related to the same or other variables, is handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.

  18. A point implicit time integration technique for slow transient flow problems

    International Nuclear Information System (INIS)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-01-01

    Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, and the rest of the information, related to the same or other variables, is handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
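
    A scalar illustration of the point implicit idea, not the implementation described in the abstract: for du/dt = -a*u + f(t), the stiff linear term is taken at the new time level and the forcing at the old one, so each step requires only a division rather than an iteration, and the update remains stable for arbitrarily large time steps.

    # Point implicit update for du/dt = -a*u + f(t): the term -a*u is treated at the
    # new time level, f at the old one, so no nonlinear iteration is needed.
    # Scalar illustration only, with hypothetical coefficients.
    import math

    a = 50.0                      # stiff decay coefficient
    f = lambda t: math.sin(t)     # slowly varying forcing
    dt, t, u = 0.5, 0.0, 1.0      # time step much larger than 1/a

    for _ in range(20):
        u = (u + dt * f(t)) / (1.0 + dt * a)   # unconditionally stable update
        t += dt
        print(f"t={t:4.1f}  u={u: .5f}  quasi-steady f(t)/a={f(t)/a: .5f}")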

  19. Automated Blood Sample Preparation Unit (ABSPU) for Portable Microfluidic Flow Cytometry.

    Science.gov (United States)

    Chaturvedi, Akhil; Gorthi, Sai Siva

    2017-02-01

    Portable microfluidic diagnostic devices, including flow cytometers, are being developed for point-of-care settings, especially in conjunction with inexpensive imaging devices such as mobile phone cameras. However, two pervasive drawbacks of these have been the lack of automated sample preparation processes and cells settling out of sample suspensions, leading to inaccurate results. We report an automated blood sample preparation unit (ABSPU) to prevent blood samples from settling in a reservoir during loading of samples in flow cytometers. This apparatus automates the preanalytical steps of dilution and staining of blood cells prior to microfluidic loading. It employs an assembly with a miniature vibration motor to drive turbulence in a sample reservoir. To validate performance of this system, we present experimental evidence demonstrating prevention of blood cell settling, cell integrity, and staining of cells prior to flow cytometric analysis. This setup is further integrated with a microfluidic imaging flow cytometer to investigate cell count variability. With no need for prior sample preparation, a drop of whole blood can be directly introduced to the setup without premixing with buffers manually. Our results show that integration of this assembly with microfluidic analysis provides a competent automation tool for low-cost point-of-care blood-based diagnostics.

  20. Flow-shop scheduling problem under uncertainties: Review and trends

    Directory of Open Access Journals (Sweden)

    Eliana María González-Neira

    2017-03-01

    Full Text Available Among the different tasks in production logistics, job scheduling is one of the most important at the operational decision-making level to enable organizations to achieve competitiveness. Scheduling consists in the allocation of limited resources to activities over time in order to achieve one or more optimization objectives. Flow-shop (FS) scheduling problems encompass the sequencing processes in environments in which the activities or operations are performed in a serial flow. This type of configuration includes assembly lines and the chemical, electronic, food, and metallurgical industries, among others. Scheduling has been mostly investigated for the deterministic cases, in which all parameters are known in advance and do not vary over time. Nevertheless, in real-world situations, events are frequently subject to uncertainties that can affect the decision-making process. Thus, it is important to study scheduling and sequencing activities under uncertainties since they can cause infeasibilities and disturbances. The purpose of this paper is to provide a general overview of the FS scheduling problem under uncertainties and its role in production logistics and to draw up opportunities for further research. To this end, 100 papers about FS and flexible flow-shop scheduling problems published from 2001 to October 2016 were analyzed and classified. Trends in the reviewed literature are presented and finally some research opportunities in the field are proposed.

  1. Flow-shop scheduling problem under uncertainties: Review and trends

    OpenAIRE

    Eliana María González-Neira; Jairo R. Montoya-Torres; David Barrera

    2017-01-01

    Among the different tasks in production logistics, job scheduling is one of the most important at the operational decision-making level to enable organizations to achieve competitiveness. Scheduling consists in the allocation of limited resources to activities over time in order to achieve one or more optimization objectives. Flow-shop (FS) scheduling problems encompass the sequencing processes in environments in which the activities or operations are performed in a serial flow. This type of co...

  2. Exact partial solution to the compressible flow problems of jet formation and penetration in plane, steady flow

    International Nuclear Information System (INIS)

    Karpp, R.R.

    1984-01-01

    The partial solution of the problem of the symmetric impact of two compressible fluid streams is derived. The plane two-dimensional flow is assumed to be steady, and the inviscid compressible fluid is of the Chaplygin (tangent gas) type. The equations governing this flow are transformed to the hodograph plane, where an exact, closed-form solution for the stream function is obtained. The distribution of fluid properties along the plane of symmetry and the shape of the free surface streamlines are determined by transformation back to the physical plane. The problem of a compressible fluid jet penetrating an infinite target of similar material is also solved by considering a limiting case of this solution. Differences between compressible and incompressible flows of the type considered are illustrated.

  3. Topology optimization of unsteady flow problems using the lattice Boltzmann method

    DEFF Research Database (Denmark)

    Nørgaard, Sebastian Arlund; Sigmund, Ole; Lazarov, Boyan Stefanov

    2016-01-01

    This article demonstrates and discusses topology optimization for unsteady incompressible fluid flows. The fluid flows are simulated using the lattice Boltzmann method, and a partial bounceback model is implemented to model the transition between fluid and solid phases in the optimization problems...

  4. Progress with multigrid schemes for hypersonic flow problems

    International Nuclear Information System (INIS)

    Radespiel, R.; Swanson, R.C.

    1995-01-01

    Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm employs upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6 and Mach numbers up to 25. 32 refs., 31 figs., 1 tab

  5. Topology optimization of 3D Stokes flow problems

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Sigmund, Ole; Bendsøe, Martin P.

    … fluid mechanics. In future practice a muTAS could be used by doctors, engineers etc. as a hand-held device with short reaction time that provides on-site analysis of a flowing substance such as blood, polluted water or similar. Borrvall and Petersson [2] paved the road for using the topology … particular at micro scales since they are easily manufacturable and maintenance free. Here we consider topology optimization of 3D Stokes flow problems, which is a reasonable fluid model to use at small scales. The presentation elaborates on effects caused by 3D fluid modelling on the design. Numerical …

  6. Finite element methods for incompressible flow problems

    CERN Document Server

    John, Volker

    2016-01-01

    This book explores finite element methods for incompressible flow problems: Stokes equations, stationary Navier-Stokes equations, and time-dependent Navier-Stokes equations. It focuses on numerical analysis, but also discusses the practical use of these methods and includes numerical illustrations. It also provides a comprehensive overview of analytical results for turbulence models. The proofs are presented step by step, allowing readers to more easily understand the analytical techniques.

  7. A review of scheduling problem and resolution methods in flexible flow shop

    Directory of Open Access Journals (Sweden)

    Tian-Soon Lee

    2019-01-01

    Full Text Available The Flexible flow shop (FFS) is defined as a multi-stage flow shop with multiple parallel machines. The FFS scheduling problem is a complex combinatorial problem which has been intensively studied in many real-world industries. This review paper gives a comprehensive review of the FFS scheduling problem and guides the reader through the different environmental assumptions, system constraints and objective functions relevant to future research work. The published papers are classified into two categories. The first covers the FFS system characteristics and constraints, including the problem differences and limitations defined by different studies. Second, the scheduling performance measures are elaborated and categorized into time-related, job-related and multi-criteria objectives. In addition, the resolution approaches that have been used to solve FFS scheduling problems are discussed. This paper gives a comprehensive guide for the reader with respect to future research work on the FFS scheduling problem.

  8. A trust region interior point algorithm for optimal power flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Wang Min [Hefei University of Technology (China). Dept. of Electrical Engineering and Automation; Liu Shengsong [Jiangsu Electric Power Dispatching and Telecommunication Company (China). Dept. of Automation

    2005-05-01

    This paper presents a new algorithm that uses the trust region interior point method to solve nonlinear optimal power flow (OPF) problems. The OPF problem is solved by a primal/dual interior point method with multiple centrality corrections as a sequence of linearized trust region sub-problems. It is the trust region that controls the linear step size and ensures the validity of the linear model. The convergence of the algorithm is improved through the modification of the trust region sub-problem. Numerical results for standard IEEE systems and two realistic networks ranging in size from 14 to 662 buses are presented. The computational results show that the proposed algorithm is very effective for optimal power flow applications and compares favorably with the successive linear programming (SLP) method. A comparison with the predictor/corrector primal/dual interior point (PCPDIP) method is also made to demonstrate the superiority of the multiple centrality corrections technique. (author)

  9. High order methods for incompressible fluid flow: Application to moving boundary problems

    Energy Technology Data Exchange (ETDEWEB)

    Bjoentegaard, Tormod

    2008-04-15

    Fluid flows with moving boundaries are encountered in a large number of real-life situations, two such types being fluid-structure interaction and free-surface flows. Fluid-structure phenomena are, for instance, apparent in many hydrodynamic applications: wave effects on offshore structures, sloshing and fluid-induced vibrations; and in aeroelasticity: flutter and dynamic response. Free-surface flows can be considered as a special case of a fluid-fluid interaction where one of the fluids is practically inviscid, such as air. This type of flow arises in many disciplines such as marine hydrodynamics, chemical engineering, material processing, and geophysics. The driving forces for free-surface flows may be of large scale, such as gravity or inertial forces, or forces due to surface tension, which operate on a much smaller scale. Free-surface flows with surface tension as a driving mechanism include the flow of bubbles and droplets, and the evolution of capillary waves. In this work we consider incompressible fluid flow, governed by the incompressible Navier-Stokes equations. There are several challenges when simulating moving boundary problems numerically, including: - Spatial discretization - Temporal discretization - Imposition of boundary conditions - Solution strategy for the linear equations. These are some of the issues which will be addressed in this introduction. We first formulate the problem in the arbitrary Lagrangian-Eulerian framework and introduce the weak formulation of the problem. Next, we discuss the spatial and temporal discretization before we move to the imposition of surface tension boundary conditions. In the final section we discuss the solution of the resulting linear system of equations. (Author). refs., figs., tabs

  10. Adaptive boundary conditions for exterior flow problems

    CERN Document Server

    Boenisch, V; Wittwer, S

    2003-01-01

    We consider the problem of solving numerically the stationary incompressible Navier-Stokes equations in an exterior domain in two dimensions. This corresponds to studying the stationary fluid flow past a body. The necessity to truncate for numerical purposes the infinite exterior domain to a finite domain leads to the problem of finding appropriate boundary conditions on the surface of the truncated domain. We solve this problem by providing a vector field describing the leading asymptotic behavior of the solution. This vector field is given in the form of an explicit expression depending on a real parameter. We show that this parameter can be determined from the total drag exerted on the body. Using this fact we set up a self-consistent numerical scheme that determines the parameter, and hence the boundary conditions and the drag, as part of the solution process. We compare the values of the drag obtained with our adaptive scheme with the results from using traditional constant boundary conditions. Computati...

  11. Problems involved in sampling within and outside zones of emission

    Energy Technology Data Exchange (ETDEWEB)

    Oelschlaeger, W

    1973-01-01

    Problems involved in the sampling of plant materials both inside and outside emission zones are considered, especially in regard to trace element analysis. The basic problem revolves around obtaining as accurately as possible an average sample of actual composition. Elimination of error possibilities requires a knowledge of such factors as botanical composition, vegetation states, rains, mass losses in leaf and blossom parts, contamination through the soil, and gaseous or particulate emissions. Sampling and preparation of samples is also considered with respect to quantitative aspects of trace element analysis.

  12. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhao, Changhong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zamzam, Admed S. [University of Minnesota; Sidiropoulos, Nicholas D. [University of Minnesota; Taylor, Josh A. [University of Toronto

    2018-01-12

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.

  13. Contribution of Fuzzy Minimal Cost Flow Problem by Possibility Programming

    Directory of Open Access Journals (Sweden)

    S. Fanati Rashidi

    2010-06-01

    Full Text Available Using the concept of possibility proposed by Zadeh, Luhandjula ([4,8]) and Buckley ([1]) have proposed possibility programming. The formulation of Buckley results in nonlinear programming problems. Negi [6] re-formulated the approach of Buckley by the use of trapezoidal fuzzy numbers and reduced the problem to a fuzzy linear programming problem. Shih and Lee ([7]) used the Negi approach to solve a minimum cost flow problem with fuzzy costs and upper and lower bounds. In this paper we consider the general form of this problem, where all of the parameters and variables are fuzzy, and a model for solving it is proposed

  14. Analytical methods for heat transfer and fluid flow problems

    CERN Document Server

    Weigand, Bernhard

    2015-01-01

    This book describes useful analytical methods by applying them to real-world problems rather than solving the usual over-simplified classroom problems. The book demonstrates the applicability of analytical methods even for complex problems and guides the reader to a more intuitive understanding of approaches and solutions. Although the solution of Partial Differential Equations by numerical methods is the standard practice in industries, analytical methods are still important for the critical assessment of results derived from advanced computer simulations and the improvement of the underlying numerical techniques. Literature devoted to analytical methods, however, often focuses on theoretical and mathematical aspects and is therefore useless to most engineers. Analytical Methods for Heat Transfer and Fluid Flow Problems addresses engineers and engineering students. The second edition has been updated, the chapters on non-linear problems and on axial heat conduction problems were extended. And worked out exam...

  15. A polynomial time algorithm for solving the maximum flow problem in directed networks

    International Nuclear Information System (INIS)

    Tlas, M.

    2015-01-01

    An efficient polynomial time algorithm for solving maximum flow problems has been proposed in this paper. The algorithm is basically based on the binary representation of capacities; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m^2 r) time, where r is the smallest integer greater than or equal to log B, and B is the largest arc capacity of the network. A numerical example has been illustrated using this proposed algorithm. (author)
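
    A short capacity-scaling sketch in the same spirit, augmenting along paths found on residual networks while the capacity threshold is halved over roughly log B phases, is given below. It is a generic textbook illustration, not the algorithm of the paper, and the breadth-first search is only one possible choice of path-finding routine.

    # Capacity-scaling maximum flow: augment along residual paths whose arcs all have
    # residual capacity >= delta, halving delta each phase (about log B phases).
    # Generic illustration in the spirit of the abstract, not the paper's algorithm.
    from collections import deque

    def max_flow(n, arcs, s, t):
        cap, adj = {}, [[] for _ in range(n)]
        for u, v, c in arcs:
            cap[(u, v)] = cap.get((u, v), 0) + c
            cap.setdefault((v, u), 0)
            adj[u].append(v); adj[v].append(u)
        delta = 1
        while delta * 2 <= max(c for _, _, c in arcs):
            delta *= 2
        flow = 0
        while delta >= 1:
            while True:
                parent, queue = {s: s}, deque([s])
                while queue and t not in parent:       # BFS on the delta-residual graph
                    u = queue.popleft()
                    for v in adj[u]:
                        if v not in parent and cap[(u, v)] >= delta:
                            parent[v] = u
                            queue.append(v)
                if t not in parent:
                    break
                v, path = t, []
                while v != s:
                    path.append((parent[v], v)); v = parent[v]
                push = min(cap[e] for e in path)
                for u, v in path:                      # update residual capacities
                    cap[(u, v)] -= push; cap[(v, u)] += push
                flow += push
            delta //= 2
        return flow

    arcs = [(0, 1, 10), (0, 2, 5), (1, 2, 15), (1, 3, 5), (2, 3, 10)]
    print(max_flow(4, arcs, 0, 3))   # expected maximum flow: 15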

  16. Determination of free boundary problem of flow through porous media

    International Nuclear Information System (INIS)

    Tavares Junior, H.M.; Souza, A.J. de

    1989-01-01

    This paper deals with a free boundary problem of flow through porous media, which is solved by a simplicial method combined with mesh refinement. A variational method on a fixed domain is utilized. (author)

  17. Problems of unsteady temperature measurements in a pulsating flow of gas

    International Nuclear Information System (INIS)

    Olczyk, A

    2008-01-01

    Unsteady flow temperature is one of the most difficult and complex flow parameters to measure. Main problems concern insufficient dynamic properties of applied sensors and an interpretation of recorded signals, composed of static and dynamic temperatures. An attempt is made to solve these two problems in the case of measurements conducted in a pulsating flow of gas in the 0–200 Hz range of frequencies, which corresponds to real conditions found in exhaust pipes of modern diesel engines. As far as sensor dynamics is concerned, an analysis of requirements related to the thermometer was made, showing that there was no possibility of assuring such a high frequency band within existing solutions. Therefore, a method of double-channel correction of sensor dynamics was proposed and experimentally tested. The results correspond well with the calculations made by means of the proposed model of sensor dynamics. In the case of interpretation of the measured temperature signal, a method for distinguishing its two components was proposed. This decomposition considerably helps with a correct interpretation of unsteady flow phenomena in pipes
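
    The first-order lag model underlying such corrections can be sketched as follows: for a sensor with time constant tau, the gas temperature satisfies T_gas ≈ T_meas + tau * dT_meas/dt, so the measured signal can be compensated numerically. The single-channel sketch below assumes tau is known and is only an illustration of the principle, not the double-channel correction method of the paper; all numbers are hypothetical.

    # Single-channel compensation of a first-order temperature sensor:
    # T_gas ~= T_meas + tau * dT_meas/dt.  Illustration of the principle only;
    # the paper's double-channel method also addresses the unknown sensor dynamics.
    import math

    tau, dt, f = 0.05, 0.001, 20.0          # sensor time constant [s], step [s], signal frequency [Hz]
    t_meas, recovered, true_gas = 0.0, [], []

    for i in range(1000):
        t = i * dt
        gas = 100.0 + 10.0 * math.sin(2 * math.pi * f * t)   # pulsating gas temperature
        t_prev = t_meas
        t_meas += dt / tau * (gas - t_meas)                  # first-order sensor response
        recovered.append(t_meas + tau * (t_meas - t_prev) / dt)
        true_gas.append(gas)

    err = max(abs(r - g) for r, g in zip(recovered[100:], true_gas[100:]))
    print(f"max reconstruction error after the initial transient: {err:.2f} K")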

  18. Modified FlowCAM procedure for quantifying size distribution of zooplankton with sample recycling capacity.

    Directory of Open Access Journals (Sweden)

    Esther Wong

    Full Text Available We have developed a modified FlowCAM procedure for efficiently quantifying the size distribution of zooplankton. The modified method offers the following new features: (1) it prevents animals from settling and clogging by providing constant bubbling in the sample container; (2) it prevents damage to sample animals and facilitates recycling by replacing the built-in peristaltic pump with an external syringe pump that generates negative pressure by drawing air from the receiving conical flask (i.e. acting as a vacuum pump), creating a steady flow that transfers plankton from the sample container toward the main flowcell of the imaging system and finally into the receiving flask; (3) it aligns samples in advance of imaging and prevents clogging with an additional flowcell placed ahead of the main flowcell. These modifications were designed to overcome the difficulties of applying the standard FlowCAM procedure to studies where the number of individuals per sample is small, since the FlowCAM can only image a subset of a sample. Our effective recycling procedure allows users to pass the same sample through the FlowCAM many times (i.e. bootstrapping the sample) in order to generate a good size distribution. Although more advanced FlowCAM models are equipped with syringe pumps and Field of View (FOV) flowcells which can image all particles passing through the flow field, we note that these advanced setups are very expensive, offer limited syringe and flowcell sizes, and do not guarantee recycling. In contrast, our modifications are inexpensive and flexible. Finally, we compared the biovolumes estimated by automated FlowCAM image analysis versus conventional manual measurements, and found that the size of an individual zooplankter can be estimated by the FlowCAM image system after ground truthing.

  19. Improved teaching-learning-based and JAYA optimization algorithms for solving flexible flow shop scheduling problems

    Science.gov (United States)

    Buddala, Raviteja; Mahapatra, Siba Sankar

    2017-11-01

    Flexible flow shop (or a hybrid flow shop) scheduling problem is an extension of classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages) with each stage having only one machine. If any stage contains more than one machine for providing alternate processing facility, then the problem becomes a flexible flow shop problem (FFSP). FFSP which contains all the complexities involved in a simple flow shop and parallel machine scheduling problems is a well-known NP-hard (Non-deterministic polynomial time) problem. Owing to high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving FFSP is rather tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and JAYA algorithm are chosen for the study because these are not only recent meta-heuristics but they do not require tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after few iterations and get trapped at the local optima. To alleviate such drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, mutation strategy (inspired from genetic algorithm) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
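
    The parameter-free character of JAYA comes from its update rule, x' = x + r1*(best - |x|) - r2*(worst - |x|), which involves only random numbers and the best and worst members of the population. The continuous sketch below illustrates the canonical rule on a toy function; it is not the discrete FFSP variant with local search and mutation developed in the paper.

    # Canonical (continuous) JAYA update on a toy sphere function -- an illustration
    # of the parameter-free update rule, not the discrete FFSP variant of the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: np.sum(x**2, axis=1)          # toy objective to minimize

    pop = rng.uniform(-10, 10, size=(20, 5))    # 20 candidates, 5 variables each
    for _ in range(200):
        fit = f(pop)
        best, worst = pop[fit.argmin()], pop[fit.argmax()]
        r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
        trial = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
        improve = f(trial) < fit                # greedy selection, as in JAYA
        pop[improve] = trial[improve]
    print("best objective found:", f(pop).min())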

  20. TOUGH Simulations of the Updegraff's Set of Fluid and Heat Flow Problems

    Energy Technology Data Exchange (ETDEWEB)

    Moridis, G.J.; Pruess (editor), K.

    1992-11-01

    The TOUGH code [Pruess, 1987] for two-phase flow of water, air, and heat in permeable media has been exercised on a suite of test problems originally selected and simulated by C. D. Updegraff [1989]. These include five 'verification' problems for which analytical or numerical solutions are available, and three 'validation' problems that model laboratory fluid and heat flow experiments. All problems could be run without any code modifications (*). Good and efficient numerical performance, as well as accurate results, were obtained throughout. Additional code verification and validation problems from the literature are briefly summarized, and suggestions are given for proper applications of TOUGH and related codes.

  1. Temperature and flow fields in samples heated in monoellipsoidal mirror furnaces

    Science.gov (United States)

    Rivas, D.; Haya, R.

    The temperature field in samples heated in monoellipsoidal mirror furnaces will be analyzed. The radiation heat exchange between the sample and the mirror is formulated analytically, taking into account multiple reflections at the mirror. It will be shown that the effect of these multiple reflections on the heating process is quite important and, as a consequence, the effect of the mirror reflectance on the temperature field is quite strong. The conduction-radiation model will be used to simulate the heating process in the floating-zone technique in microgravity conditions; important parameters like the Marangoni number (which drives the thermocapillary flow in the melt) and the temperature gradient at the melt-crystal interface will be estimated. The model will be validated by comparison with experimental data. The case of samples mounted in a wall-free configuration (as in the MAXUS-4 programme) will also be considered. Application to the case of compound samples (graphite-silicon-graphite) will be made; the melting of the silicon part and the surface temperature distribution in the melt will be analyzed. Of special interest is the temperature difference between the two graphite rods that hold the silicon part, since it drives the thermocapillary flow in the melt. This thermocapillary flow will be studied after coupling the previous model with the convective effects. The possibility of counterbalancing this flow by controlled vibration of the graphite rods will be studied as well. Numerical results show that suppressing the thermocapillary flow can be accomplished quite effectively.

  2. MULTICRITERIA HYBRID FLOW SHOP SCHEDULING PROBLEM: LITERATURE REVIEW, ANALYSIS, AND FUTURE RESEARCH

    Directory of Open Access Journals (Sweden)

    Marcia de Fatima Morais

    2014-12-01

    Full Text Available This research focuses on the Hybrid Flow Shop production scheduling problem, which is one of the most difficult problems to solve. The literature points to several studies that focus on the Hybrid Flow Shop scheduling problem with single-criterion objective functions. However, many real-world problems involve several objective functions, which can often compete and conflict, leading researchers to direct their efforts to the development of methods that take this variant into consideration. The goal of the study is to review and analyze the methods reported in the literature for solving the Hybrid Flow Shop production scheduling problem with multicriteria functions. The analysis covered papers published over the years and considered the parallel machine types, the approach used to develop solution methods, the type of method developed, the objective function, the performance criterion adopted, and the additional constraints considered. The results of the review and analysis of 46 papers showed opportunities for future research on this topic, including the following: (i) use uniform and dedicated parallel machines, (ii) use exact and metaheuristic approaches, (iv) develop lower and upper bounds, dominance relations, and different search strategies to improve the computational time of the exact methods, (v) develop other types of metaheuristics, (vi) work with anticipatory setups, and (vii) add constraints faced by the production systems themselves.

  3. Adaptive sampling method in deep-penetration particle transport problem

    International Nuclear Information System (INIS)

    Wang Ruihong; Ji Zhicheng; Pei Lucheng

    2012-01-01

    Deep-penetration problems have been among the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle transport random-walk system that treats the emission point as a sampling station is built. Then, an adaptive sampling scheme is derived to obtain a better solution from the accumulated information. The main advantage of the adaptive scheme is that it chooses the most suitable sampling number for the emission point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is introduced. Its main principle is to define an importance function according to the particle state and to ensure that the number of samples drawn from the emission particle is proportional to the importance function. The numerical results show that the adaptive scheme based on the emission point as a station can overcome, to some degree, the difficulty of underestimating the result, and the adaptive importance sampling method gives satisfactory results as well. (authors)
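
    The principle of biasing source sampling toward important particle states can be illustrated with a toy attenuation problem: estimating the small probability that an exponentially distributed free path exceeds a thick shield. The exponential-tilting density below is a textbook importance-sampling choice used purely for illustration; it is not the adaptive scheme of the paper, and the shield thickness is a made-up number.

    # Importance sampling for a toy deep-penetration estimate: P(X > d) with
    # X ~ Exp(1) and a thick shield of d mean free paths.  Textbook biasing only,
    # not the paper's adaptive scheme.
    import math, random

    random.seed(1)
    d, n = 20.0, 100_000                      # shield thickness, number of samples
    exact = math.exp(-d)

    # Analog (unbiased) sampling essentially never scores for large d.
    analog = sum(1 for _ in range(n) if random.expovariate(1.0) > d) / n

    # Biased sampling from Exp(rate=1/d), weighted by true pdf / biased pdf.
    total = 0.0
    for _ in range(n):
        x = random.expovariate(1.0 / d)
        weight = math.exp(-x) / ((1.0 / d) * math.exp(-x / d))
        total += weight if x > d else 0.0
    biased = total / n

    print(f"exact {exact:.3e}  analog {analog:.3e}  importance-sampled {biased:.3e}")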

  4. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    Science.gov (United States)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed at each machine in the same sequence. The objective functions are minimizing the completion time (makespan), the total tardiness time, and the total machine idle time. Flow shop scheduling models keep evolving to represent real production systems more accurately. Since flow shop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One such metaheuristic is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems. Since flow shop scheduling is a discrete optimization problem, we need to modify PSO to fit the problem. The modification is done by using a probability transition matrix mechanism. To handle the multi-objective problem, we use a Pareto-optimal approach (MPSO). The results of MPSO are better than those of PSO because the MPSO solution set yields a higher probability of finding the optimal solution; moreover, the MPSO solution set is closer to the optimal solution
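
    The Pareto-optimal bookkeeping used for the three objectives (makespan, total tardiness, total machine idle time) can be sketched as a simple non-dominated filter, as below. The objective vectors are hypothetical and the filter is a generic illustration, not the MPSO mechanism of the paper.

    # Non-dominated (Pareto) filtering of candidate schedules evaluated on three
    # minimization objectives: makespan, total tardiness, total machine idle time.
    # The objective vectors are hypothetical, for illustration only.

    def dominates(a, b):
        """True if schedule 'a' is at least as good as 'b' in every objective
        and strictly better in at least one (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(solutions):
        return [s for s in solutions
                if not any(dominates(other, s) for other in solutions if other is not s)]

    schedules = [(120, 35, 14), (118, 40, 12), (125, 30, 15), (126, 36, 16), (130, 28, 20)]
    print(pareto_front(schedules))   # (126, 36, 16) is dominated by (120, 35, 14)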

  5. An Analytical Model for Multilayer Well Production Evaluation to Overcome Cross-Flow Problem

    KAUST Repository

    Hakiki, Farizal; Wibowo, Aris T.; Rahmawati, Silvya D.; Yasutra, Amega; Sukarno, Pudjo

    2017-01-01

    One of the major concerns in a multi-layer system is that interlayer cross-flow may occur if reservoir fluids are produced from commingled layers that have unequal initial pressures. A reservoir commonly has a higher average reservoir pressure (pore fluid pressure) as it goes deeper. This trend, however, is not necessarily followed by the reservoir productivity or injectivity. A layer with a relatively low average pressure and high injectivity tends to experience the cross-flow problem: fluid from the bottom layer flows into the upper layer, restricting the upper-layer fluid from flowing into the wellbore, as if the bottom layer were applying an injection treatment. The study uses the productivity index, rather than the injectivity index, as the parameter for addressing the cross-flow problem, since the well considered is a production well. The analytical study models the multilayer reservoir with the aim of avoiding the cross-flow problem. The analytical model was tested with both hypothetical and real field data. The scope of this study is to: (a) develop a mathematics-based solution to determine the production rate from each layer; and (b) assess different scenarios to optimize the production rate, namely the pump setting depth and the performance of an in-situ choke (ISC) installation. The ISC acts like an inflow control device (ICD) and helps to reduce the occurrence of cross-flow. This study employed a macro program to write the code and develop the interface; the analytical model is solved with a fast iterative procedure. Comparisons show that the mathematics-based solution is in good agreement with results derived from commercial software.

  6. An Analytical Model for Multilayer Well Production Evaluation to Overcome Cross-Flow Problem

    KAUST Repository

    Hakiki, Farizal

    2017-10-17

    One of the major concerns in a multi-layer system is that interlayer cross-flow may occur if reservoir fluids are produced from commingled layers that have unequal initial pressures. A reservoir commonly has a higher average reservoir pressure (pore fluid pressure) as it goes deeper. This trend, however, is not necessarily followed by the reservoir productivity or injectivity. A layer with a relatively low average pressure and high injectivity tends to experience the cross-flow problem: fluid from the bottom layer flows into the upper layer, restricting the upper-layer fluid from flowing into the wellbore, as if the bottom layer were applying an injection treatment. The study uses the productivity index, rather than the injectivity index, as the parameter for addressing the cross-flow problem, since the well considered is a production well. The analytical study models the multilayer reservoir with the aim of avoiding the cross-flow problem. The analytical model was tested with both hypothetical and real field data. The scope of this study is to: (a) develop a mathematics-based solution to determine the production rate from each layer; and (b) assess different scenarios to optimize the production rate, namely the pump setting depth and the performance of an in-situ choke (ISC) installation. The ISC acts like an inflow control device (ICD) and helps to reduce the occurrence of cross-flow. This study employed a macro program to write the code and develop the interface; the analytical model is solved with a fast iterative procedure. Comparisons show that the mathematics-based solution is in good agreement with results derived from commercial software.

  7. On non-permutation solutions to some two machine flow shop scheduling problems

    NARCIS (Netherlands)

    V. Strusevich (Vitaly); P.J. Zwaneveld (Peter)

    1994-01-01

    In this paper, we study two versions of the two machine flow shop scheduling problem, where schedule length is to be minimized. First, we consider the two machine flow shop with setup, processing, and removal times separated. It is shown that an optimal solution need not be a permutation

  8. Chance constrained problems: penalty reformulation and performance of sample approximation technique

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin

    2012-01-01

    Roč. 48, č. 1 (2012), s. 105-122 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional research plan: CEZ:AV0Z10750506 Keywords : chance constrained problems * penalty functions * asymptotic equivalence * sample approximation technique * investment problem Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.619, year: 2012 http://library.utia.cas.cz/separaty/2012/E/branda-chance constrained problems penalty reformulation and performance of sample approximation technique.pdf

  9. Solving implicit multi-mesh flow and conjugate heat transfer problems with RELAP-7

    International Nuclear Information System (INIS)

    Zou, L.; Peterson, J.; Zhao, H.; Zhang, H.; Andrs, D.; Martineau, R.

    2013-01-01

    The fully implicit simulation capability of RELAP-7 to solve multi-mesh flow and conjugate heat transfer problems for reactor system safety analysis is presented. Compared to general single-mesh simulations, reactor system safety analysis codes pose unique challenges due to their highly simplified, interconnected, one-dimensional, and zero-dimensional flow networks describing multiple physics with significantly different time and length scales. To use a Jacobian-free Newton Krylov-type solver, preconditioning is generally required for the Krylov method. The uniqueness of reactor safety analysis codes in treating the interconnected flow network and conjugate heat transfer also introduces challenges in providing the preconditioning matrix. Typical flow and conjugate heat transfer problems involved in reactor safety analysis using RELAP-7, as well as the special treatment of the preconditioning matrix, are presented in detail. (authors)

  10. Flow proportional sampling of low level liquid effluent

    International Nuclear Information System (INIS)

    Colley, D.; Jenkins, R.

    1989-01-01

    A flow proportional sampler for use on low level radioactive liquid effluent has been developed for installation on all CEGB nuclear power stations. The sampler operates by drawing effluent continuously from the main effluent pipeline, through a sampler loop, and returning it to the pipeline. The effluent in this loop is sampled by taking small, frequent aliquots using a linear acting shuttle valve. The frequency of operation of this valve is controlled by a flowmeter installed in the effluent line; the sampling rate is directly proportional to the effluent flowrate. (author)

  11. Optimal Results and Numerical Simulations for Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Tao Ren

    2012-01-01

    Full Text Available This paper considers the m-machine flow shop problem with two objectives: makespan with release dates and total quadratic completion time, respectively. For F_m | r_j | C_max, we prove the asymptotic optimality of any dense schedule when the problem scale is large enough. For F_m || ΣC_j², an improvement strategy with local search is presented to promote the performance of the classical SPT heuristic. At the end of the paper, simulations show the effectiveness of the improvement strategy.
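
    A compact sketch of the ingredients discussed above: completion times of a permutation flow shop follow the standard recursion C(i, j) = max(C(i-1, j), C(i, j-1)) + p(i, j), and an SPT order (jobs sorted by total processing time) serves as a baseline heuristic for the quadratic completion time objective. The data, the absence of release dates, and the tie-breaking rule are simplifying assumptions, not the paper's setting.

    # Permutation flow shop: completion times via the standard recursion, with the
    # SPT order (jobs sorted by total processing time) as a baseline heuristic for
    # the total quadratic completion time objective.  Data are hypothetical.

    def completion_times(p, order):
        """p[j][i] = processing time of job j on machine i; returns C_j per job."""
        m = len(p[0])
        machine_free = [0] * m
        completion = {}
        for j in order:
            c = 0
            for i in range(m):
                c = max(machine_free[i], c) + p[j][i]   # wait for machine and for job
                machine_free[i] = c
            completion[j] = c
        return completion

    p = [[3, 6, 2], [5, 1, 4], [2, 2, 7], [6, 3, 1]]          # 4 jobs, 3 machines
    spt = sorted(range(len(p)), key=lambda j: sum(p[j]))      # SPT on total work
    C = completion_times(p, spt)
    print("SPT order:", spt)
    print("makespan:", max(C.values()), " sum C_j^2:", sum(c * c for c in C.values()))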

  12. Existence and uniqueness of solution for a model problem of transonic flow

    International Nuclear Information System (INIS)

    Tangmanee, S.

    1985-11-01

    A model problem of transonic flow, the Tricomi equation, bounded by a rectangular-curve boundary is studied. We transform the model problem into a symmetric positive system and pose an admissible boundary condition. We show that, under some conditions, the existence and uniqueness of the solution are guaranteed. (author)

  13. Mixed hybrid finite elements and streamline computation for the potential flow problem

    NARCIS (Netherlands)

    Kaasschieter, E.F.; Huijben, A.J.M.

    1992-01-01

    An important class of problems in mathematical physics involves equations of the form -∇ · (A∇φ) = f. In a variety of problems it is desirable to obtain an accurate approximation of the flow quantity u = -A∇φ. Such an accurate approximation can be determined by the mixed finite element method. In

  14. Solving probabilistic inverse problems rapidly with prior samples

    NARCIS (Netherlands)

    Käufl, Paul; Valentine, Andrew P.; de Wit, Ralph W.; Trampert, Jeannot

    2016-01-01

    Owing to the increasing availability of computational resources, in recent years the probabilistic solution of non-linear, geophysical inverse problems by means of sampling methods has become increasingly feasible. Nevertheless, we still face situations in which a Monte Carlo approach is not

  15. Finite element flow analysis; Proceedings of the Fourth International Symposium on Finite Element Methods in Flow Problems, Chuo University, Tokyo, Japan, July 26-29, 1982

    Science.gov (United States)

    Kawai, T.

    Among the topics discussed are the application of FEM to nonlinear free surface flow, Navier-Stokes shallow water wave equations, incompressible viscous flows and weather prediction, the mathematical analysis and characteristics of FEM, penalty function FEM, convective, viscous, and high Reynolds number FEM analyses, the solution of time-dependent, three-dimensional and incompressible Navier-Stokes equations, turbulent boundary layer flow, FEM modeling of environmental problems over complex terrain, and FEM's application to thermal convection problems and to the flow of polymeric materials in injection molding processes. Also covered are FEMs for compressible flows, including boundary layer flows and transonic flows, hybrid element approaches for wave hydrodynamic loadings, FEM acoustic field analyses, and FEM treatment of free surface flow, shallow water flow, seepage flow, and sediment transport. Boundary element methods and FEM computational technique topics are also discussed. For individual items see A84-25834 to A84-25896

  16. Grid dependency of wall heat transfer for simulation of natural convection flow problems

    NARCIS (Netherlands)

    Loomans, M.G.L.C.; Seppänen, O.; Säteri, J.

    2007-01-01

    In the indoor environment natural convection is a well-known air flow phenomenon. In numerical simulations applying the CFD technique it is also known as a flow problem that is difficult to solve. Alternatives are available to overcome the limitations of the default approach (standard k-ε model with

  17. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    Science.gov (United States)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
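
    A minimal sketch of the WSPT idea on a permutation flow shop, evaluated against the total weighted quadratic completion time objective: jobs are ranked by total processing time divided by weight. The data below are hypothetical and chosen to be "agreeable" (shorter jobs carry larger weights), loosely in the spirit of the consistency condition mentioned in the abstract; this is not the paper's analysis or its branch-and-bound algorithm.

    # WSPT ordering on a permutation flow shop evaluated on sum of w_j * C_j^2.
    # Hypothetical, "agreeable" data; illustration only.

    def objective(p, w, order):
        m = len(p[0])
        machine_free = [0] * m
        total = 0
        for j in order:
            c = 0
            for i in range(m):
                c = max(machine_free[i], c) + p[j][i]   # flow-shop completion recursion
                machine_free[i] = c
            total += w[j] * c * c
        return total

    p = [[2, 3], [4, 5], [1, 2], [6, 4]]       # 4 jobs on 2 machines
    w = [3, 2, 4, 1]                           # larger weights on shorter jobs
    wspt = sorted(range(len(p)), key=lambda j: sum(p[j]) / w[j])
    print("WSPT order:", wspt, " weighted quadratic completion time:", objective(p, w, wspt))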

  18. New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times

    Science.gov (United States)

    Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid

    2017-09-01

    In the literature, applications of the multi-objective dynamic scheduling problem and simple priority rules are widely studied. Although these simple rules are not efficient enough, due to their simplicity and lack of general insight, composite dispatching rules have a very suitable performance because they result from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objective of the problem is the minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. It is clear from the experimental results that the composite dispatching rules formed by genetic programming perform better in minimizing mean flow time and mean tardiness than the others.

  19. Managing the Budget: Stock-Flow Reasoning and the CO2 Accumulation Problem.

    Science.gov (United States)

    Newell, Ben R; Kary, Arthur; Moore, Chris; Gonzalez, Cleotilde

    2016-01-01

    The majority of people show persistent poor performance in reasoning about "stock-flow problems" in the laboratory. An important example is the failure to understand the relationship between the "stock" of CO2 in the atmosphere, the "inflow" via anthropogenic CO2 emissions, and the "outflow" via natural CO2 absorption. This study addresses potential causes of reasoning failures in the CO2 accumulation problem and reports two experiments involving a simple re-framing of the task as managing an analogous financial (rather than CO2 ) budget. In Experiment 1 a financial version of the task that required participants to think in terms of controlling debt demonstrated significant improvements compared to a standard CO2 accumulation problem. Experiment 2, in which participants were invited to think about managing savings, suggested that this improvement was fortuitous and coincidental rather than due to a fundamental change in understanding the stock-flow relationships. The role of graphical information in aiding or abetting stock-flow reasoning was also explored in both experiments, with the results suggesting that graphs do not always assist understanding. The potential for leveraging the kind of reasoning exhibited in such tasks in an effort to change people's willingness to reduce CO2 emissions is briefly discussed. Copyright © 2015 Cognitive Science Society, Inc.
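
    The core stock-flow arithmetic that participants struggle with is just an accumulation: each period the stock changes by inflow minus outflow, so stabilizing the stock requires inflow to equal outflow, not merely to stop growing. The numbers below are illustrative only, not the emissions scenarios used in the experiments.

    # Accumulation arithmetic behind the stock-flow task: the stock changes by
    # inflow - outflow each period.  Numbers are illustrative only.
    stock, outflow = 800.0, 10.0          # e.g. a stock of CO2 and a fixed natural absorption

    def run(inflows, stock, outflow):
        for inflow in inflows:
            stock += inflow - outflow     # the stock keeps rising while inflow > outflow
        return stock

    print(run([20.0] * 10, stock, outflow))                    # constant emissions: stock rises to 900
    print(run([20.0 - i for i in range(10)], stock, outflow))  # falling emissions, stock still rises (855)
    print(run([10.0] * 10, stock, outflow))                    # inflow == outflow: stock stabilizes at 800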

  20. Automated injection of slurry samples in flow-injection analysis

    NARCIS (Netherlands)

    Hulsman, M.H.F.M.; Hulsman, M.; Bos, M.; van der Linden, W.E.

    1996-01-01

    Two types of injectors are described for introducing solid samples as slurries in flow analysis systems. A time-based and a volume-based injector based on multitube solenoid pinch valves were built; both can be characterized as hydrodynamic injectors. Reproducibility of the injections of dispersed

  1. Parallel patterns determination in solving cyclic flow shop problem with setups

    Directory of Open Access Journals (Sweden)

    Bożejko Wojciech

    2017-06-01

    The subject of this work is a new idea of blocks for the cyclic flow shop problem with setup times, using multiple patterns of different sizes determined for each machine, each constituting an optimal schedule of cities for the traveling salesman problem (TSP). We propose to take advantage of the Intel Xeon Phi parallel computing environment during the determination of the so-called ’blocks’ based on these patterns, significantly improving the quality of the obtained results.

  2. Scheduling stochastic two-machine flow shop problems to minimize expected makespan

    Directory of Open Access Journals (Sweden)

    Mehdi Heydari

    2013-07-01

    During the past few years, despite tremendous contributions on the deterministic flow shop problem, only a limited number of works have been dedicated to stochastic cases. This paper examines stochastic scheduling problems in a two-machine flow shop environment for expected makespan minimization, where the processing times of jobs are normally distributed. Since jobs have stochastic processing times, to minimize the expected makespan, the expected sum of the second machine’s free times is minimized. In other words, by minimizing the waiting times of the second machine, it is possible to reach the minimum of the objective function. A mathematical method is proposed which utilizes the properties of the normal distribution. Furthermore, this method can be used as a heuristic for other distributions, as long as the means and variances are available. The performance of the proposed method is explored using some numerical examples.
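
    The paper's analytical treatment of the normal distribution is not reproduced here; as a baseline, the expected makespan of one fixed sequence in a two-machine flow shop with normally distributed processing times can simply be estimated by simulation, as in the sketch below (the means, standard deviations and the truncation at zero are invented assumptions).

        # Sketch: Monte Carlo estimate of the expected makespan of a fixed job
        # sequence in a two-machine flow shop with normal processing times.
        import random

        def makespan_two_machines(times):
            """times[j] = (t_machine1, t_machine2) in the chosen sequence."""
            c1 = c2 = 0.0
            for t1, t2 in times:
                c1 += t1                  # machine 1 processes jobs back to back
                c2 = max(c2, c1) + t2     # machine 2 may have to wait for machine 1
            return c2

        # (mean, std) of the processing times on machines 1 and 2 for three jobs (invented).
        jobs = [((5.0, 1.0), (3.0, 0.5)), ((2.0, 0.3), (6.0, 1.2)), ((4.0, 0.8), (4.0, 0.8))]

        random.seed(0)
        n = 20000
        total = 0.0
        for _ in range(n):
            draw = [(max(0.0, random.gauss(*m1)), max(0.0, random.gauss(*m2))) for m1, m2 in jobs]
            total += makespan_two_machines(draw)
        print(total / n)                  # estimated expected makespan of this sequence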

  3. Testing Homogeneity in a Semiparametric Two-Sample Problem

    Directory of Open Access Journals (Sweden)

    Yukun Liu

    2012-01-01

    We study a two-sample homogeneity testing problem, in which one sample comes from a population with density f(x) and the other is from a mixture population with mixture density (1−λ)f(x)+λg(x). This problem arises naturally in many statistical applications, such as tests for partial differential gene expression in microarray studies or genetic studies of gene mutation. Under the semiparametric assumption g(x) = f(x)e^(α+βx), a penalized empirical likelihood ratio test could be constructed, but its implementation is hindered by the fact that there is neither a feasible algorithm for computing the test statistic nor any available research results on its theoretical properties. To circumvent these difficulties, we propose an EM test based on the penalized empirical likelihood. We prove that the EM test has a simple chi-square limiting distribution, and we also demonstrate its competitive testing performance by simulations. A real-data example is used to illustrate the proposed methodology.
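
    For readability, the model behind the test can be restated in display form; the sample labels and sizes (X's, Y's, m, n) are added here only for notation, and the null hypothesis below is one standard way of expressing homogeneity in this setting.

        \[
          X_1,\dots,X_m \sim f(x), \qquad
          Y_1,\dots,Y_n \sim (1-\lambda)\,f(x) + \lambda\,g(x), \qquad
          g(x) = f(x)\,e^{\alpha + \beta x},
        \]
        \[
          H_0:\ \lambda = 0 \ \text{ or } \ (\alpha,\beta) = (0,0)
          \qquad \text{versus} \qquad
          H_1:\ \lambda > 0 \ \text{ and } \ (\alpha,\beta) \neq (0,0).
        \]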

  4. Dynamic flow-through approaches for metal fractionation in environmentally relevant solid samples

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald; Chomchoei, Roongrat

    2005-01-01

    generations of flow-injection analysis. Special attention is also paid to a novel, robust, non-invasive approach for on-site continuous sampling of soil solutions, capitalizing on flow-through microdialysis, which presents itself as an appealing complementary approach to the conventional lysimeter experiments...

  5. Heuristics methods for the flow shop scheduling problem with separated setup times

    Directory of Open Access Journals (Sweden)

    Marcelo Seido Nagano

    2012-06-01

    This paper deals with the permutation flow shop scheduling problem with separated machine setup times. As a result of an investigation of the problem characteristics, four heuristic methods are proposed, with sequence-construction procedures based on an analogy with the asymmetric traveling salesman problem and with the objective of minimizing makespan. Experimental results show that one of the new heuristic methods provides high-quality solutions in comparison with the methods considered in the literature.

  6. A Special Class of Univalent Functions in Hele-Shaw Flow Problems

    Directory of Open Access Journals (Sweden)

    Paula Curt

    2011-01-01

    We study the time evolution of the free boundary of a viscous fluid for planar flows in Hele-Shaw cells under injection. Applying methods from the theory of univalent functions, we prove the invariance in time of the Φ-likeness property (a geometric property which includes starlikeness and spiral-likeness) for two basic cases: the inner problem and the outer problem. We study both zero and nonzero surface tension models. Certain particular cases are also presented.

  7. Robust numerical methods for boundary-layer equations for a model problem of flow over a symmetric curved surface

    NARCIS (Netherlands)

    A.R. Ansari; B. Hossain; B. Koren (Barry); G.I. Shishkin (Gregori)

    2007-01-01

    We investigate the model problem of flow of a viscous incompressible fluid past a symmetric curved surface when the flow is parallel to its axis. This problem is known to exhibit boundary layers. Also, the problem does not have solutions in closed form; it is modelled by boundary-layer

  8. A New Artificial Immune System Algorithm for Multiobjective Fuzzy Flow Shop Problems

    Directory of Open Access Journals (Sweden)

    Cengiz Kahraman

    2009-12-01

    In this paper a new artificial immune system (AIS) algorithm is proposed to solve multi-objective fuzzy flow shop scheduling problems. A new mutation operator is also described for this AIS. Fuzzy sets are used to model processing times and due dates. The objectives are to minimize the average tardiness and the number of tardy jobs. The developed AIS algorithm is tested on real-world data collected at an engine cylinder liner manufacturing process. The feasibility and effectiveness of the proposed AIS are demonstrated by comparing it with genetic algorithms. Computational results demonstrate that the proposed AIS algorithm is a more effective meta-heuristic for multi-objective flow shop scheduling problems with fuzzy processing times and due dates.

  9. Problem and Pathological Gambling in a Sample of Casino Patrons

    OpenAIRE

    Fong, Timothy W.; Campos, Michael D.; Brecht, Mary-Lynn; Davis, Alice; Marco, Adrienne; Pecanha, Viviane; Rosenthal, Richard J.

    2010-01-01

    Relatively few studies have examined gambling problems among individuals in a casino setting. The current study sought to examine the prevalence of gambling problems among a sample of casino patrons and examine alcohol and tobacco use, health status, and quality of life by gambling problem status. To these ends, 176 casino patrons were recruited by going to a Southern California casino and requesting that they complete an anonymous survey. Results indicated the following lifetime rates for at...

  10. On the problems of PPS sampling in multi-character surveys ...

    African Journals Online (AJOL)

    This paper, which is on the problems of PPS sampling in multi-character surveys, compares the efficiency of some estimators used in PPSWR sampling for multiple characteristics. From a superpopulation model, we computed the expected variances of the different estimators for each of the first two finite populations ...

  11. An analytical solution to the heat transfer problem in thick-walled hunt flow

    International Nuclear Information System (INIS)

    Bluck, Michael J; Wolfendale, Michael J

    2017-01-01

    Highlights:
    • Convective heat transfer in Hunt-type flow of a liquid metal in a rectangular duct.
    • Analytical solution for the H1 (constant peripheral temperature) condition in a rectangular duct.
    • New H1 result demonstrating the enhancement of heat transfer due to flow distortion by the applied magnetic field.
    • Analytical solution for the H2 (constant peripheral heat flux) condition in a rectangular duct.
    • New H2 result demonstrating the reduction of heat transfer due to flow distortion by the applied magnetic field.
    • Results are important for validation of CFD in magnetohydrodynamics and for implementation of systems-code approaches.

    Abstract: The flow of a liquid metal in a rectangular duct, subject to a strong transverse magnetic field, is of interest in a number of applications. An important application of such flows is in the context of coolants in fusion reactors, where heat is transferred to a lead-lithium eutectic. It is vital, therefore, that the heat transfer mechanisms are understood. Forced convection heat transfer is strongly dependent on the flow profile. In the hydrodynamic case, Nusselt numbers and the like have long been well characterised in duct geometries. In the case of liquid metals in strong magnetic fields (magnetohydrodynamics), the flow profiles are very different and one can expect a concomitant effect on convective heat transfer. For fully developed laminar flows, the magnetohydrodynamic problem can be characterised in terms of two coupled partial differential equations. The problem of heat transfer for perfectly electrically insulating boundaries (the Shercliff case) has been studied previously (Bluck et al., 2015). In this paper, we demonstrate corresponding analytical solutions for the case of conducting Hartmann walls of arbitrary thickness. The flow is very different from the Shercliff case, exhibiting jets near the side walls and core-flow suppression, which have profound effects on heat transfer.

  12. Study of the Riemann problem and construction of multidimensional Godunov-type schemes for two-phase flow models

    International Nuclear Information System (INIS)

    Toumi, I.

    1990-04-01

    This thesis is devoted to the study of the Riemann problem and the construction of Godunov-type numerical schemes for one- or two-dimensional two-phase flow models. In the first part, we study the Riemann problem for the well-known drift-flux model, which has been widely used for the analysis of thermal-hydraulic transients. We then use this study to construct approximate Riemann solvers, and we describe the corresponding Godunov-type schemes for a simplified equation of state. For the computation of complex two-phase flows, a weak formulation of Roe's approximate Riemann solver, which gives a method to construct a Roe-averaged Jacobian matrix with a general equation of state, is proposed. For two-dimensional flows, the developed methods are based upon an approximate solver for a two-dimensional Riemann problem, following Harten-Lax-Van Leer principles. The numerical results for standard test problems show the good behaviour of these numerical schemes for a wide range of flow conditions [fr]

  13. Multilevel Markov chain Monte Carlo method for high-contrast single-phase flow problems

    KAUST Repository

    Efendiev, Yalchin R.

    2014-12-19

    In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
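
    Independently of GMsFEM, the level-by-level telescoping sum that MLMC relies on can be illustrated with a stand-in solver, as in the sketch below; the function quantity_of_interest, the decay of its discretization error and the sample counts are invented for illustration and are not the flow solver or level hierarchy of the paper.

        # Sketch of a multilevel Monte Carlo estimator:
        # E[Q_L] = E[Q_0] + sum_{l=1..L} E[Q_l - Q_{l-1}], estimated level by level.
        import math
        import random

        def quantity_of_interest(theta, level):
            """Stand-in solver: discretization error shrinks as the level increases."""
            return theta ** 2 + 2.0 ** (-(level + 1)) * math.sin(20.0 * theta)

        def mlmc_estimate(levels, samples_per_level, seed=1):
            random.seed(seed)
            total = 0.0
            for l, n in zip(range(levels + 1), samples_per_level):
                acc = 0.0
                for _ in range(n):
                    theta = random.uniform(0.0, 1.0)          # same random input on both levels
                    fine = quantity_of_interest(theta, l)
                    coarse = quantity_of_interest(theta, l - 1) if l > 0 else 0.0
                    acc += fine - coarse
                total += acc / n                              # mean correction at level l
            return total

        # Many cheap coarse samples, few expensive fine ones.
        print(mlmc_estimate(levels=3, samples_per_level=[4000, 1000, 250, 60]))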

  14. A discrete firefly meta-heuristic with local search for makespan minimization in permutation flow shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Nader Ghaffari-Nasab

    2010-07-01

    During the past two decades, there has been increasing interest in the permutation flow shop with different types of objective functions, such as minimizing the makespan, the weighted mean flow-time, etc. The permutation flow shop is formulated as a mixed integer program and is classified as an NP-hard problem. Therefore, a direct solution is not available and meta-heuristic approaches need to be used to find near-optimal solutions. In this paper, we present a new discrete firefly meta-heuristic to minimize the makespan for the permutation flow shop scheduling problem. The results of the implementation of the proposed method are compared with an existing ant colony optimization technique. The preliminary results indicate that the new proposed method performs better than the ant colony approach for some well-known benchmark problems.

  15. Study of flow over object problems by a nodal discontinuous Galerkin-lattice Boltzmann method

    Science.gov (United States)

    Wu, Jie; Shen, Meng; Liu, Chen

    2018-04-01

    Flow over object problems are studied by a nodal discontinuous Galerkin-lattice Boltzmann method (NDG-LBM) in this work. Different from the standard lattice Boltzmann method, the current method applies the nodal discontinuous Galerkin method to the streaming process in the LBM to solve the resulting pure convection equation, in which the spatial discretization is performed on unstructured grids and the low-storage explicit Runge-Kutta scheme is used for time marching. The present method thus overcomes the dependence of the standard LBM on uniform meshes. Moreover, the collision process in the LBM is completed by using the multiple-relaxation-time scheme. After validation of the NDG-LBM by simulating the lid-driven cavity flow, simulations of flows over a fixed circular cylinder, a stationary airfoil and rotating-stationary cylinders are performed. Good agreement of the present results with previous results is achieved, which indicates that the current NDG-LBM is accurate and effective for flow over object problems.

  16. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using a Stochastic Ensemble Method (SEM).

  17. Two phase flow problems in power station boilers

    International Nuclear Information System (INIS)

    Firman, E.C.

    1974-01-01

    The paper outlines some of the waterside thermal and hydrodynamic phenomena relating to the design and operation of large boilers in central power stations. The associated programme of work is described, with an outline of some results already obtained. By way of introduction, the principal features of conventional and nuclear drum boilers and once-through nuclear heat exchangers are described in so far as they pertain to this area of work. This is followed by a discussion of the relevant physical phenomena and problems which arise. For example, the problem of steam entrainment from the drum into the tubes connecting it to the furnace wall tubes is related to its effects on circulation and possible mechanisms of tube failure. Other problems concern the transients associated with start-up or low-load operation of plant. The requirement for improved mathematical representation of steady and dynamic performance is mentioned, together with the corresponding need for data on heat transfer, pressure loss, hydrodynamic stability, consequences of deposits, etc. The paper concludes with reference to the work being carried out within the C.E.G.B. in relation to the above problems. The facilities employed and the specific studies being made on them are described: these range from field trials on operational boilers to small-scale laboratory investigations of underlying two-phase flow mechanisms, and include high-pressure water rigs and a Freon rig for simulation studies.

  18. An open-flow pulse ionization chamber for alpha spectrometry of large-area samples

    International Nuclear Information System (INIS)

    Johansson, L.; Roos, B.; Samuelsson, C.

    1992-01-01

    The presented open-flow pulse ionization chamber was developed to make alpha spectrometry on large-area surfaces easy. One side of the chamber is left open, where the sample is to be placed. The sample acts as a chamber wall and thereby defines the detector volume. The sample area can be as large as 400 cm2. To prevent air from entering the volume there is a constant gas flow through the detector, coming in at the bottom of the chamber and leaking out at the sides of the sample. The method results in good energy resolution and has considerable applicability in retrospective radon research. Alpha spectra obtained in the retrospective measurements originate from 210Po, built up in the sample from the radon daughters recoiled into a glass surface. (au)

  19. Description of internal flow problems by a boundary integral method with dipole panels

    International Nuclear Information System (INIS)

    Krieg, R.; Hailfinger, G.

    1979-01-01

    In reactor safety studies, the failure of single components is postulated or sudden accident loadings are assumed, and the consequences are investigated. Often, as a first consequence, highly transient three-dimensional flow problems occur. In contrast to classical flow problems, in most of the above cases the fluid velocities are relatively small whereas the accelerations assume high values. As a consequence, both viscosity effects and dynamic pressures, which are proportional to the square of the fluid velocities, are usually negligible. For cases where the excitation times are considerably longer than the times necessary for a wave to traverse characteristic regions of the fluid field, the fluid compressibility is also negligible. Under these conditions, boundary integral methods are an appropriate tool to deal with the problem. Flow singularities are distributed over the fluid boundaries in such a way that pressure and velocity fields are obtained which satisfy the boundary conditions. In order to facilitate the numerical treatment, the fluid boundaries are approximated by a finite number of panels with uniform singularity distributions on each of them. Consequently, the pressure and velocity field of the given problem may be obtained by superposition of the corresponding fields due to these panels, with their singularity intensities as unknown factors. Satisfying the boundary conditions at as many boundary points as panels have been introduced then yields a system of linear equations which, in general, allows for a unique determination of the unknown intensities. (orig./RW)

  20. A service flow model for the liner shipping network design problem

    DEFF Research Database (Denmark)

    Plum, Christian Edinger Munk; Pisinger, David; Sigurd, Mikkel M.

    2014-01-01

    The formulation alleviates issues faced by arc flow formulations with regard to handling multiple calls to the same port, a problem which has not been fully dealt with earlier by LSNDP formulations. Multiple calls are handled by introducing service nodes, together with port nodes, in a graph representation ... of the network and a penalty for cargo that is not flowed. The model can be used to design liner shipping networks to utilize a container carrier’s assets efficiently and to investigate possible scenarios of changed market conditions. The model is solved as a Mixed Integer Program. Results are presented for the two...

  1. From "E-flows" to "Sed-flows": Managing the Problem of Sediment in High Altitude Hydropower Systems

    Science.gov (United States)

    Gabbud, C.; Lane, S. N.

    2017-12-01

    The connections between stream hydraulics, geomorphology and ecosystems in mountain rivers have been substantially perturbed by humans, for example through flow regulation related to hydropower activities. It is well known that the ecosystem impacts downstream of hydropower dams may be managed by a properly designed compensation release or environmental flows ("e-flows"), and such flows may also include sediment considerations (e.g. to break up bed armor). However, there has been much less attention given to the ecosystem impacts of water intakes (where water is extracted and transferred for storage and/or power production), even though in many mountain systems such intakes may be prevalent. Flow intakes tend to be smaller than dams and because they fill quickly in the presence of sediment delivery, they often need to be flushed, many times within a day in Alpine glaciated catchments with high sediment yields. The associated short duration "flood" flow is characterised by very high sediment concentrations, which may drastically modify downstream habitat, both during the floods but also due to subsequent accumulation of "legacy" sediment. The impacts on flora and fauna of these systems have not been well studied. In addition, there are no guidelines established that might allow the design of "e-flows" that also treat this sediment problem, something we call "sed-flows". Through an Alpine field example, we quantify the hydrological, geomorphological, and ecosystem impacts of Alpine water transfer systems. The high sediment concentrations of these flushing flows lead to very high rates of channel disturbance downstream, superimposed upon long-term and progressive bed sediment accumulation. Monthly macroinvertebrate surveys over almost a two-year period showed that reductions in the flushing rate reduced rates of disturbance substantially, and led to rapid macroinvertebrate recovery, even in the seasons (autumn and winter) when biological activity should be reduced

  2. Problem and pathological gambling in a sample of casino patrons.

    Science.gov (United States)

    Fong, Timothy W; Campos, Michael D; Brecht, Mary-Lynn; Davis, Alice; Marco, Adrienne; Pecanha, Viviane; Rosenthal, Richard J

    2011-03-01

    Relatively few studies have examined gambling problems among individuals in a casino setting. The current study sought to examine the prevalence of gambling problems among a sample of casino patrons and examine alcohol and tobacco use, health status, and quality of life by gambling problem status. To these ends, 176 casino patrons were recruited by going to a Southern California casino and requesting that they complete an anonymous survey. Results indicated the following lifetime rates for at-risk, problem, and pathological gambling: 29.2, 10.7, and 29.8%. Differences were found with regards to gambling behavior, and results indicated higher rates of smoking among individuals with gambling problems, but not higher rates of alcohol use. Self-rated quality of life was lower among pathological gamblers relative to non-problem gamblers, but did not differ from at-risk or problem gamblers. Although subject to some limitations, our data support the notion of higher frequency of gambling problems among casino patrons and may suggest the need for increased interventions for gambling problems on-site at casinos.

  3. Finite element approximation to a model problem of transonic flow

    International Nuclear Information System (INIS)

    Tangmanee, S.

    1986-12-01

    A model problem of transonic flow, the Tricomi equation, posed in a domain Ω contained in ℝ² and bounded by a rectangular-curved boundary, is written in the form of symmetric positive differential equations. The finite element method is then applied. When the triangulation of the closure of Ω is made of quadrilaterals and the approximation space consists of Lagrange polynomials, we obtain error estimates. 14 refs, 1 fig

  4. Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.

    Science.gov (United States)

    Ćwik, Michał; Józefczyk, Jerzy

    2018-01-01

    An uncertain version of the permutation flow shop with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as this calculation is an NP-hard problem in itself. Moreover, the lower bound is used instead of solving the internal deterministic flow shop. A constructive heuristic algorithm is applied for the relaxed optimization problem. The algorithm is compared with other previously elaborated heuristic algorithms based on the evolutionary and the middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.

  5. Multi-frequency direct sampling method in inverse scattering problem

    Science.gov (United States)

    Kang, Sangwoo; Lambert, Marc; Park, Won-Kwang

    2017-10-01

    We consider the direct sampling method (DSM) for the two-dimensional inverse scattering problem. Although DSM is fast, stable, and effective, some phenomena remain unexplained by the existing results. We show that the imaging function of the direct sampling method can be expressed by a Bessel function of order zero. We also clarify the previously unexplained imaging phenomena and suggest a multi-frequency DSM to overcome the limitations of the traditional DSM. Our method is evaluated in simulation studies using both single and multiple frequencies.
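
    A sketch of the connection to the Bessel function mentioned above, in generic notation rather than the paper's: a far-field direct sampling indicator tests the data against a plane-wave factor, and integrating that factor over all observation directions in two dimensions reduces to J_0, so the indicator peaks as the sampling point z approaches a point y of the scatterer.

        \[
          I(z) \;=\;
          \frac{\bigl|\langle u_\infty(\hat{x}),\, e^{\,\mathrm{i}k\,\hat{x}\cdot z}\rangle_{L^2(\mathbb{S}^1)}\bigr|}
               {\lVert u_\infty\rVert_{L^2(\mathbb{S}^1)}},
          \qquad
          \int_{\mathbb{S}^1} e^{\,\mathrm{i}k\,\hat{x}\cdot(z-y)}\,\mathrm{d}s(\hat{x})
          \;=\; 2\pi\, J_0\bigl(k\,\lvert z-y\rvert\bigr).
        \]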

  6. A Local Search Algorithm for the Flow Shop Scheduling Problem with Release Dates

    Directory of Open Access Journals (Sweden)

    Tao Ren

    2015-01-01

    This paper discusses the flow shop scheduling problem of minimizing the makespan with release dates. By resequencing the jobs, a modified heuristic algorithm is obtained for handling large-sized problems. Moreover, based on some properties, a local search scheme is provided to improve the heuristic and gain high-quality solutions for moderate-sized problems. A sequence-independent lower bound is presented to evaluate the performance of the algorithms. A series of simulation results demonstrates the effectiveness of the proposed algorithms.

  7. The two-sample problem with induced dependent censorship.

    Science.gov (United States)

    Huang, Y

    1999-12-01

    Induced dependent censorship is a general phenomenon in health service evaluation studies in which a measure such as quality-adjusted survival time or lifetime medical cost is of interest. We investigate the two-sample problem and propose two classes of nonparametric tests. Based on consistent estimation of the survival function for each sample, the two classes of test statistics examine the cumulative weighted difference in hazard functions and in survival functions. We derive a unified asymptotic null distribution theory and inference procedure. The tests are applied to trial V of the International Breast Cancer Study Group and show that long duration chemotherapy significantly improves time without symptoms of disease and toxicity of treatment as compared with the short duration treatment. Simulation studies demonstrate that the proposed tests, with a wide range of weight choices, perform well under moderate sample sizes.

  8. Direct sampling methods for inverse elastic scattering problems

    Science.gov (United States)

    Ji, Xia; Liu, Xiaodong; Xi, Yingxia

    2018-03-01

    We consider the inverse elastic scattering of incident plane compressional and shear waves from the knowledge of the far field patterns. Specifically, three direct sampling methods for location and shape reconstruction are proposed, using the different components of the far field patterns. Only inner products are involved in the computation, thus the novel sampling methods are very simple and fast to implement. With the help of the factorization of the far field operator, we give a lower bound of the proposed indicator functionals for sampling points inside the scatterers, while for sampling points outside the scatterers we show that the indicator functionals decay like Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functionals depend continuously on the far field patterns, which further implies that the novel sampling methods are extremely stable with respect to data error. For the case when the observation directions are restricted to a limited aperture, we first introduce some data retrieval techniques to obtain those data that cannot be measured directly and then use the proposed direct sampling methods for location and shape reconstruction. Finally, some numerical simulations in two dimensions are conducted with noisy data, and the results further verify the effectiveness and robustness of the proposed sampling methods, even for multiple multiscale cases and limited-aperture problems.

  9. Are Flow Injection-based Approaches Suitable for Automated Handling of Solid Samples?

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald; Cerdà, Victor

    Flow-based approaches were originally conceived for liquid-phase analysis, implying that constituents in solid samples generally had to be transferred into the liquid state, via appropriate batch pretreatment procedures, prior to analysis. Yet, in recent years, much effort has been focused ... electrolytic or aqueous leaching, on-line dialysis/microdialysis, in-line filtration, and pervaporation-based procedures have been successfully implemented in continuous flow/flow injection systems. In this communication, the new generation of flow analysis, including sequential injection, multicommutated flow ... (e.g., soils, sediments, sludges), and thus ascertaining the potential mobility, bioavailability and eventual impact of anthropogenic elements on biota [2]. In this context, the principles of sequential injection-microcolumn extraction (SI-MCE) for dynamic fractionation are explained in detail along...

  10. Comparison between correlated sampling and the perturbation technique of MCNP5 for fixed-source problems

    International Nuclear Information System (INIS)

    He Tao; Su Bingjing

    2011-01-01

    Highlights:
    → The performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems.
    → In terms of precision, the MCNP perturbation technique outperforms correlated sampling for one type of problem but performs comparably with or even underperforms correlated sampling for the other two types of problems.
    → In terms of accuracy, the MCNP perturbation calculations may predict inaccurate results for some of the test problems. However, the accuracy can be improved if the midpoint correction technique is used.

    Abstract: Correlated sampling and the differential operator perturbation technique are two methods that enable MCNP (Monte Carlo N-Particle) to simulate small response changes between an original system and a perturbed system. In this work the performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems. In terms of the precision of the predicted response changes, the MCNP perturbation technique outperforms correlated sampling for the problem involving variation of nuclide concentrations in the same direction, but performs comparably with or even underperforms correlated sampling for the other two types of problems, which involve void or variation of nuclide concentrations in opposite directions. In terms of accuracy, the MCNP differential operator perturbation calculations may predict inaccurate results that deviate from the benchmarks well beyond their uncertainty ranges for some of the test problems. However, the accuracy of the MCNP differential operator perturbation can be improved if the midpoint correction technique is used.

  11. The Planar Sandwich and Other 1D Planar Heat Flow Test Problems in ExactPack

    Energy Technology Data Exchange (ETDEWEB)

    Singleton, Jr., Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-01-24

    This report documents the implementation of several related 1D heat flow problems in the verification package ExactPack [1]. In particular, the planar sandwich class defined in Ref. [2], as well as the classes PlanarSandwichHot, PlanarSandwichHalf, and other generalizations of the planar sandwich problem, are defined and documented here. A rather general treatment of 1D heat flow is presented, whose main results have been implemented in the class Rod1D. All planar sandwich classes are derived from the parent class Rod1D.

  12. A microfluidic needle for sampling and delivery of chemical signals by segmented flows

    Science.gov (United States)

    Feng, Shilun; Liu, Guozhen; Jiang, Lianmei; Zhu, Yonggang; Goldys, Ewa M.; Inglis, David W.

    2017-10-01

    We have developed a microfluidic needle-like device that can extract and deliver nanoliter samples. The device consists of a T-junction to form segmented flows, parallel channels to and from the needle tip, and seven hydrophilic capillaries at the tip that form a phase-extraction region. The main microchannel is hydrophobic and carries segmented flows of water-in-oil. The hydrophilic capillaries transport the aqueous phase with a nearly zero pressure gradient but require a pressure gradient of 19 kPa for mineral oil to invade and flow through. Using this device, we demonstrate the delivery of nanoliter droplets and demonstrate sampling through the formation of droplets at the tip of our device. During sampling, we recorded the fluorescence intensities of the droplets formed at the tip while varying the concentration of dye outside the tip. We measured a chemical signal response time of approximately 3 s. The linear relationship between the recorded fluorescence intensity of samples and the external dye concentration (10-40 μg/ml) indicates that this device is capable of performing quantitative, real-time measurements of rapidly varying chemical signals.

  13. A Bee Colony Optimization Approach for Mixed Blocking Constraints Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Mostafa Khorramizadeh

    2015-01-01

    The flow shop scheduling problem with mixed blocking constraints and minimization of makespan is investigated. Taguchi orthogonal arrays and path relinking, along with some efficient local search methods, are used to develop a metaheuristic algorithm based on bee colony optimization. In order to compare the performance of the proposed algorithm, two well-known test problems are considered. Computational results show that the presented algorithm performs competitively with well-known algorithms from the literature, especially for the large-sized problems.

  14. Digital Rock Simulation of Flow in Carbonate Samples

    Science.gov (United States)

    Klemin, D.; Andersen, M.

    2014-12-01

    Reservoir engineering has become more complex in dealing with current challenges, so core analysts must understand and model pore geometries and fluid behaviors at the pore scale more rapidly and realistically. We introduce an industry-unique direct hydrodynamic pore flow simulator that operates on pore geometries from digital rock models obtained using microCT or 3D scanning electron microscope (SEM) images. The PVT and rheological models used in the simulator represent real reservoir fluids. Fluid-solid interactions are introduced using distributed micro-scale wetting properties. The simulator uses a density functional approach applied to the hydrodynamics of complex systems. This talk covers selected applications of the simulator. We performed microCT scanning of six different carbonate rock samples, from homogeneous limestones to vuggy carbonates. From these, we constructed digital rock models representing pore geometries for the simulator. We simulated nonreactive tracer flow in all six digital models using a digital fluid description that included a passive tracer solution. During the simulation, we evaluated the composition of the effluent. The results of the tracer flow simulations corresponded well with experimental data from nonreactive tracer floods for the same carbonate rock types. These simulation data for non-reactive tracer flow can be used to calculate the volume of the rock accessible by the fluid, which can be further used to predict the response of a porous medium to a reactive fluid. The described digital core analysis workflow provides a basis for a wide variety of activities, including input to design acidizing jobs and evaluating treatment efficiency and EOR economics. Digital rock multiphase flow simulations of a scanned carbonate rock evaluated the effect of wettability on flow properties. Various wetting properties were tested: slightly oil wet, slightly water wet, and water wet. Steady-state relative permeability simulations yielded curves for all three

  15. On the Use of Importance Sampling in Particle Transport Problems

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, B

    1965-06-15

    The idea of importance sampling is applied to the problem of solving integral equations of Fredholm's type. In particular, Boltzmann's neutron transport equation is taken into consideration. For the solution of the latter equation, an importance sampling technique is derived from some simple transformations of the original transport equation into a similar equation. Examples of transformations are given which have been used with great success in practice.
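
    As a reminder of the basic idea (not of the report's transformations of the transport equation), the sketch below applies the standard change-of-measure estimator to a rare tail probability; the exponential densities, rate parameters and threshold are invented for illustration.

        # Importance sampling: estimate E_p[h(X)] by drawing from a biased density q
        # and weighting each draw by p(x)/q(x).  Here h is the indicator of a tail event.
        import math
        import random

        def exp_pdf(x, lam):
            return lam * math.exp(-lam * x)

        def tail_probability(n, lam_p=1.0, lam_q=0.25, threshold=6.0, seed=0):
            """Estimate P(X > threshold) for X ~ Exp(lam_p) using draws from Exp(lam_q)."""
            random.seed(seed)
            acc = 0.0
            for _ in range(n):
                x = random.expovariate(lam_q)                      # biased draw (heavier tail)
                if x > threshold:
                    acc += exp_pdf(x, lam_p) / exp_pdf(x, lam_q)   # importance weight
            return acc / n

        print(tail_probability(100000), math.exp(-6.0))            # estimate vs exact e^{-6}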

  16. On the Use of Importance Sampling in Particle Transport Problems

    International Nuclear Information System (INIS)

    Eriksson, B.

    1965-06-01

    The idea of importance sampling is applied to the problem of solving integral equations of Fredholm's type. In particular, Boltzmann's neutron transport equation is taken into consideration. For the solution of the latter equation, an importance sampling technique is derived from some simple transformations of the original transport equation into a similar equation. Examples of transformations are given which have been used with great success in practice.

  17. ON SAMPLING BASED METHODS FOR THE DUBINS TRAVELING SALESMAN PROBLEM WITH NEIGHBORHOODS

    Directory of Open Access Journals (Sweden)

    Petr Váňa

    2015-12-01

    In this paper, we address the problem of path planning to visit a set of regions with a Dubins vehicle, which is also known as the Dubins Traveling Salesman Problem with Neighborhoods (DTSPN). We propose a modification of the existing sampling-based approach to determine an increasing number of samples per goal region and thus improve the solution quality when more computational time is available. The proposed modification of the sampling-based algorithm has been compared with the performance of existing approaches for the DTSPN, and results on the quality of the found solutions and the required computational time are presented in the paper.

  18. Development of a split-flow system for high precision variable sample introduction in supercritical fluid chromatography.

    Science.gov (United States)

    Sakai, Miho; Hayakawa, Yoshihiro; Funada, Yasuhiro; Ando, Takashi; Fukusaki, Eiichiro; Bamba, Takeshi

    2017-09-15

    In this study, we propose a novel variable sample injection system based on full-loop injection, named the split-flow sample introduction system, for application in supercritical fluid chromatography (SFC). In this system, the mobile phase is split by the differential pressure between two back pressure regulators (BPRs) after a full-loop injection suitable for SFC, and this differential pressure determines the introduction rate. Nine compounds with a wide range of characteristics were introduced with high reproducibility and universality, confirming that a robust variable sample injection system was achieved. We also investigated the control factors of our proposed system. Sample introduction was controlled by the ratio between the column-side pressure drops in splitless and split flow, ΔP(column side, splitless) and ΔP(column side, split), respectively, where ΔP(column side, splitless) is related to the mobile phase flow rate and composition and the column resistance. When all other conditions are kept constant, increasing the make-up flow induces an additional pressure drop on the column side of the system, which leads to a reduced column-side flow rate and hence decreases the amount of sample injected, even when the net pressure drop on the column side remains the same. Thus, sample introduction could be highly controlled at low sample introduction rates, regardless of the introduction conditions. This feature is advantageous because, as a control factor, the solvent in the make-up pump is independent of the column-side pressure drop. Copyright © 2017. Published by Elsevier B.V.

  19. AN APPLICATION OF FLOW INJECTION ANALYSIS WITH GAS DIFFUSION AND SPECTROPHOTOMETRIC DETECTION FOR THE MONITORING OF DISSOLVED SULPHIDE CONCENTRATION IN ENVIRONMENTAL SAMPLES

    Directory of Open Access Journals (Sweden)

    Malwina Cykowska

    2014-10-01

    Monitoring the concentration of sulphide is very important from an environmental point of view because of the high toxicity of hydrogen sulphide. Moreover, hydrogen sulphide is an important pollution indicator. In many cases the determination of sulphide is very difficult due to the complicated matrix of some environmental samples, which means that most analytical methods cannot be used. Flow injection analysis makes it possible to avoid the matrix problem, which makes it suitable for a wide range of applications in analytical laboratories. In this paper the determination of dissolved sulphide in environmental samples by gas-diffusion flow injection analysis with spectrophotometric detection is presented. The gas-diffusion separation used ensures the elimination of interferences caused by the sample matrix and allows the determination of sulphides in coloured and turbid samples. Studies to optimize the measurement conditions and to determine the values of the validation parameters (e.g. limit of detection, limit of quantification, precision, accuracy) were carried out. The obtained results confirm the usefulness of the method for monitoring the concentration of dissolved sulphides in water and waste water. Full automation and work in a closed system greatly reduce the time of analysis, minimize the consumption of sample and reagents and increase the safety of the analyst’s work.

  20. The electron transport problem sampling by Monte Carlo individual collision technique

    International Nuclear Information System (INIS)

    Androsenko, P.A.; Belousov, V.I.

    2005-01-01

    The problem of electron transport is of great interest in all fields of modern science. To solve this problem, Monte Carlo sampling has to be used. Electron transport is characterized by a large number of individual interactions. To simulate electron transport, the 'condensed history' technique may be used, where a large number of collisions are grouped into a single step to be sampled randomly. Another kind of Monte Carlo sampling is the individual collision technique. In comparison with the condensed history technique, the researcher has incontestable advantages. For example, one does not need to specify the parameters required by the condensed history technique, such as the upper limit for electron energy, the resolution, the number of sub-steps, etc. The condensed history technique may also lose some very important electron tracks because it is limited by the step parameters of particle movement and by weaknesses of its algorithms, for example the energy indexing algorithm. The individual collision technique has none of these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code where the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which are an important part of BRAND. These files have not been processed but are taken directly from the electron information source. Four kinds of interaction were considered: elastic interaction, Bremsstrahlung, atomic excitation and atomic electro-ionization. In this report some results of sampling are presented after comparison with analogues. For example, the endovascular radiotherapy problem (P2) of QUADOS2002 is presented in comparison with other techniques that are usually used. (authors)

  1. A filtering technique for solving the advection equation in two-phase flow problems

    International Nuclear Information System (INIS)

    Devals, C.; Heniche, M.; Bertrand, F.; Tanguy, P.A.; Hayes, R.E.

    2004-01-01

    The aim of this work is to develop a numerical strategy for the simulation of two-phase flow in the context of chemical engineering applications. The finite element method has been chosen because of its flexibility to deal with complex geometries. One of the key points of two-phase flow simulation is to determine precisely the position of the interface between the two phases, which is an unknown of the problem. In this case, the interface can be tracked by the advection of the so-called color function. It is well known that the solution of the advection equation by most numerical schemes, including the Streamline Upwind Petrov-Galerkin (SUPG) method, may exhibit spurious oscillations. This work proposes an approach to filter out these oscillations by means of a change of variable that is efficient for both steady state and transient cases. First, the filtering technique will be presented in detail. Then, it will be applied to two-dimensional benchmark problems, namely, the advection skew to the mesh and the Zalesak's problems. (author)

  2. A New Spectral Local Linearization Method for Nonlinear Boundary Layer Flow Problems

    Directory of Open Access Journals (Sweden)

    S. S. Motsa

    2013-01-01

    We propose a simple and efficient method for solving highly nonlinear systems of boundary layer flow problems with exponentially decaying profiles. The algorithm of the proposed method is based on an innovative idea of linearizing and decoupling the governing systems of equations and reducing them into a sequence of subsystems of differential equations which are solved using spectral collocation methods. The applicability of the proposed method, hereinafter referred to as the spectral local linearization method (SLLM), is tested on some well-known boundary layer flow equations. The numerical results presented in this investigation indicate that the proposed method, despite being easy to develop and numerically implement, is very robust in that it converges rapidly to yield accurate results and is more efficient in solving very large systems of nonlinear boundary value problems of the similarity variable boundary layer type. The accuracy and numerical stability of the SLLM can further be improved by using successive overrelaxation techniques.

  3. Performance of Reynolds Averaged Navier-Stokes Models in Predicting Separated Flows: Study of the Hump Flow Model Problem

    Science.gov (United States)

    Cappelli, Daniele; Mansour, Nagi N.

    2012-01-01

    Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.

  4. Privacy problems in the small sample selection

    Directory of Open Access Journals (Sweden)

    Loredana Cerbara

    2013-05-01

    The side of social research that uses small samples for the production of micro data today encounters some operational difficulties due to the privacy law. The privacy code is an important and necessary law because it guarantees the rights of Italian citizens, as already happens in other countries of the world. However, it does not seem appropriate to further limit the data production possibilities of the national research centres, possibilities that are moreover already compromised by insufficient funds, a problem becoming more and more frequent in the research field. It would therefore be necessary to include in the law the possibility of using telephone lists to select samples useful for activities of direct interest and importance to the citizen, such as data collection carried out through opinion polls by the research centres of the Italian CNR and some universities.

  5. A Priority Rule-Based Heuristic for Resource Investment Project Scheduling Problem with Discounted Cash Flows and Tardiness Penalties

    Directory of Open Access Journals (Sweden)

    Amir Abbas Najafi

    2009-01-01

    The resource investment problem with discounted cash flows (RIPDCF) is a class of project scheduling problem. In the RIPDCF, the availability levels of the resources are considered decision variables, and the goal is to find a schedule such that the net present value of the project cash flows is optimized. In this paper, we consider a new RIPDCF in which tardiness of the project is permitted with a defined penalty. We mathematically formulate the problem and develop a heuristic method to solve it. The results of the performance analysis of the proposed method show that it is an effective solution approach to the problem.
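
    The net-present-value objective mentioned above amounts to discounting every cash flow back to time zero by its schedule-dependent occurrence time; the bare-bones version below uses one common convention (continuous discounting) and invented numbers, and does not reproduce the paper's exact discounting scheme.

        # Net present value of project cash flows, discounted continuously to time zero;
        # in a scheduling context the event times would come from the schedule under evaluation.
        import math

        def npv(cash_flows, rate):
            """cash_flows: list of (time, amount); rate: continuous discount rate."""
            return sum(amount * math.exp(-rate * t) for t, amount in cash_flows)

        print(round(npv([(2.0, 100.0), (5.0, 250.0), (9.0, -80.0)], rate=0.05), 2))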

  6. A State-of-the-Art Review of the Sensor Location, Flow Observability, Estimation, and Prediction Problems in Traffic Networks

    Directory of Open Access Journals (Sweden)

    Enrique Castillo

    2015-01-01

    A state-of-the-art review of flow observability, estimation, and prediction problems in traffic networks is performed. Since mathematical optimization provides a general framework for all of them, an integrated approach is used to perform the analysis of these problems and consider them as different optimization problems whose data, variables, constraints, and objective functions are the main elements that characterize the problems proposed by different authors. For example, counted, scanned or “a priori” data are the most common data sources; conservation laws, flow nonnegativity, link capacity, flow definition, observation, flow propagation, and specific model requirements form the most common constraints; and least squares, likelihood, possible relative error, mean absolute relative error, and so forth constitute the bases for the objective functions or metrics. The high number of possible combinations of these elements justifies the existence of a wide collection of methods for analyzing static and dynamic situations.
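
    Many of the estimation problems surveyed above combine counted data with conservation constraints and a least-squares objective; the toy fit below, on a single three-link junction with invented counts and a softly weighted conservation row, is only meant to make that structure concrete.

        # Toy flow estimation: fit link flows to noisy counts while softly enforcing
        # conservation (flow in = flow out) at one node, via ordinary least squares.
        import numpy as np

        counts = np.array([95.0, 52.0, 40.0])       # counted flows on links 1, 2, 3 (invented)
        A = np.array([
            [1.0, 0.0, 0.0],                        # match count on link 1
            [0.0, 1.0, 0.0],                        # match count on link 2
            [0.0, 0.0, 1.0],                        # match count on link 3
            [10.0, -10.0, -10.0],                   # conservation x1 - x2 - x3 = 0 (weighted)
        ])
        b = np.concatenate([counts, [0.0]])
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(np.round(x, 1))                       # estimated link flows with in ≈ out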

  7. A direct sampling method to an inverse medium scattering problem

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2012-01-01

    In this work we present a novel sampling method for time harmonic inverse medium scattering problems. It provides a simple tool to directly estimate the shape of the unknown scatterers (inhomogeneous media), and it is applicable even when

  8. Marine sampling in Malaysia coastal area: the challenge, problems and solution

    International Nuclear Information System (INIS)

    Norfaizal Mohamed; Khairul Nizam Razali; Mohd Rafaie Mohd Murtadza; Muhammad Amin Abdul Ghani; Zaharudin Ahmad; Abdul Kadir Ishak

    2005-01-01

    The Malaysia Marine Radioactivity Database Development Project is one of the five research contracts that were signed between MINT and AELB. Three marine sampling expeditions were carried out using the K.L. PAUS vessel owned by the Malaysian Fisheries Institute, Chendering, Terengganu. The first marine sampling expedition took place in the waters off the east coast of Peninsular Malaysia in August 2003, followed in February 2004 by the waters off the west coast of Peninsular Malaysia, and lastly the Sarawak-Sabah waters in July 2004. Many challenges and problems were faced when collecting sediment, water, biota and plankton samples during these marine sampling expeditions. (Author)

  9. Evaluating the performance of constructive heuristics for the blocking flow shop scheduling problem with setup times

    Directory of Open Access Journals (Sweden)

    Mauricio Iwama Takano

    2019-01-01

    This paper addresses the minimization of makespan for the permutation flow shop scheduling problem with blocking and sequence- and machine-dependent setup times, a problem not addressed in previous studies. The 14 best-known heuristics for the permutation flow shop problem with blocking and no setup times are presented and then adapted to the problem in two different ways, resulting in 28 different heuristics. The heuristics are then compared using the Taillard database. As there is no other work that addresses the problem with blocking and sequence- and machine-dependent setup times, a database for the setup times was created. The setup time value was uniformly distributed between 1% and 10%, 50%, 100% and 125% of the processing time value. Computational tests are then presented for each of the 28 heuristics, comparing the mean relative deviation of the makespan, the computational time and the percentage of successes of each method. Results show that the heuristics were capable of providing interesting results.
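
    The comparison metric used above, the mean relative deviation of the makespan, is usually computed per instance as a percentage gap to the best value found and then averaged over instances; a one-line version with invented numbers:

        # Relative percentage deviation of a heuristic makespan from the best known value.
        def rpd(makespan, best):
            return 100.0 * (makespan - best) / best

        print(round(rpd(1325.0, 1278.0), 2))   # about 3.68 % above the best known makespan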

  10. The electron transport problem sampling by Monte Carlo individual collision technique

    Energy Technology Data Exchange (ETDEWEB)

    Androsenko, P.A.; Belousov, V.I. [Obninsk State Technical Univ. of Nuclear Power Engineering, Kaluga region (Russian Federation)]

    2005-07-01

    The problem of electron transport is of great interest in all fields of modern science. To solve this problem, Monte Carlo sampling has to be used. Electron transport is characterized by a large number of individual interactions. To simulate electron transport, the 'condensed history' technique may be used, where a large number of collisions are grouped into a single step to be sampled randomly. Another kind of Monte Carlo sampling is the individual collision technique. In comparison with the condensed history technique, the researcher has incontestable advantages. For example, one does not need to specify the parameters required by the condensed history technique, such as the upper limit for electron energy, the resolution, the number of sub-steps, etc. The condensed history technique may also lose some very important electron tracks because it is limited by the step parameters of particle movement and by weaknesses of its algorithms, for example the energy indexing algorithm. The individual collision technique has none of these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code where the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which are an important part of BRAND. These files have not been processed but are taken directly from the electron information source. Four kinds of interaction were considered: elastic interaction, Bremsstrahlung, atomic excitation and atomic electro-ionization. In this report some results of sampling are presented after comparison with analogues. For example, the endovascular radiotherapy problem (P2) of QUADOS2002 is presented in comparison with other techniques that are usually used. (authors)

  11. The Average Network Flow Problem: Shortest Path and Minimum Cost Flow Formulations, Algorithms, Heuristics, and Complexity

    Science.gov (United States)

    2012-09-13

    Value-focused thinking (VFT) is used sparingly, as is the case across the entirety of the supply chain literature. We provide a VFT tutorial for supply chain ...

  12. Vectorization on the star computer of several numerical methods for a fluid flow problem

    Science.gov (United States)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.

  13. Self-organizing hybrid Cartesian grid generation and application to external and internal flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Deister, F.; Hirschel, E.H. [Univ. Stuttgart, IAG, Stuttgart (Germany); Waymel, F.; Monnoyer, F. [Univ. de Valenciennes, LME, Valenciennes (France)

    2003-07-01

    An automatic adaptive hybrid Cartesian grid generation and simulation system is presented together with applications. The primary computational grid is an octree Cartesian grid. A quasi-prismatic grid may be added for resolving the boundary layer region of viscous flow around the solid body. For external flow simulations the flow solver TAU from the Deutsches Zentrum fuer Luft- und Raumfahrt (DLR) is integrated in the simulation system. Coarse grids, which are required by the multilevel method, are generated automatically. As an application to an internal problem, the thermal and dynamic modeling of a subway station is presented. (orig.)

  14. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    Science.gov (United States)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received most of the attention, whereas the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying time lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed, consisting of an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of the traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of the traditional GA approach. The proposed research combines the PFSP and non-PFSP with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
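
    The record describes the two algorithms only at a high level. The generic iterated greedy skeleton they build on (destroy part of the sequence, rebuild it greedily, accept or reject the result) can be sketched as follows; the makespan callable, the destruction size and the acceptance rule are placeholder assumptions rather than details taken from the paper:

      import random

      def iterated_greedy(jobs, makespan, d=4, iterations=1000):
          """Generic iterated greedy: remove d jobs, reinsert greedily, keep improvements.

          makespan: callable evaluating a job sequence; it is assumed to encode the
          time-lag (and any other) constraints internally.
          """
          current = list(jobs)
          current_cost = makespan(current)
          best, best_cost = list(current), current_cost
          for _ in range(iterations):
              # Destruction: remove d randomly chosen jobs from the sequence.
              partial = list(current)
              removed = [partial.pop(random.randrange(len(partial))) for _ in range(d)]
              # Construction: reinsert each removed job at its best position (NEH-style).
              for job in removed:
                  candidates = [partial[:i] + [job] + partial[i:] for i in range(len(partial) + 1)]
                  partial = min(candidates, key=makespan)
              cost = makespan(partial)
              if cost <= current_cost:  # simple acceptance criterion
                  current, current_cost = partial, cost
                  if cost < best_cost:
                      best, best_cost = list(partial), cost
          return best, best_cost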

  15. The Cauchy problem for a model of immiscible gas flow with large data

    Energy Technology Data Exchange (ETDEWEB)

    Sande, Hilde

    2008-12-15

    The thesis consists of an introduction and two papers: 1. The solution of the Cauchy problem with large data for a model of a mixture of gases. 2. Front tracking for a model of immiscible gas flow with large data. (AG) refs, figs

  16. Parent-reported feeding and feeding problems in a sample of Dutch toddlers

    NARCIS (Netherlands)

    Moor, J.M.H. de; Didden, H.C.M.; Korzilius, H.P.L.M.

    2007-01-01

    Little is known about the feeding behaviors and problems with feeding in toddlers. In the present questionnaire study, data were collected on the feeding behaviors and feeding problems in a relatively large (n = 422) sample of Dutch healthy toddlers (i.e. 18-36 months old) who lived at home with their parents.

  17. Development of a flow controller for long-term sampling of gases and vapors using evacuated canisters.

    Science.gov (United States)

    Rossner, Alan; Farant, Jean Pierre; Simon, Philippe; Wick, David P

    2002-11-15

    Anthropogenic activities contribute to the release of a wide variety of volatile organic compounds (VOC) into microenvironments. Developing and implementing new air sampling technologies that allow for the characterization of exposures to VOC can be useful for evaluating environmental and health concerns arising from such occurrences. A novel air sampler based on the use of a capillary flow controller connected to evacuated canisters (300 mL, 1 and 6 L) was designed and tested. The capillary tube, used to control the flow of air, is a variation on a sharp-edge orifice flow controller. It essentially controls the velocity of the fluid (air) as a function of the properties of the fluid, tube diameter and length. A model to predict flow rate in this dynamic system was developed. The mathematical model presented here was developed using the Hagen-Poiseuille equation and the ideal gas law to predict flow into the canisters used to sample for long periods of time. The Hagen-Poiseuille equation shows the relationship between flow rate, pressure gradient, capillary resistance, fluid viscosity, capillary length and diameter. The flow rates evaluated were extremely low, ranging from 0.05 to 1 mL min(-1). The model was compared with experimental results and was shown to overestimate the flow rate. Empirical equations were developed to more accurately predict flow for the 300 mL, 1 and 6 L canisters used for sampling periods ranging from several hours to one month. The theoretical and observed flow rates for different capillary geometries were evaluated. Each capillary flow controller geometry that was tested was found to generate very reproducible results (low relative standard deviation), with the collected samples analyzed by gas chromatography. The capillary flow controller was found to exceed the performance of the sorbent samplers in this comparison.
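
    The abstract describes the predictive model only in words. A simplified sketch of how the Hagen-Poiseuille equation and the ideal gas law can be combined to predict the fill-up of an evacuated canister is given below; the function name, the explicit time stepping and all default parameter values are illustrative assumptions, and the authors' empirical corrections are not reproduced:

      import math

      def canister_fill_curve(d, L, V, hours, T=293.15, P_atm=101325.0, mu=1.8e-5, dt=60.0):
          """Integrate a simplified Hagen-Poiseuille + ideal-gas model of canister filling.

          d, L: capillary inner diameter and length (m); V: canister volume (m^3).
          Returns (time in h, canister pressure in Pa, inlet flow in mL/min) once per hour.
          """
          R = 8.314
          P_can = 100.0                    # starting pressure of the evacuated canister (Pa)
          n = P_can * V / (R * T)          # moles initially in the canister (ideal gas law)
          history = []
          steps = int(hours * 3600 / dt)
          per_hour = int(3600 / dt)
          for k in range(steps + 1):
              dP = P_atm - P_can
              # Volumetric flow at ambient conditions from laminar Hagen-Poiseuille flow.
              Q = math.pi * d**4 * dP / (128.0 * mu * L)
              if k % per_hour == 0:
                  history.append((k * dt / 3600.0, P_can, Q * 6.0e7))  # m^3/s -> mL/min
              n += (P_atm * Q / (R * T)) * dt   # ideal gas law converts intake volume to moles
              P_can = n * R * T / V
          return history

    With, for example, a 50 micrometre by 10 cm capillary and a 6 L canister, canister_fill_curve(5e-5, 0.1, 6e-3, 24) starts near 0.5 mL/min, inside the 0.05-1 mL/min range quoted in the abstract, and the predicted flow decays as the canister pressure rises.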

  18. Global analysis of the temperature and flow fields in samples heated in multizone resistance furnaces

    Science.gov (United States)

    Pérez-Grande, I.; Rivas, D.; de Pablo, V.

    The temperature field in samples heated in multizone resistance furnaces will be analyzed, using a global model where the temperature fields in the sample, the furnace and the insulation are coupled; the input thermal data is the electric power supplied to the heaters. The radiation heat exchange between the sample and the furnace is formulated analytically, taking into account specular reflections at the sample; for the solid sample the reflectance is both diffuse and specular, and for the melt it is mostly specular. This behavior is modeled through the exchange view factors, which depend on whether the sample is solid or liquid, and, therefore, they are not known a priori. The effect of this specular behavior on the temperature field will be analyzed, by comparing with the case of diffuse samples. A parameter of great importance is the thermal conductivity of the insulation material; it will be shown that the temperature field depends strongly on it. A careful characterization of the insulation is therefore necessary; here it is done with the aid of experimental results, which also serve to validate the model. The heating process in the floating-zone technique in microgravity conditions will be simulated; parameters like the Marangoni number or the temperature gradient at the melt-crystal interface will be estimated. Application to the case of compound samples (graphite-silicon-graphite) will be made; the temperature distribution in the silicon part will be studied, especially the temperature difference between the two graphite rods that hold the silicon, since it drives the thermocapillary flow in the melt. This flow will be studied, after coupling the previous model with the convective effects. The possibility of suppressing this flow by the controlled vibration of the graphite rods will also be analyzed. Numerical results show that the thermocapillary flow can indeed be counterbalanced quite effectively.

  19. Predicting flow at work: investigating the activities and job characteristics that predict flow states at work.

    Science.gov (United States)

    Nielsen, Karina; Cleal, Bryan

    2010-04-01

    Flow (a state of consciousness where people become totally immersed in an activity and enjoy it intensely) has been identified as a desirable state with positive effects for employee well-being and innovation at work. Flow has been studied using both questionnaires and the Experience Sampling Method (ESM). In this study, we used a newly developed 9-item flow scale in an ESM study combined with a questionnaire to examine the predictors of flow at two levels: the activities (brainstorming, planning, problem solving and evaluation) associated with transient flow states and the more stable job characteristics (role clarity, influence and cognitive demands). Participants were 58 line managers from two companies in Denmark: a private accountancy firm and a public elder care organization. We found that line managers in elder care experienced flow more often than accountancy line managers, and activities such as planning, problem solving, and evaluation predicted transient flow states. The more stable job characteristics included in this study were not, however, found to predict flow at work.

  20. Design and characterization of poly(dimethylsiloxane)-based valves for interfacing continuous-flow sampling to microchip electrophoresis.

    Science.gov (United States)

    Li, Michelle W; Huynh, Bryan H; Hulvey, Matthew K; Lunte, Susan M; Martin, R Scott

    2006-02-15

    This work describes the fabrication and evaluation of a poly(dimethyl)siloxane (PDMS)-based device that enables the discrete injection of a sample plug from a continuous-flow stream into a microchannel for subsequent analysis by electrophoresis. Devices were fabricated by aligning valving and flow channel layers followed by plasma sealing the combined layers onto a glass plate that contained fittings for the introduction of liquid sample and nitrogen gas. The design incorporates a reduced-volume pneumatic valve that actuates (on the order of hundreds of milliseconds) to allow analyte from a continuously flowing sampling channel to be injected into a separation channel for electrophoresis. The injector design was optimized to include a pushback channel to flush away stagnant sample associated with the injector dead volume. The effect of the valve actuation time, the pushback voltage, and the sampling stream flow rate on the performance of the device was characterized. Using the optimized design and an injection frequency of 0.64 Hz showed that the injection process is reproducible (RSD of 1.77%, n = 15). Concentration change experiments using fluorescein as the analyte showed that the device could achieve a lag time as small as 14 s. Finally, to demonstrate the potential uses of this device, the microchip was coupled to a microdialysis probe to monitor a concentration change and sample a fluorescein dye mixture.

  1. Experimental verification of air flow rate measurement for representative isokinetic air sampling in ventilation stacks

    International Nuclear Information System (INIS)

    Okruhlica, P.; Mrtvy, M.; Kopecky, Z.

    2009-01-01

    Nuclear facilities are obliged to monitor the influence of their discharges on the environment. The main monitored fractions in an NPP's ventilation stacks are usually noble gases, particulates and iodine. These fractions are monitored in air sampled from the ventilation stack by means of a sampling rosette and bypass, followed by on-line measuring monitors and balance sampling devices with laboratory evaluation. Correct air flow rate measurement and a representative isokinetic air sampling system are essential for a physically correct and metrologically accurate evaluation of the discharge's influence on the environment. Pairs of measuring sensors (anemometer, pressure gauge, thermometer and humidity meter) are placed symmetrically in the horizontal cross-section of the stack, at positions based on the measured air flow velocity distribution. Analogously, the diameters of the sampling rosette nozzles and their placement in the middle of 6-7 annuli are calculated to ensure representative isokinetic sampling. (authors)
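
    The report does not spell out the sizing formulas, but the isokinetic condition it relies on (equal air velocity in the nozzle inlet and in the stack) and the usual equal-area placement of nozzles and sensors can be sketched as below. The numerical example is arbitrary, and the equal-area rule is the generic traverse formula, not necessarily the exact procedure used by the authors:

      import math

      def nozzle_diameter(sample_flow_lpm, stack_velocity):
          """Nozzle inner diameter (m) for isokinetic sampling: the nozzle inlet velocity
          must equal the local stack velocity, so A = q / v and d = sqrt(4 q / (pi v)).

          sample_flow_lpm: flow drawn through one nozzle (L/min); stack_velocity: m/s.
          """
          q = sample_flow_lpm / 1000.0 / 60.0        # L/min -> m^3/s
          return math.sqrt(4.0 * q / (math.pi * stack_velocity))

      def annulus_mid_radii(stack_radius, n_annuli):
          """Split a circular stack into n equal-area annuli and return the area-bisecting
          radius of each annulus, a common placement rule for nozzles and velocity sensors."""
          return [stack_radius * math.sqrt((2 * i - 1) / (2.0 * n_annuli))
                  for i in range(1, n_annuli + 1)]

      # Example: 2 L/min per nozzle in a 10 m/s stack gives d of roughly 2.1 mm;
      # a 1.5 m radius stack split into 7 annuli gives 7 radial sampling positions.
      print(nozzle_diameter(2.0, 10.0), annulus_mid_radii(1.5, 7))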

  3. Extension of CFD Codes Application to Two-Phase Flow Safety Problems - Phase 3

    International Nuclear Information System (INIS)

    Bestion, D.; Anglart, H.; Mahaffy, J.; Lucas, D.; Song, C.H.; Scheuerer, M.; Zigh, G.; Andreani, M.; Kasahara, F.; Heitsch, M.; Komen, E.; Moretti, F.; Morii, T.; Muehlbauer, P.; Smith, B.L.; Watanabe, T.

    2014-11-01

    The Writing Group 3 on the extension of CFD to two-phase flow safety problems was formed following recommendations made at the 'Exploratory Meeting of Experts to Define an Action Plan on the Application of Computational Fluid Dynamics (CFD) Codes to Nuclear Reactor Safety Problems' held in Aix-en-Provence, in May 2002. Extension of CFD codes to two-phase flow offers significant potential for the improvement of safety investigations, by giving some access to smaller scale flow processes which are not explicitly described by present tools. Using such tools as part of a safety demonstration may bring a better understanding of physical situations, more confidence in the results, and an estimation of safety margins. Increasing computer performance allows a more extensive use of 3D modelling of two-phase thermal-hydraulics with finer nodalization. However, models are not as mature as in single phase flow and a lot of work still has to be done on the physical modelling and numerical schemes in such two-phase CFD tools. The Writing Group listed and classified the NRS problems where extension of CFD to two-phase flow may bring real benefit, and classified different modelling approaches, in a first report (Bestion et al., 2006). First ideas were reported about the specification and analysis of needs in terms of validation and verification. It was then suggested to focus further activity on a limited number of NRS issues with a high priority and a reasonable chance to be successful in a reasonable period of time. The WG3-step 2 was decided with the following objectives: - selection of a limited number of NRS issues having a high priority and for which two-phase CFD has a reasonable chance to be successful in a reasonable period of time; - identification of the remaining gaps in the existing approaches using two-phase CFD for each selected NRS issue; - review of the existing data base for validation of two-phase CFD application to the selected NRS problems.

  4. Toward a mathematical theory of environmental monitoring: the infrequent sampling problem

    International Nuclear Information System (INIS)

    Pimentel, K.D.

    1975-06-01

    Optimal monitoring of pollutants in diffusive environmental media was studied in the context of the subproblems of the optimal design and management of environmental monitors, subject to bounds on the maximum allowable errors in the estimates of the monitor state or output variables. Concise problem statements were made. Continuous-time finite-dimensional normal mode models for distributed stochastic diffusive pollutant transport were developed. The resultant set of state equations was discretized in time for implementation in the Kalman filter in the problem of optimal state estimation. The main results of this thesis concern the special class of optimal monitoring problems called the infrequent sampling problem. Extensions to systems including pollutant scavenging and systems with emission or radiation boundary conditions were made. (U.S.)
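
    The abstract states that the discretized normal-mode state equations are fed to a Kalman filter for optimal state estimation. For readers unfamiliar with that step, a generic predict/update cycle looks like the sketch below; the matrices stand in for the discretized pollutant-transport model and monitor network and are placeholders, not the thesis's actual system:

      import numpy as np

      def kalman_step(x, P, A, Q, H, R, z):
          """One predict/update cycle of a discrete-time Kalman filter.

          x, P: prior state estimate and covariance; A: state transition matrix;
          Q: process noise covariance; H: observation matrix; R: measurement noise
          covariance; z: measurement vector. Returns the updated estimate and covariance.
          """
          # Predict: propagate the state and covariance through the discretized dynamics.
          x_pred = A @ x
          P_pred = A @ P @ A.T + Q
          # Update: correct with the measurement via the Kalman gain.
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new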

  5. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 2. [sample problem library guide

    Science.gov (United States)

    Jackson, C. E., Jr.

    1977-01-01

    A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.

  6. A local search heuristic for the Multi-Commodity k-splittable Maximum Flow Problem

    DEFF Research Database (Denmark)

    Gamst, Mette

    2014-01-01

    A local search heuristic for solving the problem is proposed. The heuristic is an iterative shortest path procedure on a reduced graph, combined with a local search procedure to modify certain path flows and prioritize the different commodities. The heuristic is tested on benchmark instances from the literature.

  7. Evolutionary Hybrid Particle Swarm Optimization Algorithm for Solving NP-Hard No-Wait Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Laxmi A. Bewoor

    2017-10-01

    Full Text Available The no-wait flow shop is a flow shop in which the scheduling of jobs is continuous and simultaneous through all machines, without waiting between consecutive machines. Scheduling a no-wait flow shop requires finding an appropriate sequence of jobs, which in turn reduces the total processing time. The classical brute-force method of exploring possible schedules to improve the utilization of resources may become trapped in local optima, and the problem can hence be regarded as a typical NP-hard combinatorial optimization problem that requires finding a near-optimal solution with heuristic and metaheuristic techniques. This paper proposes an effective hybrid Particle Swarm Optimization (PSO) metaheuristic algorithm for solving no-wait flow shop scheduling problems with the objective of minimizing the total flow time of jobs. The proposed hybrid Particle Swarm Optimization (PHPSO) algorithm uses the random key representation rule to convert the continuous position values of particles into a discrete job permutation. The proposed algorithm initializes the population efficiently with the Nawaz-Enscore-Ham (NEH) heuristic and uses an evolutionary search guided by the mechanism of PSO, as well as simulated annealing based on a local neighborhood search, to avoid getting stuck in local optima and to provide an appropriate balance of global exploration and local exploitation. Extensive computational experiments are carried out on Taillard's benchmark suite. Computational results and comparisons with existing metaheuristics show that the PHPSO algorithm outperforms the existing methods in terms of search quality and robustness for the problem considered. The improvement in solution quality is confirmed by statistical tests of significance.
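
    The random key representation mentioned in the abstract maps each particle's continuous position vector to a job permutation by ranking its components. A minimal sketch of that decoding step (with an arbitrary example vector, not data from the paper) is:

      def random_key_decode(position):
          """Decode a continuous PSO position vector into a job permutation.

          Jobs are ordered by the rank of their position components (smallest value
          first), so any real-valued vector maps to a valid permutation.
          """
          return sorted(range(len(position)), key=lambda j: position[j])

      # Example: position [0.7, 0.1, 0.9, 0.4] decodes to the job sequence [1, 3, 0, 2].
      print(random_key_decode([0.7, 0.1, 0.9, 0.4]))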

  8. Numerical Study on Several Stabilized Finite Element Methods for the Steady Incompressible Flow Problem with Damping

    Directory of Open Access Journals (Sweden)

    Jilian Wu

    2013-01-01

    Full Text Available We discuss several stabilized finite element methods, namely the penalty, regular, multiscale enrichment, and local Gauss integration methods, for the steady incompressible flow problem with damping, based on the lowest equal-order finite element space pair. We then give numerical comparisons between them in three numerical examples, which show that the local Gauss integration method has good stability, efficiency, and accuracy properties and is, on the whole, better than the others for the steady incompressible flow problem with damping. However, to our surprise, the regular method spends less CPU time and has better accuracy properties when using the Crout solver.

  9. Exact partial solution to the steady-state, compressible fluid flow problems of jet formation and jet penetration

    International Nuclear Information System (INIS)

    Karpp, R.R.

    1980-10-01

    This report treats analytically the problem of the symmetric impact of two compressible fluid streams. The flow is assumed to be steady, plane, inviscid, and subsonic, and the compressible fluid is assumed to be of the Chaplygin (tangent gas) type. In the analysis, the governing equations are first transformed to the hodograph plane where an exact, closed-form solution is obtained by standard techniques. The distributions of fluid properties along the plane of symmetry as well as the shapes of the boundary streamlines are exactly determined by transforming the solution back to the physical plane. The problem of a compressible fluid jet penetrating into an infinite target of similar material is also exactly solved by considering a limiting case of this solution. This new compressible flow solution reduces to the classical result of incompressible flow theory when the sound speed of the fluid is allowed to approach infinity. Several illustrations of the differences between compressible and incompressible flows of the type considered are presented.

  10. A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning

    Directory of Open Access Journals (Sweden)

    Shang Bo-Wen

    2016-01-01

    Full Text Available Continuous prediction is widely used in communities ranging from social applications to business, and machine learning is an important method for this problem. When we use a machine learning method for prediction, we use the data in the training set to fit the model and to estimate the distribution of the data in the test set. But when we use machine learning for continuous prediction, we obtain new data as time goes by and use them to predict future data, and a problem may arise: as the size of the data set increases over time, the distribution changes and much garbage data accumulates in the training set. The garbage data should be removed because it reduces the accuracy of the prediction. The main contribution of this article is using the new data to detect the timeliness of historical data and to remove the garbage data. We build a data flow model that describes how data flow among the test set, training set, validation set and garbage set, and thereby improve the accuracy of prediction. As the data set changes, the best machine learning model changes as well. We design a hybrid voting algorithm that fits the data set better: it uses seven machine learning models to predict the same problem and uses the validation set to put different weights on the models, giving better models more weight. Experimental results show that, when the distribution of the data set changes over time, our data flow model can remove most of the garbage data and obtain a better result than the traditional method that adds all the data to the training set; our hybrid voting algorithm also achieves a better prediction result than the average accuracy of the individual models.
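
    The hybrid voting step described above (weighting each model by how well it does on the validation set) can be illustrated with a short sketch. The models are assumed to be fitted objects exposing a scikit-learn-style predict method, and the weighting rule (validation accuracy normalized to sum to one) is one reasonable choice, not necessarily the exact rule from the paper:

      import numpy as np

      def validation_weighted_vote(models, X_val, y_val, X_new):
          """Weight each classifier by its validation accuracy and take a weighted vote."""
          accuracies = np.array([np.mean(m.predict(X_val) == y_val) for m in models])
          weights = accuracies / accuracies.sum()          # better models get more weight
          classes = np.unique(y_val)
          scores = np.zeros((len(X_new), len(classes)))
          for w, m in zip(weights, models):
              preds = m.predict(X_new)
              for ci, c in enumerate(classes):
                  scores[:, ci] += w * (preds == c)        # add this model's weighted vote
          return classes[np.argmax(scores, axis=1)]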

  11. Experimental procedure for the determination of counting efficiency and sampling flow rate of a grab-sampling working level meter

    International Nuclear Information System (INIS)

    Grenier, M.; Bigu, J.

    1982-07-01

    The calibration procedures used for a working level meter (WLM) of the grab-sampling type are presented in detail. The WLM tested is a Pylon WL-1000C working level meter and it was calibrated for radon/thoron daughter counting efficiency (E), for sampling pump flow rate (Q) and other variables of interest. For the instrument calibrated at the Elliot Lake Laboratory, E was 0.22 ± 0.01 while Q was 4.50 ± 0.01 L/min.

  12. Recent bibliography on analytical and sampling problems of a PWR primary coolant

    International Nuclear Information System (INIS)

    Illy, H.

    1980-07-01

    An extensive bibliography on the problems of analysis and sampling of the primary cooling water of PWRs is presented. The aim was to collect the analytical methods for dissolved gases. Sampling and preparation are also taken into account. Literature from the last 8-10 years is included. The bibliography is arranged in alphabetical order by topic. The most important topics are as follows: boric acid, gas analysis, hydrogen isotopes, iodine, noble gases, radiation monitoring, sampling and preparation, water chemistry. (R.J.)

  13. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.; Wheeler, M. F.; Hoteit, Ibrahim

    2013-01-01

    Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling.

  14. Virtual sampling in variational processing of Monte Carlo simulation in a deep neutron penetration problem

    International Nuclear Information System (INIS)

    Allagi, Mabruk O.; Lewins, Jeffery D.

    1999-01-01

    In a further study of virtually processed Monte Carlo estimates in neutron transport, a shielding problem has been studied. The use of virtual sampling to estimate the importance function at a certain point in the phase space depends on the presence of neutrons from the real source at that point. But in deep penetration problems, not many neutrons will reach regions far away from the source. In order to overcome this problem, two suggestions are considered: (1) virtual sampling is used as far as the real neutrons can reach, then fictitious sampling is introduced for the remaining regions, distributed in all the regions, or (2) only one fictitious source is placed where the real neutrons almost terminate and then virtual sampling is used in the same way as for the real source. Variational processing is again found to improve the Monte Carlo estimates, being best when using one fictitious source in the far regions with virtual sampling (option 2). When fictitious sources are used to estimate the importances in regions far away from the source, some optimization has to be performed for the proportion of fictitious to real sources, weighted against accuracy and computational costs. It has been found in this study that the optimum number of cells to be treated by fictitious sampling is problem dependent, but as a rule of thumb, fictitious sampling should be employed in regions where the number of neutrons from the real source falls below a specified limit for good statistics.

  15. A stochastic programming approach to manufacturing flow control

    OpenAIRE

    Haurie, Alain; Moresino, Francesco

    2012-01-01

    This paper proposes and tests an approximation of the solution of a class of piecewise deterministic control problems, typically used in the modeling of manufacturing flow processes. This approximation uses a stochastic programming approach on a suitably discretized and sampled system. The method proceeds through two stages: (i) the Hamilton-Jacobi-Bellman (HJB) dynamic programming equations for the finite horizon continuous time stochastic control problem are discretized over a set of sample...

  16. Dependence of fracture mechanical and fluid flow properties on fracture roughness and sample size

    International Nuclear Information System (INIS)

    Tsang, Y.W.; Witherspoon, P.A.

    1983-01-01

    A parameter study has been carried out to investigate the interdependence of mechanical and fluid flow properties of fractures with fracture roughness and sample size. A rough fracture can be defined mathematically in terms of its aperture density distribution. Correlations were found between the shapes of the aperture density distribution function and specific features of the stress-strain behavior and fluid flow characteristics. Well-matched fractures had peaked aperture distributions that resulted in very nonlinear stress-strain behavior. With an increasing degree of mismatching between the top and bottom of a fracture, the aperture density distribution broadened and the nonlinearity of the stress-strain behavior became less accentuated. The different aperture density distributions also gave rise to qualitatively different fluid flow behavior. Findings from this investigation make it possible to estimate the stress-strain and fluid flow behavior when the roughness characteristics of the fracture are known and, conversely, to estimate the fracture roughness from an examination of the hydraulic and mechanical data. Results from this study showed that both the mechanical and hydraulic properties of the fracture are controlled by the large-scale roughness of the joint surface. This suggests that when the stress-flow behavior of a fracture is being investigated, the size of the rock sample should be larger than the typical wavelength of the roughness undulations.

  17. Applications of high-resolution spatial discretization scheme and Jacobian-free Newton–Krylov method in two-phase flow problems

    International Nuclear Information System (INIS)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2015-01-01

    Highlights: • Using a high-resolution spatial scheme in solving two-phase flow problems. • Fully implicit time integration scheme. • Jacobian-free Newton–Krylov method. • Analytical solution for the two-phase water faucet problem. - Abstract: The majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many nuclear thermal–hydraulics applications, it is desirable to use higher-order numerical schemes to reduce numerical errors. High-resolution spatial discretization schemes provide high order spatial accuracy in smooth regions and capture sharp spatial discontinuities without nonphysical spatial oscillations. In this work, we adapted an existing high-resolution spatial discretization scheme on staggered grids to two-phase flow applications. Fully implicit time integration schemes were also implemented to reduce numerical errors from operator-splitting types of time integration schemes. The resulting nonlinear system has been successfully solved using the Jacobian-free Newton–Krylov (JFNK) method. The high-resolution spatial discretization and high-order fully implicit time integration numerical schemes were tested and numerically verified for several two-phase test problems, including a two-phase advection problem, a two-phase advection with phase appearance/disappearance problem, and the water faucet problem. Numerical results clearly demonstrated the advantages of using such high-resolution spatial and high-order temporal numerical schemes to significantly reduce numerical diffusion and therefore improve accuracy. Our study also demonstrated that the JFNK method is stable and robust in solving two-phase flow problems, even when phase appearance/disappearance exists.
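
    The key ingredient of the JFNK method named in the abstract is that the Newton linear system is solved with a Krylov method whose matrix-vector products are approximated by finite differences of the residual, so the Jacobian is never assembled. A generic sketch under that assumption is given below; the residual function, tolerances and perturbation size are placeholders, not the two-phase flow equations or settings of the paper:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def jfnk_solve(residual, u0, newton_tol=1e-8, max_newton=25, fd_eps=1e-7):
          """Solve residual(u) = 0 with a Jacobian-free Newton-Krylov iteration."""
          u = np.asarray(u0, dtype=float).copy()
          for _ in range(max_newton):
              F = residual(u)
              if np.linalg.norm(F) < newton_tol:
                  break
              # Jacobian-vector product by finite differences: J v ~ (F(u + eps*v) - F(u)) / eps
              def jv(v):
                  return (residual(u + fd_eps * v) - F) / fd_eps
              J = LinearOperator((u.size, u.size), matvec=jv)
              du, info = gmres(J, -F)   # inner Krylov solve; no Jacobian matrix is formed
              u = u + du
          return u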

  18. Procrastination, Flow, and Academic Performance in Real Time Using the Experience Sampling Method.

    Science.gov (United States)

    Sumaya, Isabel C; Darling, Emily

    2018-01-01

    The authors' aim was first to provide an alternative methodology for the assessment of procrastination and flow that would not rely on retrospective or prospective self-reports. Using real-time assessment of both procrastination and flow, the authors investigated how these factors impact academic performance by using the Experience Sampling Method. They assessed flow by measuring student self-reported skill versus challenge, and procrastination by measuring the days to completion of an assignment. Procrastination and flow were measured for six days before a writing assignment due date while students (n = 14) were enrolled in a research methods course. Regardless of status of flow, both the nonflow and flow groups showed high levels of procrastination. Students who experienced flow as they worked on their paper, in real time, earned significantly higher grades (M = 3.05 ± 0.30: an average grade of B) as compared with the nonflow group (M = 1.16 ± 0.33: an average grade of D; p = .007). Additionally, students experiencing flow were more accurate in predicting their grade (difference scores, flow M = 0.12 ± 0.33 vs. nonflow M = 1.39 ± 0.29; p = .015). Students in the nonflow group were nearly a grade and a half off in their prediction of their grade on the paper. To the authors' knowledge, the study is the first to provide experimental evidence showing differences in academic performance between students experiencing flow and nonflow students.

  19. Problem Gambling in a Sample of Older Adult Casino Gamblers.

    Science.gov (United States)

    van der Maas, Mark; Mann, Robert E; McCready, John; Matheson, Flora I; Turner, Nigel E; Hamilton, Hayley A; Schrans, Tracy; Ialomiteanu, Anca

    2017-01-01

    As older adults continue to make up a greater proportion of the Canadian population, it becomes more important to understand the implications that their leisure activities have for their physical and mental health. Gambling, in particular, is a form of leisure that is becoming more widely available and has important implications for the mental health and financial well-being of older adults. This study examines a large sample (2103) of casino-going Ontarian adults over the age of 55 and identifies those features of their gambling participation that are associated with problem gambling. Logistic regression analysis is used to analyze the data. Focusing on types of gambling participated in and motivations for visiting the casino, this study finds that several forms of gambling and motivations to gamble are associated with greater risk of problem gambling. It also finds that some motivations are associated with lower risk of problem gambling. The findings of this study have implications related to gambling availability within an aging population.

  20. The Granular Blasius Problem: High inertial number granular flows

    Science.gov (United States)

    Tsang, Jonathan; Dalziel, Stuart; Vriend, Nathalie

    2017-11-01

    The classical Blasius problem considers the formation of a boundary layer through the change at x = 0 from a free-slip to a no-slip boundary beneath an otherwise steady uniform flow. Discrete particle model (DPM) simulations of granular gravity currents show that a similar phenomenon exists for a steady flow over a uniformly sloped surface that is smooth upstream (allowing slip) but rough downstream (imposing a no-slip condition). The boundary layer is a region of high shear rate and therefore high inertial number I; its dynamics are governed by the asymptotic behaviour of the granular rheology as I → ∞. The μ(I) rheology asserts that dμ/dI = O(1/I²) as I → ∞, but current experimental evidence is insufficient to confirm this. We show that 'generalised μ(I) rheologies', with different behaviours as I → ∞, all permit the formation of a boundary layer. We give approximate solutions for the velocity profile under each rheology. The change in boundary condition considered here mimics more complex topography in which shear stress increases in the streamwise direction (e.g. a curved slope). Such a system would be of interest in avalanche modelling. EPSRC studentship (Tsang) and Royal Society Dorothy Hodgkin Fellowship (Vriend).

  1. New sample carrier systems for thermogravimetric analysis under forced flow conditions and their influence on microkinetic results.

    Science.gov (United States)

    Seibel, C; Fieback, T M

    2015-09-01

    For thermogravimetric analysis, it has been shown that, depending on the type of sample container, different kinetic results could be obtained despite examining the same reaction under constant conditions. This is due to limiting macrokinetic effects which are strongly dependent on the type of sample carrying system. This prompted the need for sample containers which deliver results minimally limited by diffusive mass transport. Accordingly, two container systems were developed, both characterized by a forced flow stream through a solid, porous bed: one from bottom to top (counter-current flow) and one from top to bottom (co-current flow). Optical test measurements were performed, the results indicating that the progress of the reaction is almost fully independent of the geometrical shape of the sample containers. The Boudouard reaction was investigated with a standard crucible and the newly developed systems; the reaction rates determined differed significantly, up to a factor of 6.2 at 1373 K.

  2. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    Science.gov (United States)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described
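
    Of the techniques surveyed, parallel tempering has the most compact core: replicas at different temperatures occasionally attempt to exchange configurations, with a Metropolis test on the swap. A minimal sketch of that exchange step is below; the energy values, temperature ladder and the within-replica sampler are placeholders not taken from the record:

      import math
      import random

      def attempt_swap(states, energies, temperatures, i):
          """Metropolis exchange between neighbouring replicas i and i+1.

          Accept the swap with probability min(1, exp(delta)), where
          delta = (1/T_i - 1/T_{i+1}) * (E_i - E_{i+1}).
          """
          beta_i, beta_j = 1.0 / temperatures[i], 1.0 / temperatures[i + 1]
          delta = (beta_i - beta_j) * (energies[i] - energies[i + 1])
          if delta >= 0 or random.random() < math.exp(delta):
              states[i], states[i + 1] = states[i + 1], states[i]
              energies[i], energies[i + 1] = energies[i + 1], energies[i]
              return True
          return False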

  3. Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian

    2011-01-01

    The paper presents the application of asymptotic sampling to various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed; this study shows that the distribution of the support points has a considerable influence. Next, the method is applied to different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high dimensional reliability problems in structural dynamics.

  4. Two problems in multiphase biological flows: Blood flow and particulate transport in microvascular network, and pseudopod-driven motility of amoeboid cells

    Science.gov (United States)

    Bagchi, Prosenjit

    2016-11-01

    In this talk, two problems in multiphase biological flows will be discussed. The first is the direct numerical simulation of whole blood and drug particulates in microvascular networks. Blood in microcirculation behaves as a dense suspension of heterogeneous cells. The erythrocytes are extremely deformable, while inactivated platelets and leukocytes are nearly rigid. A significant progress has been made in recent years in modeling blood as a dense cellular suspension. However, many of these studies considered the blood flow in simple geometry, e.g., straight tubes of uniform cross-section. In contrast, the architecture of a microvascular network is very complex with bifurcating, merging and winding vessels, posing a further challenge to numerical modeling. We have developed an immersed-boundary-based method that can consider blood cell flow in physiologically realistic and complex microvascular network. In addition to addressing many physiological issues related to network hemodynamics, this tool can be used to optimize the transport properties of drug particulates for effective organ-specific delivery. Our second problem is pseudopod-driven motility as often observed in metastatic cancer cells and other amoeboid cells. We have developed a multiscale hydrodynamic model to simulate such motility. We study the effect of cell stiffness on motility as the former has been considered as a biomarker for metastatic potential. Funded by the National Science Foundation.

  5. New Mathematical Model and Algorithm for Economic Lot Scheduling Problem in Flexible Flow Shop

    Directory of Open Access Journals (Sweden)

    H. Zohali

    2018-03-01

    Full Text Available This paper addresses the lot sizing and scheduling problem for a number of products in a flexible flow shop with identical parallel machines. The production stages are in series, separated by finite intermediate buffers. The objective is to minimize the sum of setup and inventory holding costs per unit of time. The available mathematical model of this problem in the literature suffers from huge complexity in terms of size and computation. In this paper, a new mixed integer linear program is developed to deal with the huge dimensions of the problem. Also, a new metaheuristic algorithm is developed for the problem. The results of the numerical experiments demonstrate a significant advantage of the proposed model and algorithm compared with the available models and algorithms in the literature.

  6. Flow cytometry and integrated imaging

    Directory of Open Access Journals (Sweden)

    V. Kachel

    2000-06-01

    Full Text Available It is a serious problem to relate the results of a flow cytometric analysis of a marine sample to different species. Images of particles selectively triggered by the flow cytometric analysis and picked out from the flowing stream give valuable additional information on the analyzed organisms. The technical principles and problems of triggered imaging in flow are discussed, as well as the positioning of the particles in the plane of focus, freezing the motion of the quickly moving objects and what kinds of light sources are suitable for pulsed illumination. The images have to be stored either on film or electronically. The features of camera targets, the memory requirements for storing the image data and the conditions for the triggering device are shown. A brief explanation of the features of three realized flow cytometric imaging (FCI) systems is given: the Macro Flow Planktometer built within the EUROMAR MAROPT project, the Imaging Module of the European Plankton Analysis System, supported by the MAST II EurOPA project, and the most recently developed FLUVO VI universal flow cytometer including HBO 100 and laser excitation for fluorescence and scatter, Coulter sizing as well as bright field and phase contrast FCI.

  7. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    Science.gov (United States)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem (i.e., the posterior probability density) is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using 'snapshots' from the parameter reduced posterior, and the discrete

  8. Air sampling to assess potential generation of aerosolized viable bacteria during flow cytometric analysis of unfixed bacterial suspensions

    Science.gov (United States)

    Carson, Christine F; Inglis, Timothy JJ

    2018-01-01

    This study investigated aerosolized viable bacteria in a university research laboratory during operation of an acoustic-assisted flow cytometer for antimicrobial susceptibility testing by sampling room air before, during and after flow cytometer use. The aim was to assess the risk associated with use of an acoustic-assisted flow cytometer analyzing unfixed bacterial suspensions. Air sampling in a nearby clinical laboratory was conducted during the same period to provide context for the existing background of microorganisms that would be detected in the air. The three species of bacteria undergoing analysis by flow cytometer in the research laboratory were Klebsiella pneumoniae, Burkholderia thailandensis and Streptococcus pneumoniae. None of these was detected from multiple 1000 L air samples acquired in the research laboratory environment. The main cultured bacteria in both locations were skin commensal and environmental bacteria, presumed to have been disturbed or dispersed in laboratory air by personnel movements during routine laboratory activities. The concentrations of bacteria detected in research laboratory air samples were reduced after interventional cleaning measures were introduced and were lower than those in the diagnostic clinical microbiology laboratory. We conclude that our flow cytometric analyses of unfixed suspensions of K. pneumoniae, B. thailandensis and S. pneumoniae do not pose a risk to cytometer operators or other personnel in the laboratory but caution against extrapolation of our results to other bacteria and/or different flow cytometric experimental procedures. PMID:29608197

  9. Improving Creative Problem-Solving in a Sample of Third Culture Kids

    Science.gov (United States)

    Lee, Young Ju; Bain, Sherry K.; McCallum, R. Steve

    2007-01-01

    We investigated the effects of divergent thinking training (with explicit instruction) on problem-solving tasks in a sample of Third Culture Kids (Useem and Downie, 1976). We were specifically interested in whether the children's originality and fluency in responding increased following instruction, not only on classroom-based worksheets and the…

  10. A multi-objective optimization problem for multi-state series-parallel systems: A two-stage flow-shop manufacturing system

    International Nuclear Information System (INIS)

    Azadeh, A.; Maleki Shoja, B.; Ghanei, S.; Sheikhalishahi, M.

    2015-01-01

    This research investigates a redundancy-scheduling optimization problem for a multi-state series parallel system. The system is a flow shop manufacturing system with multi-state machines. Each manufacturing machine may have different performance rates including perfect performance, decreased performance and complete failure. Moreover, warm standby redundancy is considered for the redundancy allocation problem. Three objectives are considered for the problem: (1) minimizing system purchasing cost, (2) minimizing makespan, and (3) maximizing system reliability. Universal generating function is employed to evaluate system performance and overall reliability of the system. Since the problem is in the NP-hard class of combinatorial problems, genetic algorithm (GA) is used to find optimal/near optimal solutions. Different test problems are generated to evaluate the effectiveness and efficiency of proposed approach and compared to simulated annealing optimization method. The results show the proposed approach is capable of finding optimal/near optimal solution within a very reasonable time. - Highlights: • A redundancy-scheduling optimization problem for a multi-state series parallel system. • A flow shop with multi-state machines and warm standby redundancy. • Objectives are to optimize system purchasing cost, makespan and reliability. • Different test problems are generated and evaluated by a unique genetic algorithm. • It locates optimal/near optimal solution within a very reasonable time

  11. Possibilities of mathematical models in solving flow problems in environmental protection and water architecture

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    The booklet presents the full text of 13 contributions to a colloquium held at Karlsruhe in September 1979. The main topics of the papers are the evaluation of mathematical models to solve flow problems in tidal waters, seas, rivers, groundwater and in the Earth's atmosphere. See further hints under relevant topics.

  12. Symptoms and problems in a nationally representative sample of advanced cancer patients

    DEFF Research Database (Denmark)

    Johnsen, Anna Thit; Petersen, Morten Aagaard; Pedersen, Lise

    2009-01-01

    Little is known about the need for palliative care among advanced cancer patients who are not in specialist palliative care. The purpose was to identify the prevalence and predictors of symptoms and problems in a nationally representative sample of Danish advanced cancer patients. In total, 977 (60%) patients participated. The most frequent symptoms/problems were fatigue (57%; severe 22%), followed by reduced role function, insomnia and pain. Age, cancer stage, primary tumour, type of department, marital status and whether the patient had recently been hospitalized or not were associated with several symptoms and problems. This is probably the first nationally representative study of its kind. It shows that advanced cancer patients in Denmark have symptoms and problems that deserve attention and that some patient groups are especially at risk.

  13. Some applications of the moving finite element method to fluid flow and related problems

    International Nuclear Information System (INIS)

    Berry, R.A.; Williamson, R.L.

    1983-01-01

    The Moving Finite Element (MFE) method is applied to one-dimensional, nonlinear wave type partial differential equations which are characteristic of fluid dynamics and related flow phenomena. These equation systems tend to be difficult to solve because their transient solutions exhibit a spatial stiffness property, i.e., they represent physical phenomena of widely disparate length scales which must be resolved simultaneously. With the MFE method the node points automatically move (in theory) to optimal locations, giving a much better approximation than can be obtained with fixed mesh methods (with a reasonable number of nodes) and with significantly reduced artificial viscosity or diffusion content. Three applications are considered. In order of increasing complexity they are: (1) a thermal quench problem, (2) an underwater explosion problem, and (3) a gas dynamics shock tube problem. The results are briefly shown.

  14. Comparison of AI techniques to solve combined economic emission dispatch problem with line flow constraints

    Energy Technology Data Exchange (ETDEWEB)

    Jacob Raglend, I. [School of Electrical Sciences, Noorul Islam University, Kumaracoil 629 180 (India); Veeravalli, Sowjanya; Sailaja, Kasanur; Sudheera, B. [School of Electrical Sciences, Vellore Institute of Technology, Vellore 632 004 (India); Kothari, D.P. [FNAE, FNASC, SMIEEE, Vellore Institute of Technology University, Vellore 632 014 (India)

    2010-07-15

    A comparative study has been made of the solutions obtained for the combined economic emission dispatch (CEED) problem with line flow constraints using different intelligent techniques for the regulated power system, to ensure a practical, economical and secure generation schedule. The objective of the paper is to minimize the total production cost of the power generation. Economic load dispatch (ELD) and economic emission dispatch (EED) have been applied to obtain the optimal fuel cost of generating units. Combined economic emission dispatch (CEED) is obtained by considering both the economic and emission objectives. This bi-objective CEED problem is converted into a single objective function using the price penalty factor approach. In this paper, intelligent techniques such as the genetic algorithm (GA), evolutionary programming (EP), particle swarm optimization (PSO) and differential evolution (DE) are applied to obtain CEED solutions for the IEEE 30-bus system and a 15-unit system. The proposed algorithm introduces an efficient CEED approach that obtains the minimum operating cost while satisfying unit, emission and network constraints. The proposed algorithm has been tested on two sample systems, viz. the IEEE 30-bus system and a 15-unit system. The results obtained by the various artificial intelligence techniques are compared with respect to solution time, total production cost and convergence criteria. The solutions obtained are quite encouraging and useful in the economic emission environment. The algorithm and simulation are carried out using Matlab software. (author)
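
    The price penalty factor approach mentioned above folds the emission objective into the fuel-cost objective so that a single-objective optimizer can be applied. A minimal sketch of that combination step follows; the quadratic cost and emission coefficients are illustrative, and taking the penalty factor as the ratio of fuel cost to emission at a unit's maximum output is one common definition, not necessarily the exact one used in the paper:

      def combined_objective(P, cost_coeffs, emis_coeffs, Pmax):
          """Combine fuel cost and emission into one objective via price penalty factors.

          P: list of generator outputs (MW); cost_coeffs/emis_coeffs: (a, b, c) tuples per
          unit for a + b*P + c*P^2; Pmax: list of maximum outputs used for the factors.
          """
          def quad(coeffs, p):
              a, b, c = coeffs
              return a + b * p + c * p * p

          total = 0.0
          for p, fc, ec, pmax in zip(P, cost_coeffs, emis_coeffs, Pmax):
              h = quad(fc, pmax) / quad(ec, pmax)   # price penalty factor at maximum output
              total += quad(fc, p) + h * quad(ec, p)
          return total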

  15. Comparison of AI techniques to solve combined economic emission dispatch problem with line flow constraints

    International Nuclear Information System (INIS)

    Jacob Raglend, I.; Veeravalli, Sowjanya; Sailaja, Kasanur; Sudheera, B.; Kothari, D.P.

    2010-01-01

    A comparative study has been made of the solutions to the combined economic emission dispatch (CEED) problem with line flow constraints, obtained using different intelligent techniques for the regulated power system, to ensure a practical, economical and secure generation schedule. The objective of the paper is to minimize the total production cost of the power generation. Economic load dispatch (ELD) and economic emission dispatch (EED) have been applied to obtain the optimal fuel cost of generating units. Combined economic emission dispatch (CEED) is obtained by considering both the economic and emission objectives. This bi-objective CEED problem is converted into a single objective function using the price penalty factor approach. In this paper, intelligent techniques such as the genetic algorithm (GA), evolutionary programming (EP), particle swarm optimization (PSO) and differential evolution (DE) are applied to obtain CEED solutions for the IEEE 30-bus system and a 15-unit system. This proposed algorithm introduces an efficient CEED approach that obtains the minimum operating cost while satisfying unit, emission and network constraints. The proposed algorithm has been tested on two sample systems, viz. the IEEE 30-bus system and a 15-unit system. The results obtained by the various artificial intelligence techniques are compared with respect to solution time, total production cost and convergence criteria. The solutions obtained are quite encouraging and useful in the economic emission environment. The algorithm and simulation are carried out using Matlab software. (author)

  16. Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus

    2012-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample solutions to non-linear inverse problems. In principle, these methods allow incorporation of prior information of arbitrary complexity. If an analytical closed form description of the prior...... is available, which is the case when the prior can be described by a multidimensional Gaussian distribution, such prior information can easily be considered. In reality, prior information is often more complex than can be described by the Gaussian model, and no closed form expression of the prior can be given....... We propose an algorithm, called sequential Gibbs sampling, allowing the Metropolis algorithm to efficiently incorporate complex priors into the solution of an inverse problem, also for the case where no closed form description of the prior exists. First, we lay out the theoretical background...
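
    For orientation, a bare-bones Metropolis sampler for a nonlinear inverse problem is sketched below; the forward model, the data, the noise level and the Gaussian stand-in prior are all illustrative assumptions, and the sequential Gibbs resimulation step that the paper adds for priors without a closed form is not reproduced.

    ```python
    # Generic Metropolis sketch for a nonlinear inverse problem (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(m):                       # hypothetical nonlinear forward model
        return np.array([m[0]**2 + m[1], np.sin(m[0]) * m[1]])

    d_obs = np.array([1.2, 0.4])          # hypothetical observed data
    sigma = 0.1                           # assumed data noise standard deviation

    def log_likelihood(m):
        r = forward(m) - d_obs
        return -0.5 * np.sum(r**2) / sigma**2

    def log_prior(m):                     # stand-in Gaussian prior; the paper targets
        return -0.5 * np.sum(m**2)        # priors with no closed-form expression

    m = np.zeros(2)
    samples = []
    for _ in range(5000):
        m_prop = m + 0.2 * rng.standard_normal(2)         # random-walk proposal
        log_alpha = (log_likelihood(m_prop) + log_prior(m_prop)
                     - log_likelihood(m) - log_prior(m))
        if np.log(rng.random()) < log_alpha:              # Metropolis acceptance
            m = m_prop
        samples.append(m.copy())

    print(np.mean(samples, axis=0))       # posterior mean estimate
    ```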

  17. MHD and heat transfer benchmark problems for liquid metal flow in rectangular ducts

    International Nuclear Information System (INIS)

    Sidorenkov, S.I.; Hua, T.Q.; Araseki, H.

    1994-01-01

    Liquid metal cooling systems of a self-cooled blanket in a tokamak reactor will likely include channels of rectangular cross section where liquid metal is circulated in the presence of strong magnetic fields. MHD pressure drop, velocity distribution and heat transfer characteristics are important issues in the engineering design considerations. Computer codes for the reliable solution of three-dimensional MHD flow problems are needed for fusion relevant conditions. Argonne National Laboratory and The Efremov Institute have jointly defined several benchmark problems for code validation. The problems, described in this paper, are based on two series of rectangular duct experiments conducted at ANL; one of the series is a joint ANL/Efremov experiment. The geometries consist of variation of aspect ratio and wall thickness (thus wall conductance ratio). The transverse magnetic fields are uniform and nonuniform in the axial direction

  18. Stable Galerkin versus equal-order Galerkin least-squares elements for the Stokes flow problem

    International Nuclear Information System (INIS)

    Franca, L.P.; Frey, S.L.; Sampaio, R.

    1989-11-01

    Numerical experiments are performed for the Stokes flow problem employing a stable Galerkin method and a Galerkin/least-squares method with equal-order elements. Error estimates for the methods tested herein are reviewed. The numerical results presented attest to the good stability properties of all methods examined herein. (A.C.A.S.) [pt]

  19. Rapid Salmonella detection in experimentally inoculated equine faecal and veterinary hospital environmental samples using commercially available lateral flow immunoassays.

    Science.gov (United States)

    Burgess, B A; Noyes, N R; Bolte, D S; Hyatt, D R; van Metre, D C; Morley, P S

    2015-01-01

    Salmonella enterica is the most commonly reported cause of outbreaks of nosocomial infections in large animal veterinary teaching hospitals and the closure of equine hospitals. Rapid detection may facilitate effective control practices in equine populations. Shipping and laboratory testing typically require ≥48 h to obtain results. Lateral flow immunoassays developed for use in food-safety microbiology provide an alternative that has not been evaluated for use with faeces or environmental samples. We aimed to identify enrichment methods that would allow commercially available rapid Salmonella detection systems (lateral flow immunoassays) to be used in clinical practice with equine faecal and environmental samples, providing test results in 18-24 h. In vitro experiment. Equine faecal and environmental samples were inoculated with known quantities of S. enterica serotype Typhimurium and cultured using 2 different enrichment techniques for faeces and 4 enrichment techniques for environmental samples. Samples were tested blindly using 2 different lateral flow immunoassays and plated on agar media for confirmatory testing. In general, commercial lateral flow immunoassays resulted in fewer false-negative test results with enrichment of 1 g faecal samples in tetrathionate for 18 h, while all environmental sample enrichment techniques resulted in similar detection rates. The limit of detection from spiked samples, ∼4 colony-forming units/g, was similar for all methods evaluated. The lateral flow immunoassays evaluated could reliably detect S. enterica within 18 h, indicating that they may be useful for rapid point-of-care testing in equine practice applications. Additional evaluation is needed using samples from naturally infected cases and the environment to gain an accurate estimate of test sensitivity and specificity and to substantiate further the true value of these tests in clinical practice. © 2014 EVJ Ltd.

  20. Asymptotic Analysis of SPTA-Based Algorithms for No-Wait Flow Shop Scheduling Problem with Release Dates

    Directory of Open Access Journals (Sweden)

    Tao Ren

    2014-01-01

    Full Text Available We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.
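
    To make the objective concrete, the sketch below evaluates the total completion time of a job permutation in a no-wait flow shop with release dates; the processing times, release dates and the simple sort by total work (a stand-in for the SPTA rules, which the abstract does not detail) are illustrative assumptions.

    ```python
    # Evaluate an SPT-style permutation for a no-wait flow shop with release dates.
    import numpy as np

    p = np.array([[3, 2, 4],   # processing times: rows = jobs, cols = machines
                  [1, 5, 2],
                  [4, 1, 3]])
    r = np.array([0, 2, 1])    # release dates (illustrative)

    def total_completion_time(order, p, r):
        n, m = p.shape
        # prefix[j, k] = total work of job j on machines before machine k
        prefix = np.hstack([np.zeros((n, 1)), np.cumsum(p, axis=1)])
        starts = np.empty(n)
        prev = None
        for j in order:
            s = r[j]
            if prev is not None:
                # no-wait condition: job j may not catch up with job prev on any machine
                delay = max(prefix[prev, k + 1] - prefix[j, k] for k in range(m))
                s = max(s, starts[prev] + delay)
            starts[j] = s
            prev = j
        completions = starts + p.sum(axis=1)   # no waiting between machines
        return completions.sum()

    spt_order = np.argsort(p.sum(axis=1))      # shortest total processing time first
    print(total_completion_time(spt_order, p, r))
    ```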

  1. Asymptotic analysis of SPTA-based algorithms for no-wait flow shop scheduling problem with release dates.

    Science.gov (United States)

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

    We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.

  2. Advanced Curation: Solving Current and Future Sample Return Problems

    Science.gov (United States)

    Fries, M.; Calaway, M.; Evans, C.; McCubbin, F.

    2015-01-01

    Advanced Curation is a wide-ranging and comprehensive research and development effort at NASA Johnson Space Center that identifies and remediates sample related issues. For current collections, Advanced Curation investigates new cleaning, verification, and analytical techniques to assess their suitability for improving curation processes. Specific needs are also assessed for future sample return missions. For each need, a written plan is drawn up to achieve the requirement. The plan draws upon current Curation practices, input from Curators, the analytical expertise of the Astromaterials Research and Exploration Science (ARES) team, and suitable standards maintained by ISO, IEST, NIST and other institutions. Additionally, new technologies are adopted on the basis of need and availability. Implementation plans are tested using customized trial programs with statistically robust courses of measurement, and are iterated if necessary until an implementable protocol is established. Upcoming and potential NASA missions such as OSIRIS-REx, the Asteroid Retrieval Mission (ARM), sample return missions in the New Frontiers program, and Mars sample return (MSR) all feature new difficulties and specialized sample handling requirements. The Mars 2020 mission in particular poses a suite of challenges since the mission will cache martian samples for possible return to Earth. In anticipation of future MSR, the following problems are among those under investigation: What is the most efficient means to achieve the less than 1.0 ng/sq cm total organic carbon (TOC) cleanliness required for all sample handling hardware? How do we maintain and verify cleanliness at this level? The Mars 2020 Organic Contamination Panel (OCP) predicts that organic carbon, if present, will be present at the "one to tens" of ppb level in martian near-surface samples. The same samples will likely contain wt% perchlorate salts, or approximately 1,000,000x as much perchlorate oxidizer as organic carbon

  3. Use of a genetic algorithm to solve two-fluid flow problems on an NCUBE multiprocessor computer

    International Nuclear Information System (INIS)

    Pryor, R.J.; Cline, D.D.

    1992-01-01

    A method of solving the two-phase fluid flow equations using a genetic algorithm on an NCUBE multiprocessor computer is presented. The topics discussed are the two-phase flow equations, the genetic representation of the unknowns, the fitness function, the genetic operators, and the implementation of the algorithm on the NCUBE computer. The efficiency of the implementation is investigated using a pipe blowdown problem. Effects of varying the genetic parameters and the number of processors are presented

  4. Use of a genetic algorithm to solve two-fluid flow problems on an NCUBE multiprocessor computer

    International Nuclear Information System (INIS)

    Pryor, R.J.; Cline, D.D.

    1993-01-01

    A method of solving the two-phase fluid flow equations using a genetic algorithm on an NCUBE multiprocessor computer is presented. The topics discussed are the two-phase flow equations, the genetic representation of the unknowns, the fitness function, the genetic operators, and the implementation of the algorithm on the NCUBE computer. The efficiency of the implementation is investigated using a pipe blowdown problem. Effects of varying the genetic parameters and the number of processors are presented. (orig.)

  5. Simultaneous Sampling of Flow and Odorants by Crustaceans can Aid Searches within a Turbulent Plume

    Directory of Open Access Journals (Sweden)

    Swapnil Pravin

    2013-12-01

    Full Text Available Crustaceans such as crabs, lobsters and crayfish use dispersing odorant molecules to determine the location of predators, prey, potential mates and habitat. Odorant molecules diffuse in turbulent flows and are sensed by the olfactory organs of these animals, often using a flicking motion of their antennules. These antennules contain both chemosensory and mechanosensory sensilla, which enable them to detect both flow and odorants during a flick. To determine how simultaneous flow and odorant sampling can aid in search behavior, a 3-dimensional numerical model for the near-bed flow environment was created. A stream of odorant concentration was released into the flow creating a turbulent plume, and both temporally and spatially fluctuating velocity and odorant concentration were quantified. The plume characteristics show close resemblance to experimental measurements within a large laboratory flume. Results show that mean odorant concentration and its intermittency, computed as dc/dt, increase towards the plume source, but the temporal and spatial rate of this increase is slow and suggests that long measurement times would be necessary to be useful for chemosensory guidance. Odorant fluxes measured transverse to the mean flow direction, quantified as the product of the instantaneous fluctuation in concentration and velocity, v'c', do show statistically distinct magnitude and directional information on either side of a plume centerline over integration times of <0.5 s. Aquatic animals typically have neural responses to odorant and velocity fields at rates between 50 and 500 ms, suggesting this simultaneous sampling of both flow and concentration in a turbulent plume can aid in source tracking on timescales relevant to aquatic animals.

  6. Automatic sampling technology in wide belt conveyor with big volume of coal flow

    Energy Technology Data Exchange (ETDEWEB)

    Liu, J. [China Coal Research Institute, Beijing (China)

    2008-06-15

    The principle and technique of sampling in a wide belt conveyor with high coal flow was studied. The design method of the technology, the key parameters, the collection efficiency, the mechanical unit, power supply and control system and worksite facility were ascertained. 3 refs., 5 figs.

  7. Iterative methods for the detection of Hopf bifurcations in finite element discretisation of incompressible flow problems

    International Nuclear Information System (INIS)

    Cliffe, K.A.; Garratt, T.J.; Spence, A.

    1992-03-01

    This paper is concerned with the problem of computing a small number of eigenvalues of large sparse generalised eigenvalue problems arising from mixed finite element discretisations of time dependent equations modelling viscous incompressible flow. The eigenvalues of importance are those with smallest real part and can be used in a scheme to determine the stability of steady state solutions and to detect Hopf bifurcations. We introduce a modified Cayley transform of the generalised eigenvalue problem which overcomes a drawback of the usual Cayley transform applied to such problems. Standard iterative methods are then applied to the transformed eigenvalue problem to compute approximations to the eigenvalue of smallest real part. Numerical experiments are performed using a model of double diffusive convection. (author)
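
    For context, the sketch below gives a small dense illustration of how a Cayley transform turns a generalised eigenvalue problem into an ordinary one whose dominant eigenvalues correspond to the eigenvalues of smallest real part; the random matrices and shift values are illustrative assumptions, and the paper's modified transform for mixed finite element matrices is not reproduced.

    ```python
    # Cayley transform for A x = lambda B x (illustrative dense example).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    A = rng.standard_normal((n, n))
    B = np.eye(n) + 0.1 * rng.standard_normal((n, n))

    # theta = (lambda - beta) / (lambda - alpha); with alpha < beta, eigenvalues
    # with Re(lambda) < (alpha + beta)/2 map outside the unit circle, so a
    # dominant-eigenvalue iteration on C would find them. Shifts are assumptions.
    alpha, beta = -1.0, 1.0
    C = np.linalg.solve(A - alpha * B, A - beta * B)

    theta = np.linalg.eigvals(C)
    lam = (alpha * theta - beta) / (theta - 1.0)   # invert the transform

    print(lam[np.argmin(lam.real)])                # smallest-real-part eigenvalue
    # direct check against the untransformed problem
    print(min(np.linalg.eigvals(np.linalg.solve(B, A)), key=lambda z: z.real))
    ```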

  8. The association between childhood maltreatment and gambling problems in a community sample of adult men and women.

    Science.gov (United States)

    Hodgins, David C; Schopflocher, Don P; el-Guebaly, Nady; Casey, David M; Smith, Garry J; Williams, Robert J; Wood, Robert T

    2010-09-01

    The association between childhood maltreatment and gambling problems was examined in a community sample of men and women (N = 1,372). As hypothesized, individuals with gambling problems reported greater childhood maltreatment than individuals without gambling problems. Childhood maltreatment predicted severity of gambling problems and frequency of gambling even when other individual and social factors were controlled including symptoms of alcohol and other drug use disorders, family environment, psychological distress, and symptoms of antisocial disorder. In contrast to findings in treatment-seeking samples, women with gambling problems did not report greater maltreatment than men with gambling problems. These results underscore the need for both increased prevention of childhood maltreatment and increased sensitivity towards trauma issues in gambling treatment programs for men and women.

  9. On the modelling of compressible inviscid flow problems using AUSM schemes

    Directory of Open Access Journals (Sweden)

    Hajžman M.

    2007-11-01

    Full Text Available During the last decades, upwind schemes have become a popular method in the field of computational fluid dynamics. Although they are only first order accurate, AUSM (Advection Upstream Splitting Method) schemes proved to be well suited for modelling of compressible flows due to their robustness and ability of capturing shock discontinuities. In this paper, we review the composition of the AUSM flux-vector splitting scheme and its improved version denoted AUSM+, proposed by Liou, for the solution of the Euler equations. Mach number splitting functions operating with values from adjacent cells are used to determine numerical convective fluxes, and pressure splitting is used for the evaluation of numerical pressure fluxes. Both versions of the AUSM scheme are applied for solving some test problems such as the one-dimensional shock tube problem and the three-dimensional GAMM channel. Features of the schemes are discussed in comparison with some explicit central schemes of first-order accuracy (Lax-Friedrichs) and of second-order accuracy (MacCormack).
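
    The Mach number and pressure splitting functions referred to above are standard ingredients of the basic AUSM scheme; a minimal sketch is given below (the AUSM+ refinements are omitted, and the interface states are illustrative numbers).

    ```python
    # Mach number and pressure splittings used in the basic AUSM scheme.
    import numpy as np

    def mach_split(M):
        """Return (M_plus, M_minus) used to build the convective interface flux."""
        if abs(M) >= 1.0:
            return 0.5 * (M + abs(M)), 0.5 * (M - abs(M))
        return 0.25 * (M + 1.0) ** 2, -0.25 * (M - 1.0) ** 2

    def pressure_split(M):
        """Return (P_plus, P_minus) weights used for the pressure flux."""
        if abs(M) >= 1.0:
            return 0.5 * (1.0 + np.sign(M)), 0.5 * (1.0 - np.sign(M))
        return (0.25 * (M + 1.0) ** 2 * (2.0 - M),
                0.25 * (M - 1.0) ** 2 * (2.0 + M))

    # interface Mach number and pressure assembled from left/right cell states
    ML, MR, pL, pR = 0.3, -0.1, 101325.0, 95000.0
    M_half = mach_split(ML)[0] + mach_split(MR)[1]
    p_half = pressure_split(ML)[0] * pL + pressure_split(MR)[1] * pR
    print(M_half, p_half)
    ```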

  10. Simulation of rarefied low pressure RF plasma flow around the sample

    Science.gov (United States)

    Zheltukhin, V. S.; Shemakhin, A. Yu

    2017-01-01

    The paper describes a mathematical model of the flow of radio frequency plasma at low pressure. The hybrid mathematical model includes the Boltzmann equation for the neutral component of the RF plasma, and the continuity and thermal equations for the charged component. Initial and boundary conditions for the corresponding equations are described. The electron temperature in the calculations is 1-4 eV, the atom temperature in the plasma clot is (3-4)·10^3 K and in the plasma jet is (3.2-10)·10^2 K, the degree of ionization is 10^-7-10^-5, and the electron density is 10^15-10^19 m^-3. A software package for calculating the plasma parameters was developed in C++ using the OpenFOAM library. Simulations for the vacuum chamber in the presence of a sample and for the free jet flow were carried out.

  11. The problem of sampling families rather than populations: Relatedness among individuals in samples of juvenile brown trout Salmo trutta L

    DEFF Research Database (Denmark)

    Hansen, Michael Møller; Eg Nielsen, Einar; Mensberg, Karen-Lise Dons

    1997-01-01

    In species exhibiting a nonrandom distribution of closely related individuals, sampling of a few families may lead to biased estimates of allele frequencies in populations. This problem was studied in two brown trout populations, based on analysis of mtDNA and microsatellites. In both samples mt......DNA haplotype frequencies differed significantly between age classes, and in one sample 17 out of 18 individuals less than 1 year of age shared one particular mtDNA haplotype. Estimates of relatedness showed that these individuals most likely represented only three full-sib families. Older trout exhibiting...

  12. Optimasi Penjadwalan Pengerjaan Software Pada Software House Dengan Flow-Shop Problem Menggunakan Artificial Bee Colony

    Directory of Open Access Journals (Sweden)

    Muhammad Fhadli

    2016-12-01

    This research proposes an implementation of software execution scheduling at a software house, modelled as a Flow-Shop Problem (FSP) and solved using the Artificial Bee Colony (ABC) algorithm. In FSP, a schedule is sought that completes a set of jobs/tasks at minimum overall cost. A constraint to note in this research is the uncertain completion time of the jobs. We present a solution in the form of a project execution sequence with its overall completion time at a minimum. Experiments were performed with 3 attempts for each experimental condition, namely an experiment on the iteration parameter and an experiment on the limit parameter. From these experiments, we conclude that the algorithm explained in this paper can reduce project execution time when the total number of iterations and the total colony size are increased. Keywords: optimization, flow-shop problem, artificial bee colony, swarm intelligence, meta-heuristic.
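
    As a concrete illustration of the fitness such an ABC search would evaluate, the sketch below computes the makespan of a permutation flow shop schedule; the processing times and the permutation are illustrative assumptions, and the ABC search loop itself is not reproduced.

    ```python
    # Makespan of a permutation flow shop schedule (illustrative data).
    import numpy as np

    p = np.array([[5, 3, 6],   # rows = jobs, cols = machines
                  [2, 7, 4],
                  [6, 2, 5],
                  [4, 4, 3]])

    def makespan(order, p):
        n_machines = p.shape[1]
        completion = np.zeros(n_machines)   # completion[k]: last finish time on machine k
        for j in order:
            for k in range(n_machines):
                ready = completion[k - 1] if k > 0 else 0.0   # job j done on machine k-1
                completion[k] = max(completion[k], ready) + p[j, k]
        return completion[-1]

    print(makespan([1, 0, 3, 2], p))   # fitness value an ABC colony would minimize
    ```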

  13. Complementarity Constraints on Component-Based Multiphase Flow Problems: Should They Be Implemented Locally or Globally?

    Science.gov (United States)

    Shao, H.; Huang, Y.; Kolditz, O.

    2015-12-01

    Multiphase flow problems are numerically difficult to solve, as they often involve nonlinear phase transition phenomena. A conventional technique is to introduce complementarity constraints, whereby fluid properties such as liquid saturations are confined within a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities. These can then be handled numerically by optimization algorithms. In this work, two different approaches utilizing the complementarity constraints based on the persistent primary variables formulation [4] are implemented and investigated. The first approach, proposed by Marchand et al. [1], uses "local complementarity constraints", i.e. it couples the constraints with the local constitutive equations. The second approach [2], [3], namely the "global complementarity constraints", applies the constraints globally with the mass conservation equation. We will discuss how these two approaches are applied to solve the non-isothermal compositional multiphase flow problem with phase change phenomena. Several benchmarks will be presented for investigating the overall numerical performance of the different approaches. The advantages and disadvantages of the different models will also be summarized. References: [1] E. Marchand, T. Mueller and P. Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model, Computational Geosciences 17(2): 431-442, (2013). [2] A. Lauser, C. Hager, R. Helmig, B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Water Resour., 34, (2011), 957-966. [3] J. Jaffré and A. Sboui. Henry's Law and Gas Phase Disappearance. Transp. Porous Media. 82, (2010), 521-526. [4] A. Bourgeat, M. Jurak and F. Smaï. Two-phase partially miscible flow and transport modeling in
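
    For readers unfamiliar with complementarity constraints, the sketch below shows two standard ways of writing the condition a >= 0, b >= 0, a*b = 0 as a single residual equation; the variables are placeholders (e.g. a gas saturation and a dissolved-gas constraint), and neither reformulation is claimed to be the exact one used in the cited works.

    ```python
    # Two standard reformulations of the complementarity condition a >= 0, b >= 0, a*b = 0.
    import numpy as np

    def ncp_min(a, b):
        """min(a, b) = 0 holds iff a >= 0, b >= 0 and a*b = 0."""
        return np.minimum(a, b)

    def ncp_fischer_burmeister(a, b):
        """sqrt(a^2 + b^2) - a - b = 0 is an equivalent, smoother reformulation."""
        return np.sqrt(a**2 + b**2) - a - b

    # a phase that has disappeared (saturation 0) with a strictly positive constraint
    print(ncp_min(0.0, 0.3), ncp_fischer_burmeister(0.0, 0.3))   # both residuals ~0
    # a state violating complementarity (both strictly positive) gives nonzero residuals
    print(ncp_min(0.2, 0.3), ncp_fischer_burmeister(0.2, 0.3))
    ```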

  14. On a boundary layer problem related to the gas flow in shales

    KAUST Repository

    Barenblatt, G. I.

    2013-01-16

    The development of gas deposits in shales has become a significant energy resource. Despite the already active exploitation of such deposits, a mathematical model for gas flow in shales does not exist. Such a model is crucial for optimizing the technology of gas recovery. In the present article, a boundary layer problem is formulated and investigated with respect to gas recovery from porous low-permeability inclusions in shales, which are the basic source of gas. Milton Van Dyke was a great master in the field of boundary layer problems. Dedicating this work to his memory, we want to express our belief that Van Dyke's profound ideas and fundamental book Perturbation Methods in Fluid Mechanics (Parabolic Press, 1975) will live on, also in fields very far from the subjects for which they were originally invented. © 2013 US Government.

  15. Flow cytometric evaluation of peripheral blood and bone marrow and fine-needle aspirate samples from multiple sites in dogs with multicentric lymphoma.

    Science.gov (United States)

    Joetzke, Alexa E; Eberle, Nina; Nolte, Ingo; Mischke, Reinhard; Simon, Daniela

    2012-06-01

    To determine whether the extent of disease in dogs with lymphoma can be assessed via flow cytometry and to evaluate the suitability of fine-needle aspirates from the liver and spleen of dogs for flow cytometric examination. 44 dogs with multicentric B-cell (n = 35) or T-cell lymphoma (9) and 5 healthy control dogs. Procedures-Peripheral blood and bone marrow samples and fine-needle aspirates of lymph node, liver, and spleen were examined via flow cytometry. Logarithmically transformed T-cell-to-B-cell percentage ratio (log[T:B]) values were calculated. Thresholds defined by use of log(T:B) values of samples from control dogs were used to determine extranodal lymphoma involvement in lymphoma-affected dogs; results were compared with cytologic findings. 12 of 245 (5%) samples (9 liver, 1 spleen, and 2 bone marrow) had insufficient cellularity for flow cytometric evaluation. Mean log(T:B) values of samples from dogs with B-cell lymphoma were significantly lower than those of samples from the same site in dogs with T-cell lymphoma and in control dogs. In dogs with T-cell lymphoma, the log(T:B) of lymph node, bone marrow, and spleen samples was significantly higher than in control dogs. Of 165 samples assessed for extranodal lymphoma involvement, 116 (70%) tested positive via flow cytometric analysis; results agreed with cytologic findings in 133 of 161 (83%) samples evaluated via both methods. Results suggested that flow cytometry may aid in detection of extranodal lymphoma involvement in dogs, but further research is needed. Most fine-needle aspirates of liver and spleen were suitable for flow cytometric evaluation.
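
    A minimal sketch of the log-transformed T-cell-to-B-cell percentage ratio used in the study is given below; the percentages and the decision threshold are illustrative assumptions, not the study's control-derived values.

    ```python
    # Illustrative log(T:B) computation and threshold check (not the study's thresholds).
    import numpy as np

    def log_tb_ratio(pct_t, pct_b):
        """Log10 of the T-cell to B-cell percentage ratio for one sample site."""
        return np.log10(pct_t / pct_b)

    # hypothetical control-derived lower threshold: values below it suggest a B-cell excess
    threshold_low = -0.5

    sample = {"site": "spleen", "pct_t": 8.0, "pct_b": 70.0}
    ratio = log_tb_ratio(sample["pct_t"], sample["pct_b"])
    flagged = ratio < threshold_low
    print(round(float(ratio), 2), "flagged for B-cell lymphoma involvement:", flagged)
    ```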

  16. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  17. Simulation of rarefied low pressure RF plasma flow around the sample

    International Nuclear Information System (INIS)

    Zheltukhin, V S; Shemakhin, A Yu

    2017-01-01

    The paper describes a mathematical model of the flow of radio frequency plasma at low pressure. The hybrid mathematical model includes the Boltzmann equation for the neutral component of the RF plasma, and the continuity and thermal equations for the charged component. Initial and boundary conditions for the corresponding equations are described. The electron temperature in the calculations is 1-4 eV, the atom temperature in the plasma clot is (3-4)·10^3 K and in the plasma jet is (3.2-10)·10^2 K, the degree of ionization is 10^-7-10^-5, and the electron density is 10^15-10^19 m^-3. A software package for calculating the plasma parameters was developed in C++ using the OpenFOAM library. Simulations for the vacuum chamber in the presence of a sample and for the free jet flow were carried out. (paper)

  18. Robustness to non-normality of various tests for the one-sample location problem

    Directory of Open Access Journals (Sweden)

    Michelle K. McDougall

    2004-01-01

    Full Text Available This paper studies the effect of the normal distribution assumption on the power and size of the sign test, Wilcoxon's signed rank test and the t-test when used in one-sample location problems. Power functions for these tests under various skewness and kurtosis conditions are produced for several sample sizes from simulated data using the g-and-k distribution of MacGillivray and Cannon [5].
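
    A rough power simulation in the spirit of the study is sketched below, comparing the sign test, the Wilcoxon signed-rank test and the t-test under a skewed alternative; a shifted lognormal distribution is used as an illustrative stand-in for the g-and-k distributions, and the sample size, shift and significance level are assumptions.

    ```python
    # Rough power simulation for the one-sample location problem (illustrative settings).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, reps, alpha, shift = 30, 2000, 0.05, 0.3
    rejections = {"sign": 0, "wilcoxon": 0, "t": 0}

    for _ in range(reps):
        # skewed sample centred at zero mean, then shifted by the alternative
        x = rng.lognormal(mean=0.0, sigma=0.5, size=n) - np.exp(0.125) + shift
        k = int(np.sum(x > 0))                              # sign test via binomial test
        if stats.binomtest(k, n, 0.5).pvalue < alpha:
            rejections["sign"] += 1
        if stats.wilcoxon(x).pvalue < alpha:                # signed-rank test
            rejections["wilcoxon"] += 1
        if stats.ttest_1samp(x, 0.0).pvalue < alpha:        # one-sample t-test
            rejections["t"] += 1

    print({name: count / reps for name, count in rejections.items()})
    ```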

  19. Reynolds analogy for the Rayleigh problem at various flow modes.

    Science.gov (United States)

    Abramov, A A; Butkovskii, A V

    2016-07-01

    The Reynolds analogy and the extended Reynolds analogy for the Rayleigh problem are considered. For a viscous incompressible fluid we derive the Reynolds analogy as a function of the Prandtl number and the Eckert number. We show that for any positive Eckert number, the Reynolds analogy as a function of the Prandtl number has a maximum. For a monatomic gas in the transitional flow regime, using the direct simulation Monte Carlo method, we investigate the extended Reynolds analogy, i.e., the relation between the shear stress and the energy flux transferred to the boundary surface, at different velocities and temperatures. We find that the extended Reynolds analogy for a rarefied monatomic gas flow with the temperature of the undisturbed gas equal to the surface temperature depends weakly on time and is close to 0.5. We show that at any fixed dimensionless time the extended Reynolds analogy depends on the plate velocity and temperature and the undisturbed gas temperature mainly via the Eckert number. For Eckert numbers of the order of unity or less, we generalize the extended Reynolds analogy. The generalized Reynolds analogy depends mainly on dimensionless time for all considered Eckert numbers of the order of unity or less.

  20. SWIFT self-teaching curriculum. Illustrative problems to supplement the user's manual for the Sandia Waste-Isolation Flow and Transport model (SWIFT)

    International Nuclear Information System (INIS)

    Finley, N.C.; Reeves, M.

    1982-03-01

    This document contains a series of sample problems and solutions for the Sandia Waste-Isolation Flow and Transport (SWIFT) model developed at Sandia National Laboratories for the Risk Methodology for Geologic Disposal of Radioactive Waste Program (A-1192). With this document and the SWIFT User's Manual, the student may familiarize himself with the code, its capabilities and limitations. When the student has completed this curriculum, he or she should be able to prepare data input for SWIFT and have some insights into interpretation of the model output. This report represents one of a series of self-teaching curricula prepared under a technology transfer contract for the US Nuclear Regulatory Commission, Office of Nuclear Material Safety and Safeguards

  1. Installation of a flow cytometry facility and some applications in radiobiology

    International Nuclear Information System (INIS)

    Walsh, M.; Kellington, J.P.

    1988-01-01

    Flow cytometry has enormous potential in many areas of experimental pathology. Details of the installation and commissioning of a flow cytometer at the Harwell Laboratory are described. Following an explanation of the principles of flow cytometry, several applications to specific problems in radiobiology are discussed. Also included are results of some preliminary studies with the Harwell flow cytometer on samples such as blood, bone marrow, macrophages and cell cultures, and a discussion of future applications. (author)

  2. Counterbalancing hydrodynamic sample distortion effects increases resolution of free-flow zone electrophoresis.

    Science.gov (United States)

    Weber, G; Bauer, J

    1998-06-01

    On fractionation of highly heterogeneous protein mixtures, optimal resolution was achieved by forcing proteins to migrate through a preestablished pH gradient, until they entered a medium with a pH similar but not equal to their pIs. For this purpose, up to seven different media were pumped through the electrophoresis chamber so that they were flowing adjacently to each other, forming a pH gradient declining stepwise from the cathode to the anode. This gradient had a sufficiently strong band-focusing effect to counterbalance sample distortion effects of the flowing medium as proteins approached their isoelectric medium closer than 0.5 pH units. Continuous free-flow zone electrophoresis (FFZE) with high throughput capability was applicable if proteins did not precipitate or aggregate in these media. If components of heterogeneous protein mixtures had already started to precipitate or aggregate, in a medium with a pH exceeding their pI by more than 0.5 pH units, the application of interval modus and media forming flat pH gradients appeared advantageous.

  3. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of optimization. However, earlier studies have drawbacks because there are three phases in the optimization loop and empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. It is able to select points located in the feasible region with high model uncertainty as well as points along the boundary of the constraint at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because the sample points are not located within extremely small regions as in super-EGO. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
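
    For context, the classic expected-improvement infill criterion from EGO-type surrogate optimization is sketched below; the authors' united criterion, which additionally weighs a boundary sampling criterion via the mean squared error, is not reproduced, and the surrogate predictions are illustrative numbers.

    ```python
    # Classic expected-improvement infill criterion (illustrative surrogate predictions).
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, y_best):
        """EI at candidate points with surrogate mean mu and std sigma, given the best value."""
        sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
        z = (y_best - mu) / sigma
        return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    # hypothetical surrogate predictions at three candidate points
    mu = np.array([1.2, 0.9, 1.5])
    sigma = np.array([0.05, 0.30, 0.60])
    print(expected_improvement(mu, sigma, y_best=1.0))   # sample where EI is largest
    ```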

  4. A new quantum inspired chaotic artificial bee colony algorithm for optimal power flow problem

    International Nuclear Information System (INIS)

    Yuan, Xiaohui; Wang, Pengtao; Yuan, Yanbin; Huang, Yuehua; Zhang, Xiaopan

    2015-01-01

    Highlights: • Quantum theory is introduced to the artificial bee colony algorithm (ABC) to increase population diversity. • A chaotic local search operator is used to enhance the local search ability of ABC. • The quantum inspired chaotic ABC method (QCABC) is proposed to solve optimal power flow. • The feasibility and effectiveness of the proposed QCABC is verified by examples. - Abstract: This paper proposes a new artificial bee colony algorithm with quantum theory and the chaotic local search strategy (QCABC), and uses it to solve the optimal power flow (OPF) problem. Under quantum computing theory, the QCABC algorithm encodes each individual with quantum bits to form a corresponding quantum bit string. By determining each quantum bit's value, we can get the value of the individual. After the scout bee stage of the artificial bee colony algorithm, we begin the chaotic local search in the vicinity of the best individual found so far. Finally, the quantum rotation gate is used to process each quantum bit so that all individuals can update toward the direction of the best individual. The QCABC algorithm is carried out to deal with the OPF problem in the IEEE 30-bus and IEEE 118-bus standard test systems. The results of the QCABC algorithm are compared with those of other algorithms (artificial bee colony algorithm, genetic algorithm, particle swarm optimization algorithm). The comparison shows that the QCABC algorithm can effectively solve the OPF problem and obtain better optimal results than the other algorithms

  5. Flow regimes

    International Nuclear Information System (INIS)

    Kh'yuitt, G.

    1980-01-01

    An introduction to the problem of two-phase flows is presented. Flow regimes arising in two-phase flows are described, and a classification of these regimes is given. Structures of vertical and horizontal two-phase flows and a method of their identification using regime maps are considered. The limits of this method's application are discussed. The flooding phenomenon, flow reversal, and the interrelation of these phenomena, as well as the transitions from the slug regime to the churn regime and from the churn regime to the annular regime in vertical flows, are described. Problems of phase transitions and equilibrium are discussed. Flow regimes in tubes carrying evaporating liquid are described [ru]

  6. Self-recognition of mental health problems in a rural Australian sample.

    Science.gov (United States)

    Handley, Tonelle E; Lewin, Terry J; Perkins, David; Kelly, Brian

    2018-04-19

    Although mental health literacy has increased in recent years, mental illness is often under-recognised. There has been little research conducted on mental illness in rural areas; however, under-recognition can be most prominent in rural areas due to factors such as greater stigma and stoicism. The aim of this study is to create a profile of those who are most and least likely to self-identify mental health problems among rural residents with moderate-to-high psychological distress. Secondary analysis of a longitudinal postal survey. Rural and remote New South Wales, Australia. Four-hundred-and-seventy-two community residents. Participants completed the K10 Psychological Distress Scale, as well as the question 'In the past 12 months have you experienced any mental health problems?' The characteristics of those who reported moderate/high distress scores were explored by comparing those who did and did not experience mental health problems recently. Of the 472 participants, 319 (68%) with moderate/high distress reported a mental health problem. Reporting a mental health problem was higher among those with recent adverse life events or who perceived more stress from life events, while it was lower among those who attributed their symptoms to a physical cause. Among a rural sample with moderate/high distress, one-third did not report a mental health problem. Results suggest a threshold effect, whereby mental health problems are more likely to be acknowledged in the context of additional life events. Ongoing public health campaigns are necessary to ensure that symptoms of mental illness are recognised in the multiple forms that they take. © 2018 National Rural Health Alliance Ltd.

  7. Air-segmented continuous-flow analysis for molybdenum in various geochemical samples

    International Nuclear Information System (INIS)

    Harita, Y.; Sugiyama, M.; Hori, T.

    2003-01-01

    An air-segmented continuous-flow method has been developed for the determination of molybdenum at ultra-trace levels using the catalytic effect of molybdate during the oxidation of L-ascorbic acid by hydrogen peroxide. Incorporation of an on-line ion exchange column improved the tolerance limit for various ions. The detection limits with and without the column were 64 pmol L^-1 and 17 pmol L^-1, and the reproducibilities at 10 nmol L^-1 were 2.1 % and 0.2 %, respectively. The proposed method was applied to the determination of molybdenum in seawater and lake water as well as in rock and sediment samples. To our knowledge, this method has the highest sensitivity in the available literature, and it is also convenient for routine analysis of molybdenum in various natural samples. (author)

  8. Solving phase appearance/disappearance two-phase flow problems with high resolution staggered grid and fully implicit schemes by the Jacobian-free Newton–Krylov Method

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-04-01

    The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to our best knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve the two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton-Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested with a few two-phase flow problems with phase appearance/disappearance phenomena considered, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behaviors in solving the two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. The high-resolution spatial discretization and the second-order fully implicit method also demonstrated their capabilities in significantly reducing numerical errors.
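
    A miniature Jacobian-free Newton-Krylov solve is sketched below using SciPy's newton_krylov, which approximates Jacobian-vector products by finite differences so that no analytical Jacobian is needed; the residual is a toy stand-in, not the paper's discretized two-phase system.

    ```python
    # Jacobian-free Newton-Krylov in miniature (toy residual, not the paper's equations).
    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(u):
        # a small nonlinear "transport-like" system: upwind differences plus a quadratic sink
        r = np.empty_like(u)
        r[0] = u[0] - 1.0                        # inlet boundary condition
        r[1:] = u[1:] - u[:-1] + 0.1 * u[1:]**2  # steady advection with a sink term
        return r

    u0 = np.ones(50)                             # initial guess
    u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-10)
    print(u[:5])
    ```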

  9. Efficient bounding schemes for the two-center hybrid flow shop scheduling problem with removal times.

    Science.gov (United States)

    Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly

    2014-01-01

    We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The removal time of a job is the duration required to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures.

  10. Solving the minimum flow problem with interval bounds and flows

    Indian Academy of Sciences (India)

    ... with crisp data. In this paper, the idea of Ghiyasvand was extended for solving the minimum flow problem with interval-valued lower, upper bounds and flows. This problem can be solved using two minimum flow problems with crisp data. Then, this result is extended to networks with fuzzy lower, upper bounds and flows.

  11. Complex variable boundary elements for fluid flow; Robni elementi kompleksne spremenljivke za pretok fluidov

    Energy Technology Data Exchange (ETDEWEB)

    Bizjak, D; Alujevic, A [Institut 'Jozef Stefan', Ljubljana (Yugoslavia)]

    1988-07-01

    The Complex Variable Boundary Element Method is a numerical method for solving two-dimensional problems of Laplace or Poisson type. It is based on the theory of analytic functions. This paper summarizes the basic facts about the method. The method is then applied to stationary incompressible irrotational flow. Finally, a sample problem of flow through a channel with an abrupt area change is shown. (author)

  12. Nested sparse grid collocation method with delay and transformation for subsurface flow and transport problems

    Science.gov (United States)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-06-01

    In numerical modeling of subsurface flow and transport problems, formation properties may not be deterministically characterized, which leads to uncertainty in simulation results. In this study, we propose a sparse grid collocation method, which adopts nested quadrature rules with delay and transformation to quantify the uncertainty of model solutions. We show that the nested Kronrod-Patterson-Hermite quadrature is more efficient than the unnested Gauss-Hermite quadrature. We compare the convergence rates of various quadrature rules including the domain truncation and domain mapping approaches. To further improve accuracy and efficiency, we present a delayed process in selecting quadrature nodes and a transformed process for approximating unsmooth or discontinuous solutions. The proposed method is tested by an analytical function and in one-dimensional single-phase and two-phase flow problems with different spatial variances and correlation lengths. An additional example is given to demonstrate its applicability to three-dimensional black-oil models. It is found from these examples that the proposed method provides a promising approach for obtaining satisfactory estimation of the solution statistics and is much more efficient than the Monte-Carlo simulations.
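
    As a minimal illustration of quadrature-based uncertainty propagation, the sketch below estimates the mean and variance of a model output with a single standard-normal input parameter using plain (unnested) Gauss-Hermite quadrature; the nested Kronrod-Patterson rules and the delay/transformation steps of the paper are not reproduced, and the model is an arbitrary smooth stand-in.

    ```python
    # Gauss-Hermite quadrature estimate of output statistics (illustrative model).
    import numpy as np

    def model(xi):
        # hypothetical smooth solution functional of one standardized random parameter
        return np.exp(0.3 * xi) / (1.0 + 0.1 * xi**2)

    nodes, weights = np.polynomial.hermite_e.hermegauss(9)  # probabilists' Hermite rule
    weights = weights / np.sqrt(2.0 * np.pi)                # normalize to the N(0,1) density

    vals = model(nodes)
    mean = np.sum(weights * vals)
    var = np.sum(weights * vals**2) - mean**2
    print(mean, var)

    # Monte Carlo check that needs many more model evaluations
    rng = np.random.default_rng(0)
    print(model(rng.standard_normal(200_000)).mean())
    ```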

  13. Adaptive Sampling for Nonlinear Dimensionality Reduction Based on Manifold Learning

    DEFF Research Database (Denmark)

    Franz, Thomas; Zimmermann, Ralf; Goertz, Stefan

    2017-01-01

    We make use of the non-intrusive dimensionality reduction method Isomap in order to emulate nonlinear parametric flow problems that are governed by the Reynolds-averaged Navier-Stokes equations. Isomap is a manifold learning approach that provides a low-dimensional embedding space that is approxi...... to detect and fill up gaps in the sampling in the embedding space. The performance of the proposed manifold filling method will be illustrated by numerical experiments, where we consider nonlinear parameter-dependent steady-state Navier-Stokes flows in the transonic regime.
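
    A minimal Isomap usage in this spirit is sketched below: a low-dimensional embedding is learned from synthetic high-dimensional "snapshots"; the data, the neighbourhood size and the embedding dimension are illustrative assumptions, and the gap-detection and manifold-filling steps of the paper are not reproduced.

    ```python
    # Learn a low-dimensional embedding of synthetic snapshot data with Isomap.
    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(0)
    params = rng.uniform(0.0, 1.0, size=(80, 2))   # e.g. Mach number, angle of attack (assumed)
    x = np.linspace(0.0, 1.0, 200)
    snapshots = np.array([np.sin(2 * np.pi * (x - p[0])) * (1 + p[1] * x)
                          for p in params])         # synthetic nonlinear responses

    embedding = Isomap(n_neighbors=8, n_components=2).fit_transform(snapshots)
    print(embedding.shape)   # (80, 2): one low-dimensional coordinate pair per snapshot
    ```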

  14. Mapping cell populations in flow cytometry data for cross‐sample comparison using the Friedman–Rafsky test statistic as a distance measure

    Science.gov (United States)

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu

    2015-01-01

    Abstract Flow cytometry (FCM) is a fluorescence‐based single‐cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap‐FR, a novel method for cell population mapping across FCM samples. FlowMap‐FR is based on the Friedman–Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap‐FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap‐FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap‐FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap‐FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap‐FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback–Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL‐distance in distinguishing

  15. Mapping cell populations in flow cytometry data for cross-sample comparison using the Friedman-Rafsky test statistic as a distance measure.

    Science.gov (United States)

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu; Scheuermann, Richard H

    2016-01-01

    Flow cytometry (FCM) is a fluorescence-based single-cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap-FR, a novel method for cell population mapping across FCM samples. FlowMap-FR is based on the Friedman-Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap-FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap-FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap-FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap-FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap-FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback-Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL-distance in distinguishing equivalent from nonequivalent cell
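
    The core of the Friedman-Rafsky idea can be sketched in a few lines: pool two samples, build a minimum spanning tree on the pooled points, and count the MST edges that join points from different samples (a low count suggests the underlying distributions differ). The synthetic three-marker data below are an illustrative assumption, and the normalization and population-mapping logic of FlowMap-FR itself is not reproduced.

    ```python
    # Bare Friedman-Rafsky cross-sample edge count on a minimum spanning tree.
    import numpy as np
    from scipy.spatial.distance import cdist
    from scipy.sparse.csgraph import minimum_spanning_tree

    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, size=(150, 3))   # "cell population" A, 3 markers (synthetic)
    b = rng.normal(0.3, 1.0, size=(150, 3))   # population B, slightly shifted

    pooled = np.vstack([a, b])
    labels = np.array([0] * len(a) + [1] * len(b))

    mst = minimum_spanning_tree(cdist(pooled, pooled)).tocoo()
    cross_edges = np.sum(labels[mst.row] != labels[mst.col])
    print("MST edges joining different samples:", int(cross_edges), "of", mst.nnz)
    ```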

  16. Non-erotic thoughts, attentional focus, and sexual problems in a community sample.

    Science.gov (United States)

    Nelson, Andrea L; Purdon, Christine

    2011-04-01

    According to Barlow's model of sexual dysfunction, anxiety in sexual situations leads to attentional focus on sexual performance at the expense of erotic cues, which compromises sexual arousal. This negative experience will enhance anxiety in future sexual situations, and non-erotic thoughts (NETs) relevant to performance will receive attentional priority. Previous research with student samples (Purdon & Holdaway, 2006; Purdon & Watson, 2010) has found that people experience many types of NETs in addition to performance-relevant thoughts, and that, consistent with Barlow's model, the frequency of and anxiety evoked by these thoughts is positively associated with sexual problems. Extending this previous work, the current study found that, in a community sample of women (N = 81) and men (N = 72) in long-term relationships, women were more likely to report body image concerns and external consequences of the sexual activity, while men were more likely to report performance-related concerns. Equally likely among men and women were thoughts about emotional consequences of the sexual activity. Regardless of thought content, experiencing more frequent NETs was associated with more sexual problems in both women and men. Moreover, as per Barlow's model, greater negative affect in anticipation of and during sexual activity predicted greater frequency of NETs and greater anxiety in response to NETs was associated with greater difficulty dismissing the thoughts. However, greater difficulty in refocusing on erotic thoughts during sexual activity uniquely predicted more sexual problems above the frequency and dismissability of NETs. Together, these data support the cognitive interference mechanism implicated by Barlow's causal model of sexual dysfunction and have implications for the treatment of sexual problems.

  17. Phase identification of quasi-periodic flow measured by particle image velocimetry with a low sampling rate

    International Nuclear Information System (INIS)

    Pan, Chong; Wang, Hongping; Wang, Jinjun

    2013-01-01

    This work mainly deals with the proper orthogonal decomposition (POD) time coefficient method used for extracting phase information from quasi-periodic flow. The mathematical equivalence between this method and the traditional cross-correlation method is first proved. A two-dimensional circular cylinder wake flow measured by time-resolved particle image velocimetry within a range of Reynolds numbers is then used to evaluate the reliability of this method. The effect of both the sampling rate and the Reynolds number on the identification accuracy is finally discussed. It is found that the POD time coefficient method provides a convenient alternative for phase identification, whose feasibility in low-sampling-rate measurement has additional advantages for experimentalists. (paper)
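
    A sketch of phase identification from POD time coefficients is given below, under the common assumption that the first two coefficients of a periodic wake trace a near-circular orbit, so the phase of each snapshot is atan2(a2, a1); the synthetic snapshot matrix stands in for PIV data, and the paper's exact procedure and its cross-correlation comparison are not reproduced.

    ```python
    # Phase from the first two POD time coefficients of synthetic periodic snapshots.
    import numpy as np

    rng = np.random.default_rng(0)
    n_space, n_snap = 400, 120
    t = np.linspace(0.0, 6.0 * np.pi, n_snap)                 # roughly three shedding cycles
    phi1 = rng.standard_normal(n_space)                        # synthetic spatial structures
    phi2 = rng.standard_normal(n_space)
    snapshots = (np.outer(phi1, np.cos(t)) + np.outer(phi2, np.sin(t))
                 + 0.05 * rng.standard_normal((n_space, n_snap)))

    fluc = snapshots - snapshots.mean(axis=1, keepdims=True)   # remove the mean field
    U, s, Vt = np.linalg.svd(fluc, full_matrices=False)        # POD via SVD
    a1, a2 = s[0] * Vt[0], s[1] * Vt[1]                        # leading time coefficients

    phase = np.arctan2(a2, a1)                                 # phase angle of each snapshot
    print(np.round(phase[:5], 2))
    ```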

  18. Adaptive solution of some steady-state fluid-structure interaction problems

    International Nuclear Information System (INIS)

    Etienne, S.; Pelletier, D.

    2003-01-01

    This paper presents a general integrated and coupled formulation for modeling the steady-state interaction of a viscous incompressible flow with an elastic structure undergoing large displacements (geometric non-linearities). This constitutes an initial step towards developing a sensitivity analysis formulation for this class of problems. The formulation uses velocity and pressures as unknowns in a flow domain and displacements in the structural components. An interface formulation is presented that leads to clear and simple finite element implementation of the equilibrium conditions at the fluid-solid interface. Issues of error estimation and mesh adaptation are discussed. The adaptive formulation is verified on a problem with a closed form solution. It is then applied to a sample case for which the structure undergoes large displacements induced by the flow. (author)

  19. Material flow analysis of NdFeB magnets for Denmark: a comprehensive waste flow sampling and analysis approach.

    Science.gov (United States)

    Habib, Komal; Schibye, Peter Klausen; Vestbø, Andreas Peter; Dall, Ole; Wenzel, Henrik

    2014-10-21

    Neodymium-iron-boron (NdFeB) magnets have become highly desirable for modern hi-tech applications. These magnets, in general, contain two key rare earth elements (REEs), i.e., neodymium (Nd) and dysprosium (Dy), which are responsible for the very high strength of these magnets, allowing for considerable size and weight reduction in modern applications. This study aims to explore the current and future potential of a secondary supply of neodymium and dysprosium from recycling of NdFeB magnets. For this purpose, material flow analysis (MFA) has been carried out to perform the detailed mapping of stocks and flows of NdFeB magnets in Denmark. A novel element of this study is the value added to the traditionally practiced MFAs at national and/or global levels by complementing them with a comprehensive sampling and elemental analysis of NdFeB magnets, taken out from a sample of 157 different products representing 18 various product types. The results show that the current amount of neodymium and dysprosium in NdFeB magnets present in the Danish waste stream is only 3 and 0.2 Mg, respectively. However, this number is estimated to increase to 175 Mg of neodymium and 11.4 Mg of dysprosium by 2035. Nevertheless, efficient recovery of these elements from a very diverse electronic waste stream remains a logistic and economic challenge.

  20. Sampling problems and the determination of mercury in surface water, seawater, and air

    International Nuclear Information System (INIS)

    Das, H.A.; van der Sloot, H.A.

    1976-01-01

    Analysis of surface water for mercury comprises the determination of both ionic and organically bound mercury in solution and that of the total mercury content of the suspended matter. Eventually, metallic mercury has to be determined as well. Requirements for the sampling procedure are given. A method for the routine determination of mercury in surface water and seawater was developed and applied to Dutch surface waters. The total sample volume is 2500 ml. About 500 ml is used for the determination of the content of suspended matter and the total amount of mercury in the water. The sample is filtered through a bed of previously purified active charcoal at a low flow-rate. The main portion (ca. 2000 ml) passes a flow-through centrifuge to separate the solid fraction. One liter is used to separate "inorganic" mercury by reduction, volatilization in an airstream and adsorption on active charcoal. The other liter is led through a column of active charcoal to collect all mercury. The procedures were checked with a ¹⁹⁷Hg radiotracer, both as an ion and incorporated in organic compounds. The mercury is determined by thermal neutron activation, followed by volatilization in a tube furnace and adsorption on a fresh carbon bed. The limit of determination is approximately 1 ng l⁻¹. The rate of desorption from and adsorption on suspended material has been measured as a function of the pH of the solution for Hg²⁺ and various other ions. It can be concluded that only the procedure mentioned above does not disturb the equilibrium. The separation of mercury from air is obtained by suction of 1 m³ through a 0.22 μm filter and a charcoal bed. The determination is then performed as in the case of the water samples.

  1. Flow area optimization in point to area or area to point flows

    International Nuclear Information System (INIS)

    Ghodoossi, Lotfollah; Egrican, Niluefer

    2003-01-01

    This paper deals with the constructal theory of generation of shape and structure in flow systems connecting one point to a finite-size area. The flow direction may be either from the point to the area or from the area to the point. The formulation of the problem remains the same if the flow direction is reversed. Two models are used in the optimization of the point to area or area to point flow problem: cost minimization and revenue maximization. The cost minimization model enables one to predict the shape of the optimized flow areas, but the geometric sizes of the flow areas are not predictable. For example, if the area of flow is a rectangle of fixed size, optimization of the point to area or area to point flow problem using the cost minimization model will only predict the height/length ratio of the rectangle, not the height and length themselves. By using the revenue maximization model in the optimization of the flow problems, all optimized geometric aspects of the flow areas of interest are derived as well. The aim of this paper is to optimize the point to area or area to point flow problems for various elemental flow area shapes and various structures of the flow system (various combinations of elemental flow areas) by using the revenue maximization model. The elemental flow area shapes used in this paper are either rectangular or triangular. The forms of the flow area structure, made up of an assembly of optimized elemental flow areas to obtain bigger flow areas, are rectangle-in-rectangle, rectangle-in-triangle, triangle-in-triangle and triangle-in-rectangle. The global maximum revenue, the revenue collected per unit flow area, and the shape and sizes of each flow area structure have been derived under optimized conditions. The results for each flow area structure have been compared with the results of the other structures to determine the structure that provides better performance. The conclusion is that the rectangle-in-triangle flow area structure

  2. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  3. Direct sampling during multiple sediment density flows reveals dynamic sediment transport and depositional environment in Monterey submarine canyon

    Science.gov (United States)

    Maier, K. L.; Gales, J. A.; Paull, C. K.; Gwiazda, R.; Rosenberger, K. J.; McGann, M.; Lundsten, E. M.; Anderson, K.; Talling, P.; Xu, J.; Parsons, D. R.; Barry, J.; Simmons, S.; Clare, M. A.; Carvajal, C.; Wolfson-Schwehr, M.; Sumner, E.; Cartigny, M.

    2017-12-01

    Sediment density flows were directly sampled with a coupled sediment trap-ADCP-instrument mooring array to evaluate the character and frequency of turbidity current events through Monterey Canyon, offshore California. This novel experiment aimed to provide links between globally significant sediment density flow processes and their resulting deposits. Eight to ten Anderson sediment traps were repeatedly deployed at 10 to 300 meters above the seafloor on six moorings anchored at 290 to 1850 meters water depth in the Monterey Canyon axial channel during 6-month deployments (October 2015 - April 2017). Anderson sediment traps include a funnel and intervalometer (discs released at set time intervals) above a meter-long tube, which preserves fine-scale stratigraphy and chronology. Photographs, multi-sensor logs, CT scans, and grain size analyses reveal layers from multiple sediment density flow events that carried sediment ranging from fine sand to granules. More sediment accumulation from sediment density flows, and from between flows, occurred in the upper canyon (300-800 m water depth) compared to the lower canyon (1300-1850 m water depth). Sediment accumulated in the traps during sediment density flows is sandy and becomes finer down-canyon. In the lower canyon, where sediment directly sampled from density flows is clearly distinguished within the trap tubes, sands have sharp basal contacts, normal grading, and muddy tops that exhibit late-stage pulses. In at least two of the sediment density flows, the simultaneous low velocity and high backscatter measured by the ADCPs suggest that the trap only captured the collapsing end of a sediment density flow event. In the upper canyon, accumulation between sediment density flow events is twice as fast as in the lower canyon; it is characterized by sub-cm-scale layers in muddy sediment that appear to have accumulated with daily to sub-daily frequency, likely related to known internal tidal dynamics also measured

  4. Solving global problem by considering multitude of local problems: Application to fluid flow in anisotropic porous media using the multipoint flux approximation

    KAUST Repository

    Salama, Amgad; Sun, Shuyu; Wheeler, Mary Fanett

    2014-01-01

    In this work we apply the experimenting pressure field approach to the numerical solution of the single-phase flow problem in anisotropic porous media using the multipoint flux approximation. We apply this method to the problem of flow in saturated anisotropic porous media. In anisotropic media the component flux representation generally requires multiple pressure values in neighboring cells (e.g., six pressure values of the neighboring cells are required in two-dimensional rectangular meshes). This results in the need for a nine-point stencil for the discretized pressure equation (a 27-point stencil in three-dimensional rectangular meshes). The coefficients associated with the discretized pressure equation are complex and require lengthy expressions, which makes their implementation prone to errors. In the experimenting pressure field technique, the matrix of coefficients is generated automatically within the solver. A set of predefined pressure fields is applied over the domain, from which the corresponding velocity fields are obtained. Such velocity fields do not, in general, satisfy the mass conservation equations entailed by the source/sink term and boundary conditions, and the residual is calculated from this mismatch. In this method the experimenting pressure fields are designed such that the residual reduces to the coefficients of the pressure equation matrix. © 2014 Elsevier B.V. All rights reserved.
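
    A minimal sketch of the general "experimenting field" idea, under assumed names and with a stand-in residual operator (a 1D diffusion stencil rather than the paper's MPFA discretization): probing a black-box residual r(p) = A·p - b with the zero field and with unit pressure fields recovers b and the columns of A automatically.

      import numpy as np

      def assemble_by_probing(residual, n):
          # Recover A and b of A p = b from a black-box residual r(p) = A p - b
          # by "experimenting" with predefined pressure fields.
          b = -residual(np.zeros(n))            # the zero field isolates the right-hand side
          A = np.zeros((n, n))
          for j in range(n):
              e = np.zeros(n)
              e[j] = 1.0                        # unit pressure in cell j, zero elsewhere
              A[:, j] = residual(e) + b         # column j of the coefficient matrix
          return A, b

      def residual(p):
          # Illustrative stand-in: a 1D diffusion stencil with a unit source in cell 0
          # (the paper's residual would come from the MPFA velocity/mass-balance computation).
          n = p.size
          A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
          rhs = np.zeros(n)
          rhs[0] = 1.0
          return A @ p - rhs

      A, b = assemble_by_probing(residual, 10)
      pressure = np.linalg.solve(A, b)          # pressure field from the recovered system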

  5. Solving global problem by considering multitude of local problems: Application to fluid flow in anisotropic porous media using the multipoint flux approximation

    KAUST Repository

    Salama, Amgad

    2014-09-01

    In this work we apply the experimenting pressure field approach to the numerical solution of the single-phase flow problem in anisotropic porous media using the multipoint flux approximation. We apply this method to the problem of flow in saturated anisotropic porous media. In anisotropic media the component flux representation generally requires multiple pressure values in neighboring cells (e.g., six pressure values of the neighboring cells are required in two-dimensional rectangular meshes). This results in the need for a nine-point stencil for the discretized pressure equation (a 27-point stencil in three-dimensional rectangular meshes). The coefficients associated with the discretized pressure equation are complex and require lengthy expressions, which makes their implementation prone to errors. In the experimenting pressure field technique, the matrix of coefficients is generated automatically within the solver. A set of predefined pressure fields is applied over the domain, from which the corresponding velocity fields are obtained. Such velocity fields do not, in general, satisfy the mass conservation equations entailed by the source/sink term and boundary conditions, and the residual is calculated from this mismatch. In this method the experimenting pressure fields are designed such that the residual reduces to the coefficients of the pressure equation matrix. © 2014 Elsevier B.V. All rights reserved.

  6. Recent bibliography on analytical and sampling problems of a PWR primary coolant Suppl. 4

    International Nuclear Information System (INIS)

    Illy, H.

    1986-09-01

    This fourth supplement of a bibliographical series on the analytical and sampling problems of the primary coolant of PWR-type reactors covers the literature from 1985 up to July 1986 (220 items). References are listed according to the following topics: boric acid; chloride, chlorine; general; hydrogen isotopes; iodine; iodide; noble gases; oxygen; other elements; radiation monitoring; reactor safety; sampling; water chemistry. (V.N.)

  7. A novel flow injection chemiluminescence method for automated and miniaturized determination of phenols in smoked food samples.

    Science.gov (United States)

    Vakh, Christina; Evdokimova, Ekaterina; Pochivalov, Aleksei; Moskvin, Leonid; Bulatov, Andrey

    2017-12-15

    An easily performed, fully automated and miniaturized flow injection chemiluminescence (CL) method for the determination of phenols in smoked food samples is proposed. This method includes ultrasound-assisted solid-liquid extraction coupled with gas-diffusion separation of phenols from the smoked food sample and analyte absorption into a NaOH solution in a specially designed gas-diffusion cell. The flow system was designed to focus on automation and miniaturization with minimal sample and reagent consumption by inexpensive instrumentation. The luminol - N-bromosuccinimide system in an alkaline medium was used for the CL determination of phenols. The limit of detection of the proposed procedure was 3·10⁻⁸ mol L⁻¹ (0.01 mg kg⁻¹) in terms of phenol. The presented method proved to be a good tool for easy, rapid and cost-effective point-of-need screening of phenols in smoked food samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Traffic Flow Optimization Using a Quantum Annealer

    Directory of Open Access Journals (Sweden)

    Florian Neukart

    2017-12-01

    Quantum annealing algorithms belong to the class of metaheuristic tools, applicable for solving binary optimization problems. Hardware implementations of quantum annealing, such as the quantum processing units (QPUs) produced by D-Wave Systems, have been subject to multiple analyses in research, with the aim of characterizing the technology’s usefulness for optimization and sampling tasks. In this paper, we present a real-world application that uses quantum technologies. Specifically, we show how to map certain parts of a real-world traffic flow optimization problem to be suitable for quantum annealing. We show that time-critical optimization tasks, such as continuous redistribution of position data for cars in dense road networks, are suitable candidates for quantum computing. Due to the limited size and connectivity of current-generation D-Wave QPUs, we use a hybrid quantum and classical approach to solve the traffic flow problem.
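
    The sketch below illustrates, on an invented two-car example, the kind of QUBO mapping described above: binary variables select one candidate route per car, shared road segments contribute a congestion penalty, and a quadratic penalty enforces the one-route-per-car constraint; a brute-force search stands in for the quantum annealer.

      import itertools
      import numpy as np

      # Candidate routes per car, each a set of road segments (invented data).
      routes = {0: [{"s1", "s2"}, {"s3"}],
                1: [{"s1", "s4"}, {"s3", "s5"}]}
      variables = [(car, j) for car in routes for j in range(len(routes[car]))]
      idx = {v: q for q, v in enumerate(variables)}
      n = len(variables)

      LAMBDA = 4.0                              # penalty weight for the one-route-per-car constraint
      Q = np.zeros((n, n))
      # Congestion: two assignments of different cars that share segments raise the cost.
      for a, b in itertools.combinations(variables, 2):
          if a[0] != b[0]:
              Q[idx[a], idx[b]] += len(routes[a[0]][a[1]] & routes[b[0]][b[1]])
      # Constraint (sum_j x[i,j] - 1)^2 expanded into linear (diagonal) and quadratic entries.
      for car, opts in routes.items():
          for j in range(len(opts)):
              Q[idx[(car, j)], idx[(car, j)]] -= LAMBDA
              for m in range(j + 1, len(opts)):
                  Q[idx[(car, j)], idx[(car, m)]] += 2 * LAMBDA

      # Brute-force minimisation stands in for the quantum annealer on this toy instance.
      best = min(itertools.product([0, 1], repeat=n),
                 key=lambda x: np.array(x) @ Q @ np.array(x))
      assignment = [v for v, bit in zip(variables, best) if bit]   # chosen (car, route) pairs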

  9. Microwave-Assisted Sample Treatment in a Fully Automated Flow-Based Instrument: Oxidation of Reduced Technetium Species in the Analysis of Total Technetium-99 in Caustic Aged Nuclear Waste Samples

    International Nuclear Information System (INIS)

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.

    2004-01-01

    An automated flow-based instrument for microwave-assisted treatment of liquid samples has been developed and characterized. The instrument utilizes a flow-through reaction vessel design that facilitates the addition of multiple reagents during sample treatment and the removal of gaseous reaction products, and enables quantitative removal of liquids from the reaction vessel for carryover-free operation. Matrix modification and speciation control chemistries required for the radiochemical determination of total 99Tc in caustic aged nuclear waste samples have been investigated. A rapid and quantitative oxidation procedure using peroxydisulfate in acidic solution was developed to convert reduced technetium species to pertechnetate in samples with a high content of reducing organics. The effectiveness of the automated sample treatment procedures has been validated in the radiochemical analysis of total 99Tc in caustic aged nuclear waste matrixes from the Hanford site

  10. An improved sheep flock heredity algorithm for job shop scheduling and flow shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Chandramouli Anandaraman

    2011-10-01

    The Job Shop Scheduling Problem (JSSP) and the Flow Shop Scheduling Problem (FSSP) are strongly NP-complete combinatorial optimization problems among the class of typical production scheduling problems. An improved Sheep Flock Heredity Algorithm (ISFHA) is proposed in this paper to find a schedule of operations that minimizes the makespan. In ISFHA, the pairwise mutation operation is replaced by a single-point mutation process with a probabilistic property, which guarantees the feasibility of the solutions in the local search domain. A Robust-Replace (R-R) heuristic is introduced in place of chromosomal crossover to enhance the global search and to improve convergence. The R-R heuristic is found to enhance the exploring potential of the algorithm and enrich the diversity of neighborhoods. Experimental results reveal the effectiveness of the proposed algorithm, whose optimization performance is markedly superior to that of genetic algorithms and is comparable to the best results reported in the literature.
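
    As a generic illustration (not the paper's exact ISFHA operators), the sketch below shows a feasibility-preserving single-point mutation on a permutation-encoded flow shop schedule together with a standard makespan evaluation; the processing times and the greedy acceptance rule are assumptions made for the example.

      import random

      def makespan(sequence, proc_times):
          # Completion time of the last job on the last machine in a permutation flow shop;
          # proc_times[j][m] is the processing time of job j on machine m.
          n_machines = len(proc_times[0])
          finish = [0.0] * n_machines
          for job in sequence:
              for m in range(n_machines):
                  start = max(finish[m], finish[m - 1] if m else 0.0)
                  finish[m] = start + proc_times[job][m]
          return finish[-1]

      def single_point_mutation(sequence, prob=0.2):
          # Swap one randomly chosen pair of positions with probability `prob`;
          # a swap always yields another valid permutation, so feasibility is preserved.
          seq = list(sequence)
          if random.random() < prob:
              i, j = random.sample(range(len(seq)), 2)
              seq[i], seq[j] = seq[j], seq[i]
          return seq

      proc = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [1, 3, 2]]    # 4 jobs x 3 machines (toy data)
      current = list(range(len(proc)))
      candidate = single_point_mutation(current)
      if makespan(candidate, proc) <= makespan(current, proc):   # simple local acceptance
          current = candidate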

  11. The `Henry Problem' of `density-driven' groundwater flow versus Tothian `groundwater flow systems' with variable density: A review of the influential Biscayne aquifer data.

    Science.gov (United States)

    Weyer, K. U.

    2017-12-01

    Coastal groundwater flow investigations at Biscayne Bay, south of Miami, Florida, gave rise to the concept of density-driven flow of seawater into coastal aquifers creating a saltwater wedge. Within that wedge, convection-driven return flow of seawater and a dispersion zone were assumed by Cooper et al. (1964) to be the cause of the Biscayne aquifer 'sea water wedge'. This conclusion was based on the chloride distribution within the aquifer and on an analytical model concept assuming convection flow within a confined aquifer, without taking non-chemical field data into consideration. This concept was later labelled the 'Henry Problem', which any numerical variable-density flow program must be able to simulate to be considered acceptable. Both 'density-driven flow' and Tothian 'groundwater flow systems' (with or without variable density conditions) are driven by gravitation. The difference between the two is the boundary conditions: 'density-driven flow' occurs under hydrostatic boundary conditions, while Tothian 'groundwater flow systems' occur under hydrodynamic boundary conditions. Revisiting the Cooper et al. (1964) publication with its record of piezometric field data (heads) showed that the so-called seawater wedge has been caused by discharging deep saline groundwater driven by gravitational flow and not by denser seawater. Density-driven flow of seawater into the aquifer was not reflected in the head measurements for low- and high-tide conditions, which had been taken contemporaneously with the chloride measurements. These head measurements had not been included in the flow interpretation. The very same head measurements indicated a clear dividing line between shallow local fresh groundwater flow and saline deep groundwater flow, without the existence of a dispersion zone or a convection cell. The Biscayne situation emphasizes the need for any chemical interpretation of flow patterns to be supported by head data as energy indicators of flow fields

  12. Sample problems for the novice user of the AMPX-II system

    International Nuclear Information System (INIS)

    Ford, W.E. III; Roussin, R.W.; Petrie, L.M.; Diggs, B.R.; Comolander, H.E.

    1979-01-01

    Contents of the IBM version of the AMPX system distributed by the Radiation Shielding Information Center (AMPX-II) are described. Sample problems are detailed which demonstrate the procedures for implementing AMPX-II modules to generate point cross sections; generate multigroup neutron, photon production, and photon interaction cross sections for various transport codes; collapse multigroup cross sections; check, edit, and punch multigroup cross sections; and execute a one-dimensional discrete ordinates transport calculation. 25 figures, 9 tables

  13. A direct sampling method to an inverse medium scattering problem

    KAUST Repository

    Ito, Kazufumi

    2012-01-10

    In this work we present a novel sampling method for time-harmonic inverse medium scattering problems. It provides a simple tool to directly estimate the shape of the unknown scatterers (inhomogeneous media), and it is applicable even when the measured data are available for only one or two incident directions. A mathematical derivation is provided for its validation. Two- and three-dimensional numerical simulations are presented, which show that the method is accurate even with a few sets of scattered field data, computationally efficient, and very robust with respect to noise in the data. © 2012 IOP Publishing Ltd.
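
    A hedged sketch of a direct-sampling-type indicator in 2D: the scattered data recorded at the receivers are correlated with the Helmholtz fundamental solution centered at each sampling point, and large indicator values flag the scatterer. The wavenumber, receiver geometry, and synthetic data below are stand-ins, not the paper's setup.

      import numpy as np
      from scipy.special import hankel1

      k = 10.0                                  # wavenumber (assumed)
      angles = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
      receivers = np.column_stack([2 * np.cos(angles), 2 * np.sin(angles)])

      def fundamental(x, z):
          # 2D Helmholtz fundamental solution Phi(x, z) = (i/4) * H0^(1)(k |x - z|).
          return 0.25j * hankel1(0, k * np.linalg.norm(x - z, axis=-1))

      z_true = np.array([0.3, -0.2])            # synthetic point-like scatterer, one incident direction
      u_scat = fundamental(receivers, z_true)   # stand-in for measured scattered-field data

      grid = np.linspace(-1.0, 1.0, 81)
      indicator = np.zeros((grid.size, grid.size))
      for i, zx in enumerate(grid):
          for j, zy in enumerate(grid):
              phi = fundamental(receivers, np.array([zx, zy]))
              indicator[i, j] = abs(np.vdot(phi, u_scat)) / (np.linalg.norm(phi) * np.linalg.norm(u_scat))
      # Large values of `indicator` mark sampling points at or near the unknown scatterer.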

  14. A MODIFIED DECOMPOSITION METHOD FOR SOLVING NONLINEAR PROBLEM OF FLOW IN CONVERGING-DIVERGING CHANNEL

    Directory of Open Access Journals (Sweden)

    MOHAMED KEZZAR

    2015-08-01

    In this research, an efficient computational technique, regarded as a modified decomposition method, is proposed and successfully applied to solve the nonlinear problem of the two-dimensional flow of an incompressible viscous fluid between nonparallel plane walls. This method gives the nonlinear term Nu and the solution of the studied problem as a power series. The proposed iterative procedure provides, on the one hand, a computationally efficient formulation with an accelerated convergence rate and, on the other hand, finds the solution without any discretization, linearization or restrictive assumptions. Comparison of our results with those of numerical treatments and other earlier works clearly shows the higher accuracy and efficiency of the modified decomposition method.

  15. A Hybrid Quantum Evolutionary Algorithm with Improved Decoding Scheme for a Robotic Flow Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Weidong Lei

    2017-01-01

    We aim at solving the cyclic scheduling problem with a single robot and flexible processing times in a robotic flow shop, which is a well-known optimization problem in advanced manufacturing systems. The objective of the problem is to find an optimal robot move sequence such that the throughput rate is maximized. We propose a hybrid algorithm based on the Quantum-Inspired Evolutionary Algorithm (QEA) and genetic operators for solving the problem. The algorithm integrates three different decoding strategies to convert quantum individuals into robot move sequences. The Q-gate is applied to update the states of Q-bits in each individual. Besides, crossover and mutation operators with adaptive probabilities are used to increase the population diversity. A repairing procedure is proposed to deal with infeasible individuals. Comparison results on both benchmark and randomly generated instances demonstrate that the proposed algorithm is more effective in solving the studied problem in terms of solution quality and computational time.
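
    The sketch below illustrates two QEA ingredients mentioned above, Q-bit observation and a rotation-gate update toward the best solution found, using a simplified sign rule and a placeholder fitness; the decoding of individuals into robot move sequences is problem specific and omitted, so this is only a schematic of the quantum-inspired part.

      import numpy as np

      rng = np.random.default_rng(0)
      n_bits = 8
      # Each Q-bit is an amplitude pair (alpha, beta) with alpha^2 + beta^2 = 1; start in equal superposition.
      alpha = np.full(n_bits, 1 / np.sqrt(2))
      beta = np.full(n_bits, 1 / np.sqrt(2))

      def observe(alpha):
          # Collapse each Q-bit to 0/1 with P(bit = 1) = beta^2 = 1 - alpha^2.
          return (rng.random(alpha.size) >= alpha ** 2).astype(int)

      def rotate(alpha, beta, bits, best_bits, dtheta=0.05 * np.pi):
          # Q-gate: rotate each Q-bit toward the corresponding bit of the best-so-far solution
          # (a simplified sign rule; full QEA uses a lookup table of rotation angles).
          direction = np.where(best_bits > bits, 1.0, np.where(best_bits < bits, -1.0, 0.0))
          theta = direction * dtheta
          return (np.cos(theta) * alpha - np.sin(theta) * beta,
                  np.sin(theta) * alpha + np.cos(theta) * beta)

      fitness = lambda bits: bits.sum()         # placeholder objective (e.g., decoded throughput rate)
      best = observe(alpha)
      for _ in range(50):
          bits = observe(alpha)
          if fitness(bits) > fitness(best):
              best = bits
          alpha, beta = rotate(alpha, beta, bits, best)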

  16. Flow immune photoacoustic sensor for real-time and fast sampling of trace gases

    Science.gov (United States)

    Petersen, Jan C.; Balslev-Harder, David; Pelevic, Nikola; Brusch, Anders; Persijn, Stefan; Lassen, Mikael

    2018-02-01

    A photoacoustic (PA) sensor for fast and real-time gas sensing is demonstrated. The PA cell has been designed for flow-noise immunity using computational fluid dynamics (CFD) analysis. PA measurements were conducted at different flow rates by exciting molecular C-H stretch vibrational bands of hexane (C6H14) in clean air at 2950 cm⁻¹ (3.38 μm) with a custom-made mid-infrared interband cascade laser (ICL). The PA sensor will contribute to solving a major problem in a number of industries using compressed air by detecting oil contaminants in high-purity compressed air. We observe a (1σ, standard deviation) sensitivity of 0.4 ± 0.1 ppb (nmol/mol) for hexane in clean air at flow rates up to 2 L/min, corresponding to a normalized noise-equivalent absorption (NNEA) coefficient of 2.5×10⁻⁹ W cm⁻¹ Hz⁻¹/², thus demonstrating high sensitivity and fast, real-time gas analysis. The PA sensor is not limited to molecules with C-H stretching modes, but can be tailored to measure any trace gas by simply changing the excitation wavelength (i.e. the laser source), making it useful for many different applications where fast and sensitive trace gas measurements are needed.

  17. UMTRA ground water sampling techniques: Comparison of the traditional and low flow methods

    International Nuclear Information System (INIS)

    1995-07-01

    This report describes the potential changes in water quality data that may occur with the conversion from MBV (multiple bore volume) to LF (low flow) sampling and provides two examples of how such a change might impact Project decisions. The existing scientific literature on LF sampling is reviewed, and the new LF data from three UMTRA (Uranium Mill Tailings Remedial Action) Project sites are evaluated, seeking answers to the questions posed above. Several possible approaches that the UMTRA Project may take to address issues unanswered by the literature are presented and compared, and a recommendation is offered for the future direction of the LF conversion effort

  18. Automatic flow-batch system for cold vapor atomic absorption spectroscopy determination of mercury in honey from Argentina using online sample treatment.

    Science.gov (United States)

    Domínguez, Marina A; Grünhut, Marcos; Pistonesi, Marcelo F; Di Nezio, María S; Centurión, María E

    2012-05-16

    An automatic flow-batch system that includes two borosilicate glass chambers to perform sample digestion and cold vapor atomic absorption spectroscopy determination of mercury in honey samples was designed. The sample digestion was performed by using a low-cost halogen lamp to obtain the optimum temperature. Optimization of the digestion procedure was done using a Box-Behnken experimental design. A linear response was observed from 2.30 to 11.20 μg Hg L⁻¹. The relative standard deviation was 3.20% (n = 11, 6.81 μg Hg L⁻¹), the sample throughput was 4 samples h⁻¹, and the detection limit was 0.68 μg Hg L⁻¹. The results obtained with the flow-batch method are in good agreement with those obtained with the reference method. The flow-batch system is simple, allows the use of both chambers simultaneously, is seen as a promising methodology for achieving green chemistry goals, and is a good proposal for improving the quality control of honey.

  19. Local entropy as a measure for sampling solutions in constraint satisfaction problems

    International Nuclear Information System (INIS)

    Baldassi, Carlo; Ingrosso, Alessandro; Lucibello, Carlo; Saglietti, Luca; Zecchina, Riccardo

    2016-01-01

    We introduce a novel entropy-driven Monte Carlo (EdMC) strategy to efficiently sample solutions of random constraint satisfaction problems (CSPs). First, we extend a recent result that, using a large-deviation analysis, shows that the geometry of the space of solutions of the binary perceptron learning problem (a prototypical CSP) contains regions of very high density of solutions. Despite being sub-dominant, these regions can be found by optimizing a local entropy measure. Building on these results, we construct a fast solver that relies exclusively on a local entropy estimate and can be applied to general CSPs. We describe its performance not only for the perceptron learning problem but also for the random K-satisfiability problem (another prototypical CSP with a radically different structure), and show numerically that a simple zero-temperature Metropolis search in the smooth local entropy landscape can reach sub-dominant clusters of optimal solutions in a small number of steps, while standard Simulated Annealing either requires extremely long cooling procedures or simply fails. We also discuss how EdMC can heuristically be made even more efficient for the cases we studied. (paper: disordered systems, classical and quantum)
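
    As a toy illustration of the local-entropy idea (not the paper's EdMC, which relies on a large-deviation estimate), the sketch below defines the local entropy of a configuration of a small random 3-SAT instance as the number of satisfying assignments within a small Hamming ball, evaluated by brute force, and runs a zero-temperature Metropolis search that never decreases this count.

      import itertools
      import random

      random.seed(1)
      n_vars, n_clauses = 12, 30
      clauses = [tuple(random.sample(range(1, n_vars + 1), 3)) for _ in range(n_clauses)]
      clauses = [tuple(v if random.random() < 0.5 else -v for v in c) for c in clauses]

      def satisfied(assign, clauses):
          return all(any((assign[abs(l) - 1] > 0) == (l > 0) for l in c) for c in clauses)

      def local_entropy(assign, clauses, radius=2):
          # Count satisfying assignments within Hamming distance `radius` (brute force, toy sizes only).
          count = 0
          for r in range(radius + 1):
              for flips in itertools.combinations(range(len(assign)), r):
                  neigh = list(assign)
                  for i in flips:
                      neigh[i] = -neigh[i]
                  count += satisfied(neigh, clauses)
          return count

      assign = [random.choice([-1, 1]) for _ in range(n_vars)]
      score = local_entropy(assign, clauses)
      for _ in range(500):                      # zero-temperature Metropolis in the local-entropy landscape
          i = random.randrange(n_vars)
          assign[i] = -assign[i]                # propose a single spin flip
          new_score = local_entropy(assign, clauses)
          if new_score >= score:
              score = new_score                 # accept: move toward denser clusters of solutions
          else:
              assign[i] = -assign[i]            # reject: never decrease the local entropy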

  20. On the solution of fluid flow and heat transfer problem in a 2D channel with backward-facing step

    Directory of Open Access Journals (Sweden)

    Alexander A. Fomin

    2017-06-01

    The stable stationary solutions of the test problem of hydrodynamics and heat transfer in a plane channel with a backward-facing step have been considered in this work for extremely high Reynolds numbers and expansion ratios of the stream $ER$. The problem has been solved by numerical integration of the 2D Navier–Stokes equations in 'velocity-pressure' formulation and the heat equation in the range of Reynolds number $500 \leqslant \mathrm{Re} \leqslant 3000$ and expansion ratio $1.43 \leqslant ER \leqslant 10$ for Prandtl number $\mathrm{Pr} = 0.71$. Validity of the results has been confirmed by comparing them with literature data. Detailed flow patterns, fields of stream overheating, and profiles of the horizontal component of velocity and relative overheating of the flow in the cross section of the channel have been presented. Complex behaviors of the coefficients of friction, hydrodynamic resistance and heat transfer (Nusselt number) along the channel depending on the problem parameters have been analyzed.

  1. A New Automated Method and Sample Data Flow for Analysis of Volatile Nitrosamines in Human Urine*

    Science.gov (United States)

    Hodgson, James A.; Seyler, Tiffany H.; McGahee, Ernest; Arnstein, Stephen; Wang, Lanqing

    2016-01-01

    Volatile nitrosamines (VNAs) are a group of compounds classified as probable (group 2A) and possible (group 2B) carcinogens in humans. Along with certain foods and contaminated drinking water, VNAs are detected at high levels in tobacco products and in both mainstream and sidestream smoke. Our laboratory monitors six urinary VNAs—N-nitrosodimethylamine (NDMA), N-nitrosomethylethylamine (NMEA), N-nitrosodiethylamine (NDEA), N-nitrosopiperidine (NPIP), N-nitrosopyrrolidine (NPYR), and N-nitrosomorpholine (NMOR)—using isotope dilution GC-MS/MS (QQQ) for large population studies such as the National Health and Nutrition Examination Survey (NHANES). In this paper, we report for the first time a new automated sample preparation method to more efficiently quantitate these VNAs. Automation is done using Hamilton STAR™ and Caliper Staccato™ workstations. This new automated method reduces sample preparation time from 4 hours to 2.5 hours while maintaining precision (inter-run CV < 10%) and accuracy (85%-111%). More importantly, this method increases sample throughput while maintaining a low limit of detection (<10 pg/mL) for all analytes. A streamlined sample data flow was created in parallel to the automated method, in which samples can be tracked from receiving to final LIMS output with minimal human intervention, further minimizing human error in the sample preparation process. This new automated method and the sample data flow are currently applied in biomonitoring of VNAs in the U.S. non-institutionalized population in the NHANES 2013-2014 cycle. PMID:26949569

  2. Cooperative Strategies for Maximum-Flow Problem in Uncertain Decentralized Systems Using Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Hadi Heidari Gharehbolagh

    2016-01-01

    This study investigates a multiowner maximum-flow network problem which suffers from risky events. Uncertain conditions affect proper estimation, and ignoring them may mislead decision makers through overestimation. A key question is how self-governing owners in the network can cooperate with each other to maintain a reliable flow. The question is answered by providing a mathematical programming model based on applying the triangular reliability function in decentralized networks. The proposed method concentrates on multiowner networks whose arcs suffer from risky time, cost, and capacity parameters. Some cooperative game methods, such as the τ-value, the Shapley value, and the core center, are presented to fairly distribute the extra profit of cooperation. A numerical example, including a sensitivity analysis, and the results of comparisons are presented. The proposed method provides more realism in decision-making for risky systems, leading to significant profits in terms of realistic cost estimation when compared with ignoring such unforeseen effects.
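
    A minimal sketch of the profit-sharing step: given a characteristic function v(S) assigning to each coalition of owners the flow value it can secure (the numbers below are invented), the Shapley value averages each owner's marginal contribution over all join orders.

      from itertools import permutations

      players = ["A", "B", "C"]                 # three network owners (illustrative)
      # v(S): flow value (e.g., expected reliable max-flow) attainable by coalition S (invented numbers).
      v = {frozenset(): 0, frozenset("A"): 4, frozenset("B"): 3, frozenset("C"): 2,
           frozenset("AB"): 9, frozenset("AC"): 7, frozenset("BC"): 6, frozenset("ABC"): 13}

      def shapley(players, v):
          # Average marginal contribution of each player over all join orders.
          phi = {p: 0.0 for p in players}
          orders = list(permutations(players))
          for order in orders:
              coalition = frozenset()
              for p in order:
                  phi[p] += v[coalition | {p}] - v[coalition]
                  coalition = coalition | {p}
          return {p: phi[p] / len(orders) for p in players}

      print(shapley(players, v))                # fair split of the grand-coalition value v(ABC) = 13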

  3. Rapid determination of 99Tc in environmental samples by high resolution ICP-MS coupled with on-line flow injection system

    International Nuclear Information System (INIS)

    Kim, C.K.; Kim, C.S.; Rho, B.H.; Lee, J.I.

    2002-01-01

    High-resolution inductively coupled plasma mass spectrometry coupled with an on-line flow injection system (FI-HR-ICP-MS) was applied to determine ultra-trace levels of ⁹⁹Tc in soil. The flow injection system (PrepLab™), composed of two TEVA-Spec® resins, remarkably reduced the sample amounts and the analysis time compared to conventional analytical methods. In the flow injection system, Mo and Ru were sufficiently eliminated, with decontamination factors of 1.6 × 10⁴ and 9.9 × 10⁵, respectively. With the present method, it was possible to determine ultra-low levels of ⁹⁹Tc in 3-6 g of soil within 3-5 hours of analysis time per sample. The relative standard deviation for each sample was less than 4%. The detection limit for ⁹⁹Tc was 85 fg ml⁻¹ (0.05 mBq ml⁻¹), calculated from three times the standard deviation of the count rate of the blank. (author)

  4. On a multigrid method for the coupled Stokes and porous media flow problem

    Science.gov (United States)

    Luo, P.; Rodrigo, C.; Gaspar, F. J.; Oosterlee, C. W.

    2017-07-01

    The multigrid solution of coupled porous media and Stokes flow problems is considered. The Darcy equation, as the saturated porous medium model, is coupled to the Stokes equations by means of appropriate interface conditions. We focus on an efficient multigrid solution technique for the coupled problem, which is discretized by finite volumes on staggered grids, giving rise to a saddle-point linear system. Special treatment is required regarding the discretization at the interface. An Uzawa smoother is employed in multigrid; it is a decoupled procedure based on symmetric Gauss-Seidel smoothing for the velocity components and a simple Richardson iteration for the pressure field. Since a relaxation parameter is part of the Richardson iteration, Local Fourier Analysis (LFA) is applied to determine the optimal parameters. Highly satisfactory multigrid convergence is reported; moreover, the algorithm performs well for small values of the hydraulic conductivity and fluid viscosity, which are relevant for applications.
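
    The sketch below shows the structure of an Uzawa-type smoother on a generic saddle-point system [[A, B^T], [B, 0]][u; p] = [f; g]: a symmetric Gauss-Seidel sweep on the velocity block followed by a Richardson update of the pressure with relaxation parameter omega. The toy matrices are random stand-ins, not a staggered Stokes-Darcy discretization, and omega is simply guessed here rather than chosen by LFA.

      import numpy as np

      def sgs_sweep(A, x, rhs):
          # One symmetric Gauss-Seidel sweep (forward then backward) for A x = rhs.
          n = A.shape[0]
          for i in list(range(n)) + list(range(n - 1, -1, -1)):
              x[i] += (rhs[i] - A[i] @ x) / A[i, i]
          return x

      def uzawa_smoother(A, B, f, g, u, p, omega=0.2):
          # Decoupled smoothing for [[A, B^T], [B, 0]] [u; p] = [f; g].
          u = sgs_sweep(A, u, f - B.T @ p)      # velocity update with the current pressure
          p = p + omega * (B @ u - g)           # Richardson step on the pressure residual
          return u, p

      rng = np.random.default_rng(3)            # toy saddle-point system (not Stokes-Darcy)
      A = np.diag(np.full(6, 4.0)) + 0.3 * rng.standard_normal((6, 6))
      A = 0.5 * (A + A.T)
      B = rng.standard_normal((2, 6))
      f, g = rng.standard_normal(6), rng.standard_normal(2)
      u, p = np.zeros(6), np.zeros(2)
      for _ in range(100):                      # repeated smoothing; in practice used inside a multigrid cycle
          u, p = uzawa_smoother(A, B, f, g, u, p)
      residual = np.linalg.norm(A @ u + B.T @ p - f) + np.linalg.norm(B @ u - g)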

  5. A Flow Chart of Behavior Management Strategies for Families of Children with Co-Occurring Attention-Deficit Hyperactivity Disorder and Conduct Problem Behavior.

    Science.gov (United States)

    Danforth, Jeffrey S

    2016-03-01

    Behavioral parent training is an evidence-based treatment for problem behavior described as attention-deficit hyperactivity disorder (ADHD), oppositional defiant disorder, and conduct disorder. However, adherence to treatment fidelity and parent performance of the management skills remain an obstacle to optimum outcomes. One variable that may limit the effectiveness of parent training is that demanding behavior management procedures can be deceptively complicated and difficult to perform. Based on outcome research for families of children with co-occurring ADHD and conduct problem behavior, an example of a visual behavior management flow chart is presented. The flow chart may be used to help teach specific behavior management skills to parents. The flow chart depicts a chain of behavior management strategies taught with explanation, modeling, and role-play with parents. The chained steps in the flow chart are elements common to well-known evidence-based behavior management strategies, and this depiction may well serve as a setting event for other behavior analysts to create flow charts for their own parent training. Details of the flow chart steps, as well as examples of specific applications and program modifications, conclude the article.

  6. Asymptotic stability of shear-flow solutions to incompressible viscous free boundary problems with and without surface tension

    Science.gov (United States)

    Tice, Ian

    2018-04-01

    This paper concerns the dynamics of a layer of incompressible viscous fluid lying above a rigid plane and with an upper boundary given by a free surface. The fluid is subject to a constant external force with a horizontal component, which arises in modeling the motion of such a fluid down an inclined plane, after a coordinate change. We consider the problem both with and without surface tension for horizontally periodic flows. This problem gives rise to shear-flow equilibrium solutions, and the main thrust of this paper is to study the asymptotic stability of the equilibria in certain parameter regimes. We prove that there exists a parameter regime in which sufficiently small perturbations of the equilibrium at time t=0 give rise to global-in-time solutions that return to equilibrium exponentially in the case with surface tension and almost exponentially in the case without surface tension. We also establish a vanishing surface tension limit, which connects the solutions with and without surface tension.

  7. Load flow optimization and optimal power flow

    CERN Document Server

    Das, J C

    2017-01-01

    This book discusses the major aspects of load flow, optimization, optimal load flow, and culminates in modern heuristic optimization techniques and evolutionary programming. In the deregulated environment, the economic provision of electrical power to consumers requires knowledge of maintaining a certain power quality and load flow. Many case studies and practical examples are included to emphasize real-world applications. The problems at the end of each chapter can be solved by hand calculations without having to use computer software. The appendices are devoted to calculations of line and cable constants, and solutions to the problems are included throughout the book.

  8. Vessel Sampling and Blood Flow Velocity Distribution With Vessel Diameter for Characterizing the Human Bulbar Conjunctival Microvasculature.

    Science.gov (United States)

    Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua

    2016-03-01

    This study determined (1) how many vessels (i.e., the vessel sampling) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histograms of the blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distributions were normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the distribution of the diameter and velocity. The results showed that the sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean from the total vessel population. The distributions of the diameter and velocity were not only unimodal, but also somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r = 0.23, P < 0.05). The study determined the sampling size of the vessels and the distribution histograms of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
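
    The sampling-size rule described above reduces to simple arithmetic: with SE = SD/sqrt(n), requiring SE to be at most 15% of the mean gives n >= (SD/(0.15*mean))^2. The sketch below applies it to made-up velocity statistics, not the study's data.

      import math

      def vessels_needed(mean_velocity, sd_velocity, se_fraction=0.15):
          # Smallest n with SD / sqrt(n) <= se_fraction * mean (standard-error-over-mean cutoff).
          return math.ceil((sd_velocity / (se_fraction * mean_velocity)) ** 2)

      # Illustrative numbers only (not the study's data): mean velocity 0.5 mm/s, SD 0.29 mm/s.
      print(vessels_needed(0.5, 0.29))          # -> 15, on the order of the ~15 vessels reported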

  9. Immunophenotype Discovery, Hierarchical Organization, and Template-based Classification of Flow Cytometry Samples

    Directory of Open Access Journals (Sweden)

    Ariful Azad

    2016-08-01

    We describe algorithms for discovering immunophenotypes from large collections of flow cytometry (FC) samples and for using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (groups of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust, since templates describe phenotypic signatures common to cell populations in several samples while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several data sets, including one representing a healthy immune system and one of Acute Myeloid Leukemia (AML) samples. The last task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML and were able to distinguish Acute Promyelocytic Leukemia from other subtypes of AML.

  10. A family of spatial interaction models incorporating information flows and choice set constraints applied to U.S. interstate labor flows.

    Science.gov (United States)

    Smith, T R; Slater, P B

    1981-01-01

    "A new family of migration models belonging to the elimination by aspects family is examined, with the spatial interaction model shown to be a special case. The models have simple forms; they incorporate information flow processes and choice set constraints; they are free of problems raised by the Luce Choice Axiom; and are capable of generating intransitive flows. Preliminary calibrations using the Continuous Work History Sample [time] series data indicate that the model fits the migration data well, while providing estimates of interstate job message flows. The preliminary calculations also indicate that care is needed in assuming that destination [attraction] are independent of origins." excerpt

  11. A hybrid flow shop model for an ice cream production scheduling problem

    Directory of Open Access Journals (Sweden)

    Imma Ribas Vila

    2009-07-01

    In this paper we address the scheduling problem that comes from an ice cream manufacturing company. This production system can be modelled as a three-stage no-wait hybrid flow shop with batch-dependent setup costs. To contribute to reducing the gap between theory and practice, we have considered the real constraints and the criteria used by planners. The problem considered has been formulated as a mixed integer program. Further, two competitive heuristic procedures have been developed, and one of them is proposed for scheduling in the ice cream factory.

  12. Sustained impact of inattention and hyperactivity-impulsivity on peer problems: mediating roles of prosocial skills and conduct problems in a community sample of children.

    Science.gov (United States)

    Andrade, Brendan F; Tannock, Rosemary

    2014-06-01

    This prospective 2-year longitudinal study tested whether inattentive and hyperactive/impulsive symptom dimensions predicted future peer problems, when accounting for concurrent conduct problems and prosocial skills. A community sample of 492 children (49 % female) who ranged in age from 6 to 10 years (M = 8.6, SD = .93) was recruited. Teacher reports of children's inattention, and hyperactivity/impulsivity symptoms, conduct problems, prosocial skills and peer problems were collected in two consecutive school years. Elevated inattention and hyperactivity/impulsivity in Year-1 predicted greater peer problems in Year-2. Conduct problems in the first and second years of the study were associated with more peer problems, and explained a portion of the relationship between inattention and hyperactivity/impulsivity with peer problems. However, prosocial skills were associated with fewer peer problems in children with elevated inattention and hyperactivity/impulsivity. Inattention and hyperactivity/impulsivity have negative effects on children's peer functioning after 1-year, but concurrent conduct problems and prosocial skills have important and opposing impacts on these associations.

  13. Automation in high-content flow cytometry screening.

    Science.gov (United States)

    Naumann, U; Wand, M P

    2009-09-01

    High-content flow cytometric screening (FC-HCS) is a 21st Century technology that combines robotic fluid handling, flow cytometric instrumentation, and bioinformatics software, so that relatively large numbers of flow cytometric samples can be processed and analysed in a short period of time. We revisit a recent application of FC-HCS to the problem of cellular signature definition for acute graft-versus-host-disease. Our focus is on automation of the data processing steps using recent advances in statistical methodology. We demonstrate that effective results, on par with those obtained via manual processing, can be achieved using our automatic techniques. Such automation of FC-HCS has the potential to drastically improve diagnosis and biomarker identification.

  14. NACHOS: a finite element computer program for incompressible flow problems. Part I. Theoretical background

    International Nuclear Information System (INIS)

    Gartling, D.K.

    1978-04-01

    The theoretical background for the finite element computer program NACHOS is presented in detail. The NACHOS code is designed for the two-dimensional analysis of viscous incompressible fluid flows, including the effects of heat transfer. A general description of the fluid/thermal boundary value problems treated by the program is given. The finite element method and the associated numerical methods used in the NACHOS code are also presented. Instructions for use of the program are documented in SAND77-1334

  15. Groundwater flow through a natural fracture. Flow experiments and numerical modelling

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, Erik [Chalmers Univ. of Technology, Goeteborg (Sweden). Dept of Geology

    1997-09-01

    Groundwater flow and transport play an important role not only in groundwater exploration but also in environmental engineering problems. This report considers how the hydraulic properties of fractures in crystalline rock depend on the fracture aperture geometry. Different numerical models are discussed, and an FDM computer code for two- and three-dimensional flow modelling has been developed. Different relations between the cells in the model are tested and compared with results in the literature. Laboratory experimental work has been done to carry out flow experiments and aperture measurements on the same specimen of a natural fracture. The drilled core sample had fractures parallel to the core axis and was placed inside a biaxial cell during the experiments. The water pressure gradient and the compression stress were varied during the experiments, and a tracer test was also performed. After the flow experiments, the aperture distribution for a certain compression was measured by injecting an epoxy resin into the fracture. The thickness of the resin layer was then studied in saw-cut sections of the sample. The results from the experiments were used to validate numerical and analytical models, based on the aperture distribution, for flow and transport simulations. In the disturbed zone around a drift both water and air are present in the fractures. The gas will go to the widest part of the fracture because the capillarity and the conductivity decrease. The dependence of the effective conductivity on the variance of the conductivity and the effect of extinction of highly conductive cells have also been studied. How gas in fractures around a drift can cause a skin effect is discussed and modelled, and an example is given of the effect of a saturation that depends on the magnitude of the flow. 25 refs, 17 tabs, 43 figs.

  16. Groundwater flow through a natural fracture. Flow experiments and numerical modelling

    International Nuclear Information System (INIS)

    Larsson, Erik

    1997-09-01

    Groundwater flow and transport play an important role not only in groundwater exploration but also in environmental engineering problems. This report considers how the hydraulic properties of fractures in crystalline rock depend on the fracture aperture geometry. Different numerical models are discussed, and an FDM computer code for two- and three-dimensional flow modelling has been developed. Different relations between the cells in the model are tested and compared with results in the literature. Laboratory experimental work has been done to carry out flow experiments and aperture measurements on the same specimen of a natural fracture. The drilled core sample had fractures parallel to the core axis and was placed inside a biaxial cell during the experiments. The water pressure gradient and the compression stress were varied during the experiments, and a tracer test was also performed. After the flow experiments, the aperture distribution for a certain compression was measured by injecting an epoxy resin into the fracture. The thickness of the resin layer was then studied in saw-cut sections of the sample. The results from the experiments were used to validate numerical and analytical models, based on the aperture distribution, for flow and transport simulations. In the disturbed zone around a drift both water and air are present in the fractures. The gas will go to the widest part of the fracture because the capillarity and the conductivity decrease. The dependence of the effective conductivity on the variance of the conductivity and the effect of extinction of highly conductive cells have also been studied. How gas in fractures around a drift can cause a skin effect is discussed and modelled, and an example is given of the effect of a saturation that depends on the magnitude of the flow

  17. Genetic Algorithm for Solving Location Problem in a Supply Chain Network with Inbound and Outbound Product Flows

    Directory of Open Access Journals (Sweden)

    Suprayogi Suprayogi

    2016-12-01

    This paper considers a location problem in a supply chain network. The problem addressed in this paper is motivated by an initiative to develop an efficient supply chain network for supporting agricultural activities. The supply chain network consists of regions, warehouses, distribution centers, plants, and markets. The products include a set of inbound products and a set of outbound products. In this paper, the definitions of inbound and outbound products are seen from the region's point of view. An inbound product is a product demanded by regions and produced by plants, which flows along the following sequence of entities: plants, distribution centers, warehouses, and regions. An outbound product is a product demanded by markets and produced by regions, and it flows along the following sequence of entities: regions, warehouses, and markets. The problem deals with determining the locations of the warehouses and distribution centers to be opened and the shipment quantities associated with all links on the network that minimize the total cost. The problem can be considered a strategic supply chain network problem. A solution approach based on a genetic algorithm (GA) is proposed. The proposed GA is examined using hypothetical instances, and its results are compared to the solution obtained by solving the mixed integer linear programming (MILP) model. The comparison shows that there is a small gap (0.23% on average) between the proposed GA and the MILP model in terms of the total cost. The proposed GA consistently provides solutions with the least total cost. In terms of total cost, the experiment demonstrates that the coefficients of variation are close to 0.
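
    As a hedged illustration of the GA component only, the sketch below evolves a binary chromosome marking which warehouses are open for a simplified facility-location subproblem (fixed opening costs plus cheapest-open-warehouse assignment costs); the cost data, operators, and parameters are invented, and the paper's full model with plants, distribution centers, markets, and shipment quantities is not reproduced.

      import random

      random.seed(7)
      fixed_cost = [40, 55, 30, 70, 45]                      # cost of opening each candidate warehouse
      assign_cost = [[random.randint(5, 40) for _ in fixed_cost] for _ in range(12)]   # region x warehouse

      def total_cost(chrom):
          # Fixed costs of open warehouses + each region served by its cheapest open warehouse.
          if not any(chrom):
              return float("inf")
          open_cost = sum(f for f, bit in zip(fixed_cost, chrom) if bit)
          serve_cost = sum(min(c for c, bit in zip(row, chrom) if bit) for row in assign_cost)
          return open_cost + serve_cost

      def crossover(a, b):
          cut = random.randrange(1, len(a))                  # one-point crossover
          return a[:cut] + b[cut:]

      def mutate(chrom, rate=0.1):
          return [1 - g if random.random() < rate else g for g in chrom]

      pop = [[random.randint(0, 1) for _ in fixed_cost] for _ in range(30)]
      for _ in range(100):                                   # generational loop
          pop.sort(key=total_cost)
          parents = pop[:10]                                 # truncation selection
          pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                           for _ in range(20)]
      best = min(pop, key=total_cost)                        # best warehouse-opening pattern found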

  18. Applied multiphase flow in pipes and flow assurance oil and gas production

    CERN Document Server

    Al-Safran, Eissa M

    2017-01-01

    Applied Multiphase Flow in Pipes and Flow Assurance - Oil and Gas Production delivers the most recent advancements in multiphase flow technology while remaining easy to read and appropriate for undergraduate and graduate petroleum engineering students. Responding to the need for a more up-to-the-minute resource, this highly anticipated new book builds applications on the fundamentals, with new material on heat transfer in production systems, flow assurance, transient multiphase flow in pipes and the TUFFP unified model. The complex computation procedures of mechanistic models are simplified through solution flowcharts and several example problems. Containing over 50 solved example problems and 140 homework problems, this new book will equip engineers with the skills necessary to use the latest steady-state simulators available.

  19. On the solution of the differential equation occurring in the problem of heat convection in laminar flow through a tube with slip-flow

    Directory of Open Access Journals (Sweden)

    Xanming Wang

    1996-01-01

    A technique is developed for the evaluation of eigenvalues in the solution of the differential equation d²y/dr² + (1/r)·dy/dr + λ²(β − r²)y = 0, which occurs in the problem of heat convection in laminar flow through a circular tube with slip-flow (β > 1). A series solution requires the expansion of coefficients involving extremely large numbers. No work has been reported for the case β > 1 because of the computational complexity in the evaluation of the eigenvalues. In this paper, a matrix was constructed and a computational algorithm was obtained to calculate the first four eigenvalues. Also, an asymptotic formula was developed to generate the full spectrum of eigenvalues. Computational results for various values of β were obtained.
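
    A numerical cross-check of such eigenvalues can be obtained by shooting, independently of the series/matrix technique of the paper: integrate the equation from near the axis with a regular start and locate the values of λ for which y(1) = 0 (the constant-wall-temperature condition and the value of β are assumptions made for this sketch).

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      beta = 1.5                                # slip-flow parameter beta > 1 (assumed value)

      def shoot(lam, eps=1e-6):
          # Integrate y'' + y'/r + lam^2 (beta - r^2) y = 0 from near the axis and return y(1).
          def rhs(r, s):
              y, dy = s
              return [dy, -dy / r - lam ** 2 * (beta - r ** 2) * y]
          sol = solve_ivp(rhs, (eps, 1.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
          return sol.y[0, -1]

      # Eigenvalues are the lam > 0 with y(1) = 0: bracket sign changes of shoot(lam), then refine.
      lams = np.linspace(0.1, 20.0, 400)
      vals = [shoot(l) for l in lams]
      eigenvalues = [brentq(shoot, a, b) for a, b, fa, fb in
                     zip(lams[:-1], lams[1:], vals[:-1], vals[1:]) if fa * fb < 0]
      print(eigenvalues[:4])                    # first four eigenvalues for this beta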

  20. A new cut-based algorithm for the multi-state flow network reliability problem

    International Nuclear Information System (INIS)

    Yeh, Wei-Chang; Bae, Changseok; Huang, Chia-Ling

    2015-01-01

    Many real-world systems can be modeled as multi-state network systems in which reliability can be derived in terms of the lower bound points of level d, called d-minimal cuts (d-MCs). This study proposes a new method, with simple and useful properties, to find and verify the obtained d-MCs for the multi-state flow network reliability problem. The proposed algorithm runs in O(mσp) time, which represents a significant improvement over the previous O(mp²σ) time bound based on max-flow/min-cut, where p, σ and m denote the number of MCs, d-MC candidates and edges, respectively. The proposed algorithm also overcomes the weakness of some existing methods, which fail to remove duplicate d-MCs in special cases. A step-by-step example is given to demonstrate how the proposed algorithm locates and verifies all d-MC candidates. As evidence of the utility of the proposed approach, we present extensive computational results on 20 benchmark networks in another example. The computational results compare favorably with a previously developed algorithm in the literature. - Highlights: • A new method is proposed to find all d-MCs for multi-state flow networks. • The proposed method prevents the generation of d-MC duplicates. • The proposed method is simpler and more efficient than the best-known algorithms

  1. A Locally Conservative Eulerian–Lagrangian Method for a Model Two-Phase Flow Problem in a One-Dimensional Porous Medium

    KAUST Repository

    Arbogast, Todd

    2012-01-01

    Motivated by possible generalizations to more complex multiphase multicomponent systems in higher dimensions, we develop an Eulerian-Lagrangian numerical approximation for a system of two conservation laws in one space dimension modeling a simplified two-phase flow problem in a porous medium. The method is based on following tracelines, so it is stable independent of any CFL constraint. The main difficulty is that it is not possible to follow individual tracelines independently. We approximate tracing along the tracelines by using local mass conservation principles and self-consistency. The two-phase flow problem is governed by a system of equations representing mass conservation of each phase, so there are two local mass conservation principles. Our numerical method respects both of these conservation principles over the computational mesh (i.e., locally), and so is a fully conservative traceline method. We present numerical results that demonstrate the ability of the method to handle problems with shocks and rarefactions, and to do so with very coarse spatial grids and time steps larger than the CFL limit. © 2012 Society for Industrial and Applied Mathematics.

  2. Efficient Bayesian inference of subsurface flow models using nested sampling and sparse polynomial chaos surrogates

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
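    The following toy sketch illustrates the two-stage idea described in the abstract: a nested-sampling loop in which random-walk proposals are first screened by a cheap surrogate before the "expensive" likelihood is evaluated. The surrogate here is a hand-made approximation rather than a polynomial chaos expansion, and the likelihood, prior box, and tuning constants are all invented for illustration; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def exact_loglike(x):
    # "expensive" forward-model stand-in: an anisotropic Gaussian likelihood
    return -0.5 * (x[0]**2 / 0.5**2 + x[1]**2 / 2.0**2)

def surrogate_loglike(x):
    # cheap approximation (here an isotropic Gaussian) playing the role of the
    # polynomial chaos response surface used in the paper
    return -0.5 * (x[0]**2 + x[1]**2)

def constrained_sample(x0, logl_min, n_steps=20, step=0.5):
    """Random-walk MCMC constrained to L(x) > L_min, with a two-stage filter:
    proposals are screened by the surrogate before the exact model is run."""
    x, lx = x0.copy(), exact_loglike(x0)
    for _ in range(n_steps):
        prop = x + step * rng.normal(size=2)
        if not (np.abs(prop) < 5.0).all():                # uniform prior box [-5, 5]^2
            continue
        if surrogate_loglike(prop) < logl_min - 2.0:       # stage 1: cheap surrogate filter
            continue
        lprop = exact_loglike(prop)                        # stage 2: exact model
        if lprop > logl_min:
            x, lx = prop, lprop
    return x, lx

# --- nested sampling main loop ---
n_live, n_iter = 100, 600
live = rng.uniform(-5, 5, size=(n_live, 2))
live_logl = np.array([exact_loglike(p) for p in live])
log_evidence = -np.inf
log_width = np.log(1.0 - np.exp(-1.0 / n_live))            # width of the first prior-mass shell

for _ in range(n_iter):
    worst = np.argmin(live_logl)
    log_evidence = np.logaddexp(log_evidence, log_width + live_logl[worst])
    seed = live[rng.integers(n_live)]                      # restart from a random live point
    live[worst], live_logl[worst] = constrained_sample(seed, live_logl[worst])
    log_width -= 1.0 / n_live                              # geometric prior-mass compression

print("estimated log-evidence:", log_evidence)
```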

  3. Investigation of problems of closing of geophysical cracks in thermoelastic media in the case of flow of fluids with impurities

    Science.gov (United States)

    Martirosyan, A. N.; Davtyan, A. V.; Dinunts, A. S.; Martirosyan, H. A.

    2018-04-01

    The purpose of this article is to investigate a problem of closing cracks by building up a layer of sediments on surfaces of a crack in an infinite thermoelastic medium in the presence of a flow of fluids with impurities. The statement of the problem of closing geophysical cracks in the presence of a fluid flow is presented with regard to the thermoelastic stress and the influence of the impurity deposition in the liquid on the crack surfaces due to thermal diffusion at the fracture closure. The Wiener–Hopf method yields an analytical solution in the special case without friction. Numerical calculations are performed in this case and the dependence of the crack closure time on the coordinate is plotted. A similar spatial problem is also solved. These results generalize the results of previous studies of geophysical cracks and debris in rocks, where the closure of a crack due to temperature effects is studied without taking the elastic stresses into account.

  4. Problematic Technology Use in a clinical sample of children and adolescents. Personality and behavioral problems associated.

    Science.gov (United States)

    Alonso, Cristina; Romero, Estrella

    2017-03-01

    In parallel to the rapid growth of access to new technologies (NT), there has been an increase in their problematic use, especially among children and adolescents. Although research in this field is increasing, the studies have mainly been carried out in community settings, and the characteristics associated with the problematic use of NT are unknown in samples that require clinical care. Therefore, the aim of this study is to analyze the relationship between problematic use of video games (UPV) and of the Internet (UPI) and personality traits and behavior problems in a clinical sample of children and adolescents. The sample consists of 88 patients who were examined in the clinical psychology consultation in the Mental Health Unit for Children and Adolescents of the University Hospital of Santiago de Compostela. Data were obtained from self-reports and rating scales filled out by parents. 31.8% of the participants present UPI and 18.2%, UPV. The children and adolescents with problematic use of NT (UPNT) have lower levels of Openness to experience, Conscientiousness and Agreeableness and higher levels of Emotional instability, global Impulsivity and Externalizing behavior problems, as well as Attention and Thought problems. UPNT emerges as an important issue in clinical care for children and adolescents, so its study in child and youth care units is needed. Understanding the psychopathological profile of children and adolescents with UPNT will allow for the development of differential and more specific interventions.

  5. Hierarchical modeling for rare event detection and cell subset alignment across flow cytometry samples.

    Directory of Open Access Journals (Sweden)

    Andrew Cron

    Full Text Available Flow cytometry is the prototypical assay for multi-parameter single cell analysis, and is essential in vaccine and biomarker research for the enumeration of antigen-specific lymphocytes that are often found in extremely low frequencies (0.1% or less. Standard analysis of flow cytometry data relies on visual identification of cell subsets by experts, a process that is subjective and often difficult to reproduce. An alternative and more objective approach is the use of statistical models to identify cell subsets of interest in an automated fashion. Two specific challenges for automated analysis are to detect extremely low frequency event subsets without biasing the estimate by pre-processing enrichment, and the ability to align cell subsets across multiple data samples for comparative analysis. In this manuscript, we develop hierarchical modeling extensions to the Dirichlet Process Gaussian Mixture Model (DPGMM approach we have previously described for cell subset identification, and show that the hierarchical DPGMM (HDPGMM naturally generates an aligned data model that captures both commonalities and variations across multiple samples. HDPGMM also increases the sensitivity to extremely low frequency events by sharing information across multiple samples analyzed simultaneously. We validate the accuracy and reproducibility of HDPGMM estimates of antigen-specific T cells on clinically relevant reference peripheral blood mononuclear cell (PBMC samples with known frequencies of antigen-specific T cells. These cell samples take advantage of retrovirally TCR-transduced T cells spiked into autologous PBMC samples to give a defined number of antigen-specific T cells detectable by HLA-peptide multimer binding. We provide open source software that can take advantage of both multiple processors and GPU-acceleration to perform the numerically-demanding computations. We show that hierarchical modeling is a useful probabilistic approach that can provide a
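    As a minimal, non-hierarchical stand-in for the DPGMM building block described above, the sketch below fits a truncated Dirichlet-process Gaussian mixture to a synthetic two-marker sample containing a rare population, using scikit-learn's variational BayesianGaussianMixture. The hierarchical sharing of information across samples and the GPU-accelerated software referred to in the abstract are not reproduced here; the data and settings are invented.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)

# synthetic two-marker "cytometry" sample: a dominant population plus a rare one (~0.5%)
major = rng.normal([0.0, 0.0], 0.5, size=(20_000, 2))
rare = rng.normal([3.5, 3.5], 0.2, size=(100, 2))
events = np.vstack([major, rare])

# truncated Dirichlet-process Gaussian mixture (variational inference), a single-sample
# stand-in for the hierarchical DPGMM described in the abstract
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(events)

labels = dpgmm.predict(events)
weights = np.bincount(labels, minlength=10) / len(events)
print("estimated subset frequencies:", np.sort(weights)[::-1][:4])
```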

  6. Modeling the Hybrid Flow Shop Scheduling Problem Followed by an Assembly Stage Considering Aging Effects and Preventive Maintenance Activities

    Directory of Open Access Journals (Sweden)

    Seyyed Mohammad Hassan Hosseini

    2016-05-01

    Full Text Available The scheduling problem for a hybrid flow shop (HFSP) followed by an assembly stage, considering aging effects and additional preventive maintenance activities, is studied in this paper. In this production system, a number of products of different kinds are produced. Each product is assembled from a set of several parts. The first stage is a hybrid flow shop that produces the parts; all machines can process all kinds of parts in this stage, but each machine can process only one part at a time. The second stage is a single assembly machine or a single assembly team of workers. The aim is to schedule the parts on the machines, determine the assembly sequence, and also determine when the preventive maintenance activities are carried out, in order to minimize the completion time of all products (makespan). A mathematical model is presented and validated by solving a small-scale example. Since this problem has been proved strongly NP-hard, four heuristic algorithms based on Johnson's algorithm are proposed to solve medium- and large-scale instances. Numerical experiments are used to run the mathematical model and to evaluate the performance of the proposed algorithms.
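    The heuristics in the paper build on Johnson's algorithm; as background, the sketch below implements the classic two-machine Johnson's rule and a makespan evaluation for a small invented job set. It is not one of the paper's four heuristics for the hybrid flow shop with assembly, aging effects, and maintenance.

```python
from typing import Dict, List, Tuple

def johnson_sequence(jobs: Dict[str, Tuple[float, float]]) -> List[str]:
    """Classic Johnson's rule for a two-machine flow shop (minimises makespan)."""
    front = sorted((j for j, (p1, p2) in jobs.items() if p1 < p2),
                   key=lambda j: jobs[j][0])                # short stage-1 jobs first
    back = sorted((j for j, (p1, p2) in jobs.items() if p1 >= p2),
                  key=lambda j: jobs[j][1], reverse=True)   # short stage-2 jobs last
    return front + back

def makespan(seq: List[str], jobs: Dict[str, Tuple[float, float]]) -> float:
    t1 = t2 = 0.0
    for j in seq:
        p1, p2 = jobs[j]
        t1 += p1                 # machine 1 finishes job j
        t2 = max(t2, t1) + p2    # machine 2 starts once both the job and the machine are free
    return t2

# invented processing times (stage 1, stage 2) for five jobs
jobs = {"A": (3, 6), "B": (5, 2), "C": (1, 2), "D": (6, 6), "E": (7, 5)}
seq = johnson_sequence(jobs)
print(seq, makespan(seq, jobs))
```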

  7. Concurrent Flame Growth, Spread and Extinction over Composite Fabric Samples in Low Speed Purely Forced Flow in Microgravity

    Science.gov (United States)

    Zhao, Xiaoyang; T'ien, James S.; Ferkul, Paul V.; Olson, Sandra L.

    2015-01-01

    As a part of the NASA BASS and BASS-II experimental projects aboard the International Space Station, flame growth, spread and extinction over a composite cotton-fiberglass fabric blend (referred to as the SIBAL fabric) were studied in low-speed concurrent forced flows. The tests were conducted in a small flow duct within the Microgravity Science Glovebox. The fuel samples measured 1.2 and 2.2 cm wide and 10 cm long. Ambient oxygen was varied from 21% down to 16% and flow speed from 40 cm/s down to 1 cm/s. A small flame resulted at low flow, enabling us to observe the entire history of flame development including ignition, flame growth, steady spread (in some cases) and decay at the end of the sample. In addition, by decreasing flow velocity during some of the tests, low-speed flame quenching extinction limits were found as a function of oxygen percentage. The quenching speeds were found to be between 1 and 5 cm/s with higher speed in lower oxygen atmosphere. The shape of the quenching boundary supports the prediction by earlier theoretical models. These long duration microgravity experiments provide a rare opportunity for solid fuel combustion since microgravity time in ground-based facilities is generally not sufficient. This is the first time that a low-speed quenching boundary in concurrent spread is determined in a clean and unambiguous manner.

  8. Electrical discharge machining for vessel sample removal

    International Nuclear Information System (INIS)

    Litka, T.J.

    1993-01-01

    Due to aging-related problems or essential metallurgy information (plant-life extension or decommissioning) of nuclear plants, sample removal from vessels may be required as part of an examination. Vessel or cladding samples with cracks may be removed to determine the cause of cracking. Vessel weld samples may be removed to determine the weld metallurgy. In all cases, an engineering analysis must be done prior to sample removal to determine the vessel's integrity upon sample removal. Electrical discharge machining (EDM) is being used for in-vessel nuclear power plant vessel sampling. Machining operations in reactor coolant system (RCS) components must be accomplished while collecting machining chips that could cause damage if they become part of the flow stream. The debris from EDM is a fine talclike particulate (no chips), which can be collected by flushing and filtration

  9. The use of Trefftz functions for approximation of measurement data in an inverse problem of flow boiling in a minichannel

    Directory of Open Access Journals (Sweden)

    Hozejowski Leszek

    2012-04-01

    Full Text Available The paper is devoted to a computational problem of predicting a local heat transfer coefficient from experimental temperature data. The experimental part refers to boiling flow of a refrigerant in a minichannel. Heat is dissipated from the heating alloy to the flowing liquid due to forced convection. The mathematical model of the problem consists of the governing Poisson equation and the proper boundary conditions. Accurate results require smoothing of the measurements, which was achieved by using Trefftz functions: the measurements were approximated with a linear combination of Trefftz functions. Because the measurement errors are known to the computational procedure, it was possible to smooth the data and also to reduce the residuals of the approximation on the boundaries.
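    A minimal sketch of the underlying idea (least-squares smoothing of noisy measurements with a linear combination of Trefftz functions) is given below, using harmonic polynomials as the simplest Trefftz basis for the Laplace operator. The paper's Trefftz functions are tailored to its Poisson-type problem and boundary conditions, so the basis and the synthetic data here are illustrative assumptions only.

```python
import numpy as np

def harmonic_basis(x, y, n_terms=6):
    """Real and imaginary parts of (x + iy)^k, k = 0..n_terms-1: harmonic polynomials,
    the simplest example of Trefftz functions for the Laplace operator."""
    z = x + 1j * y
    cols = [np.ones_like(x)]
    for k in range(1, n_terms):
        cols += [np.real(z**k), np.imag(z**k)]
    return np.column_stack(cols)

rng = np.random.default_rng(2)
# synthetic "temperature measurements" on a plate, corrupted by noise
xm, ym = rng.uniform(0, 1, 40), rng.uniform(0, 1, 40)
t_meas = 20 + 5 * xm - 3 * (xm**2 - ym**2) + rng.normal(0, 0.2, 40)

A = harmonic_basis(xm, ym)
coef, *_ = np.linalg.lstsq(A, t_meas, rcond=None)   # fit the linear combination of Trefftz functions
t_smooth = A @ coef
print("rms residual of the approximation:", np.sqrt(np.mean((t_smooth - t_meas) ** 2)))
```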

  10. The direct effects of inattention and hyperactivity/impulsivity on peer problems and mediating roles of prosocial and conduct problem behaviors in a community sample of children.

    Science.gov (United States)

    Andrade, Brendan F; Tannock, Rosemary

    2013-11-01

    This study tested whether children's symptoms of inattention and hyperactivity/impulsivity were associated with peer problems and whether these associations were mediated by conduct problems and prosocial behaviors. A community sample of 500 children, including 245 boys and 255 girls, who ranged in age from 6 to 9 years (M = 7.6, SD = 0.91), was recruited. Teachers' reports of children's inattention, hyperactivity/impulsivity, conduct problems, prosocial behaviors, and peer problems were collected. Symptoms of inattention and hyperactivity/impulsivity were significantly positively associated with peer problems. Conduct problems were associated with more peer problems and prosocial behaviors with fewer peer problems. Conduct problems and prosocial behaviors partially mediated the association between hyperactivity/impulsivity and peer problems and fully mediated the inattention-peer problems association. Findings show that prosocial behaviors and conduct problems are important variables that account for some of the negative impact of symptoms of inattention and hyperactivity/impulsivity on peer functioning.

  11. Bubble-free on-chip continuous-flow polymerase chain reaction: concept and application.

    Science.gov (United States)

    Wu, Wenming; Kang, Kyung-Tae; Lee, Nae Yoon

    2011-06-07

    Bubble formation inside a microscale channel is a significant problem in general microfluidic experiments. The problem becomes especially crucial when performing a polymerase chain reaction (PCR) on a chip which is subject to repetitive temperature changes. In this paper, we propose a bubble-free sample injection scheme applicable for continuous-flow PCR inside a glass/PDMS hybrid microfluidic chip, and attempt to provide a theoretical basis concerning bubble formation and elimination. Highly viscous paraffin oil plugs are employed in both the anterior and posterior ends of a sample plug, completely encapsulating the sample and eliminating possible nucleation sites for bubbles. In this way, internal channel pressure is increased, and vaporization of the sample is prevented, suppressing bubble formation. Use of an oil plug in the posterior end of the sample plug aids in maintaining a stable flow of a sample at a constant rate inside a heated microchannel throughout the entire reaction, as compared to using an air plug. By adopting the proposed sample injection scheme, we demonstrate various practical applications. On-chip continuous-flow PCR is performed employing genomic DNA extracted from a clinical single hair root sample, and its D1S80 locus is successfully amplified. Also, chip reusability is assessed using a plasmid vector. A single chip is used up to 10 times repeatedly without being destroyed, maintaining almost equal intensities of the resulting amplicons after each run, ensuring the reliability and reproducibility of the proposed sample injection scheme. In addition, the use of a commercially-available and highly cost-effective hot plate as a potential candidate for the heating source is investigated.

  12. Solving groundwater flow problems by conjugate-gradient methods and the strongly implicit procedure

    Science.gov (United States)

    Hill, Mary C.

    1990-01-01

    The performance of the preconditioned conjugate-gradient method with three preconditioners is compared with the strongly implicit procedure (SIP) using a scalar computer. The preconditioners considered are the incomplete Cholesky (ICCG) and the modified incomplete Cholesky (MICCG), which require the same computer storage as SIP as programmed for a problem with a symmetric matrix, and a polynomial preconditioner (POLCG), which requires less computer storage than SIP. Although POLCG is usually used on vector computers, it is included here because of its small storage requirements. In this paper, published comparisons of the solvers are evaluated, all four solvers are compared for the first time, and new test cases are presented to provide a more complete basis by which the solvers can be judged for typical groundwater flow problems. Based on nine test cases, the following conclusions are reached: (1) SIP is actually as efficient as ICCG for some of the published, linear, two-dimensional test cases that were reportedly solved much more efficiently by ICCG; (2) SIP is more efficient than other published comparisons would indicate when common convergence criteria are used; and (3) for problems that are three-dimensional, nonlinear, or both, and for which common convergence criteria are used, SIP is often more efficient than ICCG, and is sometimes more efficient than MICCG.
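    For readers who want to experiment with the preconditioned conjugate-gradient idea, the sketch below compares plain CG with CG preconditioned by an incomplete-LU factorisation on a standard 5-point groundwater-flow stencil, using SciPy. The ILU factorisation stands in for the incomplete Cholesky preconditioners discussed in the paper, and SIP itself is not available in SciPy, so this is only a loose analogue of the comparison reported above; the grid size and recharge are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D steady confined-flow model problem: -div(T grad h) = q on a square grid,
# discretised with the standard 5-point stencil (unit transmissivity)
n = 60
I = sp.identity(n, format="csr")
T1 = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
A = sp.csc_matrix(sp.kron(I, T1) + sp.kron(T1, I))
b = np.ones(n * n)                                  # uniform recharge, arbitrary units

# incomplete-LU factorisation used as the preconditioner (an ILU stand-in for the
# incomplete Cholesky preconditioners discussed in the paper)
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

iters = {"plain": 0, "precond": 0}
h_plain, _ = spla.cg(A, b, callback=lambda xk: iters.__setitem__("plain", iters["plain"] + 1))
h_prec, _ = spla.cg(A, b, M=M, callback=lambda xk: iters.__setitem__("precond", iters["precond"] + 1))

print("CG iterations without / with preconditioning:", iters["plain"], iters["precond"])
print("final residual norm (preconditioned run):", np.linalg.norm(A @ h_prec - b))
```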

  13. The Guderley problem revisited

    International Nuclear Information System (INIS)

    Ramsey, Scott D.; Kamm, James R.; Bolstad, John H.

    2009-01-01

    The self-similar converging-diverging shock wave problem introduced by Guderley in 1942 has been the source of numerous investigations since its publication. In this paper, we review the simplifications and group invariance properties that lead to a self-similar formulation of this problem from the compressible flow equations for a polytropic gas. The complete solution to the self-similar problem reduces to two coupled nonlinear eigenvalue problems: the eigenvalue of the first is the so-called similarity exponent for the converging flow, and that of the second is a trajectory multiplier for the diverging regime. We provide a clear exposition concerning the reflected shock configuration. Additionally, we introduce a new approximation for the similarity exponent, which we compare with other estimates and numerically computed values. Lastly, we use the Guderley problem as the basis of a quantitative verification analysis of a cell-centered, finite volume, Eulerian compressible flow algorithm.

  14. Ground-water flow in low permeability environments

    Science.gov (United States)

    Neuzil, Christopher E.

    1986-01-01

    Certain geologic media are known to have small permeability; subsurface environments composed of these media and lacking well developed secondary permeability have groundwater flow systems with many distinctive characteristics. Moreover, groundwater flow in these environments appears to influence the evolution of certain hydrologic, geologic, and geochemical systems, may affect the accumulation of petroleum and ores, and probably has a role in the structural evolution of parts of the crust. Such environments are also important in the context of waste disposal. This review attempts to synthesize the diverse contributions of various disciplines to the problem of flow in low-permeability environments. Problems hindering analysis are enumerated together with suggested approaches to overcoming them. A common thread running through the discussion is the significance of size- and time-scale limitations of the ability to directly observe flow behavior and make measurements of parameters. These limitations have resulted in rather distinct small- and large-scale approaches to the problem. The first part of the review considers experimental investigations of low-permeability flow, including in situ testing; these are generally conducted on temporal and spatial scales which are relatively small compared with those of interest. Results from this work have provided increasingly detailed information about many aspects of the flow but leave certain questions unanswered. Recent advances in laboratory and in situ testing techniques have permitted measurements of permeability and storage properties in progressively “tighter” media and investigation of transient flow under these conditions. However, very large hydraulic gradients are still required for the tests; an observational gap exists for typical in situ gradients. The applicability of Darcy's law in this range is therefore untested, although claims of observed non-Darcian behavior appear flawed. Two important nonhydraulic

  15. The time constrained multi-commodity network flow problem and its application to liner shipping network design

    DEFF Research Database (Denmark)

    Karsten, Christian Vad; Pisinger, David; Røpke, Stefan

    2015-01-01

    -commodity network flow problem with transit time constraints which puts limits on the duration of the transit of the commodities through the network. It is shown that for the particular application it does not increase the solution time to include the transit time constraints and that including the transit time...... is essential to offer customers a competitive product. © 2015 Elsevier Ltd. All rights reserved....

  16. A dual exterior point simplex type algorithm for the minimum cost network flow problem

    Directory of Open Access Journals (Sweden)

    Geranis George

    2009-01-01

    Full Text Available A new dual simplex type algorithm for the Minimum Cost Network Flow Problem (MCNFP) is presented. The proposed algorithm belongs to a special 'exterior-point simplex type' category. Similarly to the classical network dual simplex algorithm (NDSA), this algorithm starts with a dual feasible tree-solution and reduces the primal infeasibility, iteration by iteration. However, contrary to the NDSA, the new algorithm does not always maintain a dual feasible solution. Instead, the new algorithm might reach a basic point (tree-solution) outside the dual feasible area (exterior point - dual infeasible tree).
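    For comparison purposes, a minimum cost network flow instance can be set up and solved with an off-the-shelf network-simplex solver in a few lines; the sketch below uses NetworkX on a tiny invented instance. This is a baseline only and does not implement the dual exterior-point algorithm proposed in the paper.

```python
import networkx as nx

# small MCNFP instance: ship 4 units from s to t at minimum cost
G = nx.DiGraph()
G.add_node("s", demand=-4)          # negative demand = supply
G.add_node("t", demand=4)
G.add_edge("s", "a", capacity=3, weight=2)
G.add_edge("s", "b", capacity=2, weight=4)
G.add_edge("a", "t", capacity=3, weight=1)
G.add_edge("b", "t", capacity=2, weight=1)
G.add_edge("a", "b", capacity=1, weight=1)

flow = nx.min_cost_flow(G)          # network-simplex baseline solver
print(flow, nx.cost_of_flow(G, flow))
```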

  17. Is Investment-Cash Flow Sensitivity Caused by the Agency Costs or Asymmetric Information? Evidence from the UK

    NARCIS (Netherlands)

    Pawlina, G.; Renneboog, L.D.R.

    2005-01-01

    We investigate the investment-cash flow sensitivity of a large sample of UK listed firms and confirm that investment is strongly cash flow-sensitive. Is this suboptimal investment policy the result of agency problems when managers with high discretion overinvest, or of asymmetric information when

  18. Recent bibliography on analytical and sampling problems of a PWR primary coolant Suppl. 3

    International Nuclear Information System (INIS)

    Illy, H.

    1985-03-01

    The present supplement to the bibliography on analytical and sampling problems of PWR primary coolant covers the literature published in 1984 and includes some references overlooked in the previous volumes dealing with the publications of the last 10 years. References are divided into topics characterized by the following headlines: boric acid; chloride; chlorine; carbon dioxide; general; gas analysis; hydrogen isotopes; iodine; iodide; nitrogen; noble gases and radium; ammonia; ammonium; oxygen; other elements; radiation monitoring; reactor safety; sampling; water chemistry. Under a given subject, bibliographical information is listed in alphabetical order of the authors. (V.N.)

  19. Stokes' second problem for magnetohydrodynamics flow in a Burgers' fluid: the cases γ = λ²/4 and γ>λ²/4.

    Directory of Open Access Journals (Sweden)

    Ilyas Khan

    Full Text Available The present work is concerned with exact solutions of Stokes' second problem for magnetohydrodynamic (MHD) flow of a Burgers' fluid. The fluid over a flat plate is assumed to be electrically conducting in the presence of a uniform magnetic field applied in the outward transverse direction to the flow. The equations governing the flow are modeled and then solved using the Laplace transform technique. The expressions for the velocity field and tangential stress are developed when the relaxation time satisfies the condition γ = λ²/4 or γ > λ²/4. The obtained closed-form solutions are presented in the form of simple or multiple integrals in terms of Bessel functions and terms with only Bessel functions. The numerical integration is performed and the graphical results are displayed for the involved flow parameters. It is found that the velocity decreases whereas the shear stress increases when the Hartmann number is increased. The solutions corresponding to Stokes' first problem for hydrodynamic Burgers' fluids are obtained as limiting cases of the present solutions. Similar solutions for Stokes' second problem of hydrodynamic Burgers' fluids and those for Newtonian and Oldroyd-B fluids can also be obtained as limiting cases of these solutions.

  20. Solution of Inverse Problems using Bayesian Approach with Application to Estimation of Material Parameters in Darcy Flow

    Czech Academy of Sciences Publication Activity Database

    Domesová, Simona; Beres, Michal

    2017-01-01

    Roč. 15, č. 2 (2017), s. 258-266 ISSN 1336-1376 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : Bayesian statistics * Cross-Entropy method * Darcy flow * Gaussian random field * inverse problem Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://advances.utc.sk/index.php/AEEE/article/view/2236

  1. The effect of deformation on two-phase flow through proppant-packed fractured shale samples: A micro-scale experimental investigation

    Science.gov (United States)

    Arshadi, Maziar; Zolfaghari, Arsalan; Piri, Mohammad; Al-Muntasheri, Ghaithan A.; Sayed, Mohammed

    2017-07-01

    We present the results of an extensive micro-scale experimental investigation of two-phase flow through miniature, fractured reservoir shale samples that contained different packings of proppant grains. We investigated permeability reduction in the samples by conducting experiments under a wide range of net confining pressures. Three different proppant grain distributions in three individual fractured shale samples were studied: i) multi-layer, ii) uniform mono-layer, and iii) non-uniform mono-layer. We performed oil-displacing-brine (drainage) and brine-displacing-oil (imbibition) flow experiments in the proppant packs under net confining pressures ranging from 200 to 6000 psi. The flow experiments were performed using a state-of-the-art miniature core-flooding apparatus integrated with a high-resolution, X-ray microtomography system. We visualized fluid occupancies, proppant embedment, and shale deformation under different flow and stress conditions. We examined deformation of pore space within the proppant packs and its impact on permeability and residual trapping, proppant embedment due to changes in net confining stress, shale surface deformation, and disintegration of proppant grains at high stress conditions. In particular, geometrical deformation and two-phase flow effects within the proppant pack impacting hydraulic conductivity of the medium were probed. A significant reduction in effective oil permeability at irreducible water saturation was observed due to increase in confining pressure. We propose different mechanisms responsible for the observed permeability reduction in different fracture packings. Samples with dissimilar proppant grain distributions showed significantly different proppant embedment behavior. Thinner proppant layer increased embedment significantly and lowered the onset confining pressure of embedment. As confining stress was increased, small embedments caused the surface of the shale to fracture. The produced shale fragments were

  2. Hearing Problems

    Science.gov (United States)

    Loss in the ability to hear or discriminate ... This flow chart will help direct you if hearing loss is a problem for you or a ...

  3. Polygenic scores predict alcohol problems in an independent sample and show moderation by the environment.

    Science.gov (United States)

    Salvatore, Jessica E; Aliev, Fazil; Edwards, Alexis C; Evans, David M; Macleod, John; Hickman, Matthew; Lewis, Glyn; Kendler, Kenneth S; Loukola, Anu; Korhonen, Tellervo; Latvala, Antti; Rose, Richard J; Kaprio, Jaakko; Dick, Danielle M

    2014-04-10

    Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems-derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female)-predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07-0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitude of the observed effects are small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.
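    The statistical core of such an analysis (a main-effects-plus-interaction regression of a phenotype on a polygenic score and an environmental moderator) can be sketched as follows with statsmodels on synthetic data. The variable names, effect sizes, and the ordinary-least-squares specification are assumptions made for illustration and do not reproduce the paper's models or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000

# synthetic stand-ins: a standardised polygenic score, a parental-knowledge scale,
# and an alcohol-problems outcome generated with a small score-by-environment interaction
prs = rng.normal(size=n)
knowledge = rng.normal(size=n)
problems = (2 + 0.3 * prs - 0.4 * knowledge
            - 0.25 * prs * knowledge + rng.normal(size=n))

df = pd.DataFrame({"problems": problems, "prs": prs, "knowledge": knowledge})

# main effects plus the polygene-by-environment interaction term
model = smf.ols("problems ~ prs * knowledge", data=df).fit()
print(model.summary().tables[1])
```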

  4. Condom-related problems among a racially diverse sample of young men who have sex with men.

    Science.gov (United States)

    Du Bois, Steve N; Emerson, Erin; Mustanski, Brian

    2011-10-01

    We described frequencies of condom-related problems in a racially diverse sample of young men who have sex with men (YMSM), and tested these condom-related problems as an explanation for racial disparities in HIV rates among YMSM. Participants were 119 YMSM from a longitudinal study of sexual minority health behaviors. Almost all participants (95.4%) experienced at least one condom error. On average, African American and non-African American YMSM experienced the same number of recent condom-related problems. Therefore, differences in condom-related problems are unlikely to explain racial disparities in HIV rates among YMSM. When serving YMSM, providers should both promote condom use and explain steps to correct condom use.

  5. RELIGIOSITY AND ACADEMIC FLOW AMONG STUDENTS [RELIGIUSITAS DENGAN FLOW AKADEMIK PADA SISWA]

    Directory of Open Access Journals (Sweden)

    Putri Saraswati

    2018-02-01

    Full Text Available A common problem in education is that students experience boredom in the learning process, whereas learning requires concentration, interest and motivation, which in turn require students to experience flow. Flow itself involves total concentration, which relates to the concept of solemn devotion in religiosity. The purpose of this research is to examine the relationship between religiosity and academic flow. The design of this study is a non-experimental correlational design. Data were collected using a cluster sampling technique. The subjects were 222 students in the city of Malang. The instruments used were a religiosity scale constructed by the researchers and an academic flow scale based on the LIS (The Flow Inventory for Students), to which the researchers added several items. Data were analysed using the product-moment correlation. The analysis yielded r = 0.508, p = 0.000 (sig < 0.01), indicating a significant positive relationship between religiosity and academic flow. Religiosity makes an effective contribution of 25.8% to academic flow, with the remaining 74.2% influenced by other factors.

  6. Food insecurity and mental health problems among a community sample of young adults.

    Science.gov (United States)

    Pryor, Laura; Lioret, Sandrine; van der Waerden, Judith; Fombonne, Éric; Falissard, Bruno; Melchior, Maria

    2016-08-01

    Food insecurity has been found to be related to anxiety and depression; however, the association with other psychiatric disorders, particularly among young adults, is not well known. We examined whether food insecurity is independently associated with four common mental health problems among a community sample of young adults in France. Data are from the TEMPO longitudinal cohort study. In 1991, participants' parents provided information on health and family socioeconomic characteristics. In 2011, participants' (18-35 years) reported food insecurity, mental health symptoms, and socioeconomic conditions (n = 1214). Mental health problems ascertained included major depressive episode, suicidal ideation, attention deficit and hyperactivity disorder, and substance abuse and/or dependence (nicotine, alcohol and cannabis). Cross-sectional associations between food insecurity and mental health problems were tested using modified Poisson regressions, weighted by inverse probability weights (IPW) of exposure. This makes food insecure and not food insecure participants comparable on all characteristics including socioeconomic factors and past mental health problems. 8.5 % of young adults were food insecure. In IPW-controlled analyses, food insecurity was associated with increased levels of depression (RR = 2.01, 95 % CI 1.01-4.02), suicidal ideation (RR = 3.23, 95 % CI 1.55-6.75) and substance use problems (RR = 1.68, 95 % CI 1.15-2.46). Food insecurity co-occurs with depression, suicidal ideation and substance use problems in young adulthood. Our findings suggest that reductions in food insecurity during this important life period may help prevent mental health problems. Policies aiming to alleviate food insecurity should also address individuals' psychiatric problems, to prevent a lifelong vicious circle of poor mental health and low socioeconomic attainment.

  7. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on for example, complex geostatistical models and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution), than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
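    A stripped-down version of the workflow described above (train a fast neural-network surrogate on off-line runs of an expensive forward model, then sample the posterior using only surrogate evaluations) is sketched below with scikit-learn. The forward model, noise level, and Metropolis sampler are invented stand-ins; in particular, the probabilistic treatment of the surrogate modeling error described in the abstract is only hinted at in a comment.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

def forward(m):
    """Stand-in for an expensive forward solver (e.g. full-waveform traveltimes):
    a smooth nonlinear map from 2 model parameters to 5 'data' values."""
    x = np.linspace(0, 1, 5)
    return m[0] * np.sin(3 * x) + m[1] * x**2

# 1) build a training set by running the expensive forward model off-line
M_train = rng.uniform(-1, 1, size=(2000, 2))
D_train = np.array([forward(m) for m in M_train])
net = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=2000,
                   random_state=0).fit(M_train, D_train)

# 2) observed data from a "true" model, with noise
m_true = np.array([0.4, -0.7])
sigma = 0.02
d_obs = forward(m_true) + rng.normal(0, sigma, 5)

def loglike(m):
    # likelihood evaluated with the fast neural-network surrogate; a modeling-error
    # term could be added to sigma to account for the surrogate approximation
    r = net.predict(m.reshape(1, -1))[0] - d_obs
    return -0.5 * np.sum(r**2) / sigma**2

# 3) Metropolis sampling using only surrogate evaluations
m, lm, chain = np.zeros(2), loglike(np.zeros(2)), []
for _ in range(10_000):
    prop = m + 0.05 * rng.normal(size=2)
    lp = loglike(prop)
    if np.log(rng.uniform()) < lp - lm:
        m, lm = prop, lp
    chain.append(m.copy())

print("posterior mean:", np.array(chain)[3000:].mean(axis=0), "true:", m_true)
```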

  8. Monitoring individual traffic flows within the ATLAS TDAQ network

    CERN Document Server

    Sjoen, R; Ciobotaru, M; Batraneanu, S M; Leahu, L; Martin, B; Al-Shabibi, A

    2010-01-01

    The ATLAS data acquisition system consists of four different networks interconnecting up to 2000 processors using up to 200 edge switches and five multi-blade chassis devices. The architecture of the system has been described in [1] and its operational model in [2]. Classical, SNMP-based, network monitoring provides statistics on aggregate traffic, but for performance monitoring and troubleshooting purposes there was an imperative need to identify and quantify single traffic flows. sFlow [3] is an industry standard based on statistical sampling which attempts to provide a solution to this. Due to the size of the ATLAS network, the collection and analysis of the sFlow data from all devices generates a data handling problem of its own. This paper describes how this problem is addressed by making it possible to collect and store data either centrally or distributed according to need. The methods used to present the results in a relevant fashion for system analysts are discussed and we explore the possibilities a...

  9. Machine Learning and Inverse Problem in Geodynamics

    Science.gov (United States)

    Shahnas, M. H.; Yuen, D. A.; Pysklywec, R.

    2017-12-01

    During the past few decades numerical modeling and traditional HPC have been widely deployed in many diverse fields for problem solutions. However, in recent years the rapid emergence of machine learning (ML), a subfield of the artificial intelligence (AI), in many fields of sciences, engineering, and finance seems to mark a turning point in the replacement of traditional modeling procedures with artificial intelligence-based techniques. The study of the circulation in the interior of Earth relies on the study of high pressure mineral physics, geochemistry, and petrology where the number of the mantle parameters is large and the thermoelastic parameters are highly pressure- and temperature-dependent. More complexity arises from the fact that many of these parameters that are incorporated in the numerical models as input parameters are not yet well established. In such complex systems the application of machine learning algorithms can play a valuable role. Our focus in this study is the application of supervised machine learning (SML) algorithms in predicting mantle properties with the emphasis on SML techniques in solving the inverse problem. As a sample problem we focus on the spin transition in ferropericlase and perovskite that may cause slab and plume stagnation at mid-mantle depths. The degree of the stagnation depends on the degree of negative density anomaly at the spin transition zone. The training and testing samples for the machine learning models are produced by the numerical convection models with known magnitudes of density anomaly (as the class labels of the samples). The volume fractions of the stagnated slabs and plumes which can be considered as measures for the degree of stagnation are assigned as sample features. The machine learning models can determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. Employing support vector machine (SVM) algorithms we show that SML techniques
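    As a toy illustration of the supervised-learning step, the sketch below trains a support vector classifier to map two features (playing the role of stagnated-slab and stagnated-plume volume fractions) to a density-anomaly class, using scikit-learn on synthetic data. The features, labels, and kernel settings are assumptions made for illustration and are unrelated to the convection simulations used in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# toy stand-in for the convection-model training set: two "stagnation" features,
# three classes of spin-transition-induced density anomaly
n = 600
X = rng.uniform(0, 1, size=(n, 2))
y = np.digitize(0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.05, n), [0.35, 0.65])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```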

  10. Trends and perspectives of flow injection/sequential injection on-line sample-pretreatment schemes coupled to ETAAS

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald

    2005-01-01

    Flow injection (FI) analysis, the first generation of this technique, became in the 1990s supplemented by its second generation, sequential injection (SI), and most recently by the third generation (i.e.,Lab-on-Valve). The dominant role played by FI in automatic, on-line, sample pretreatments in ...

  11. Analytical solution to the circularity problem in the discounted cash flow valuation framework

    Directory of Open Access Journals (Sweden)

    Felipe Mejía-Peláez

    2011-12-01

    Full Text Available In this paper we propose an analytical solution to the circularity problem between value and cost of capital. Our solution is derived starting from a central principle of finance that relates value today to value, cash flow, and the discount rate for next period. We present a general formulation without circularity for the equity value (E), cost of levered equity (Ke), levered firm value (V), and the weighted average cost of capital (WACC). We furthermore compare the results obtained from these formulas with the results of the application of the Adjusted Present Value approach (no circularity) and the iterative solution of circularity based upon the iteration feature of a spreadsheet, concluding that all methods yield exactly the same answer. The advantage of this solution is that it avoids problems such as using manual methods (i.e., the popular “Rolling WACC”) ignoring the circularity issue, setting a target leverage (usually constant) with the inconsistencies that result from it, the wrong use of book values, or attributing the discrepancies in values to rounding errors.
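    The circularity the paper removes analytically is usually resolved numerically by fixed-point iteration; the sketch below shows that iterative ("spreadsheet-style") resolution on an invented four-period forecast, so the two approaches can be contrasted. The cash flows, debt schedule, and the particular cost-of-equity formula used (Ke = Ku + (Ku - Kd) * D/E, i.e. tax shields discounted at Ku) are illustrative assumptions, not the paper's formulas.

```python
# Iterative resolution of the value <-> WACC circularity for a finite-horizon forecast.
# Assumption used below: Ke_t = Ku + (Ku - Kd) * D_t / E_t, with E_t = V_t - D_t.

fcf = [0.0, 120.0, 130.0, 140.0, 1500.0]   # free cash flows, t = 1..4 (t = 0 unused)
debt = [400.0, 380.0, 350.0, 300.0, 0.0]   # outstanding debt at the start of each period
ku, kd, tax = 0.12, 0.07, 0.30             # unlevered cost, cost of debt, tax rate
n = len(fcf) - 1

v = [sum(fcf[t] / (1 + ku) ** t for t in range(1, n + 1))] * (n + 1)  # initial guess
for _ in range(100):                        # fixed-point iteration on the value vector
    v_new = [0.0] * (n + 1)                 # terminal value after the last cash flow is zero
    for t in range(n - 1, -1, -1):
        e = v[t] - debt[t]                  # equity value implied by the previous iterate
        ke = ku + (ku - kd) * debt[t] / e
        wacc = ke * e / v[t] + kd * (1 - tax) * debt[t] / v[t]
        v_new[t] = (v_new[t + 1] + fcf[t + 1]) / (1 + wacc)
    if max(abs(a - b) for a, b in zip(v, v_new)) < 1e-9:
        v = v_new
        break
    v = v_new

print("levered firm value V_0:", round(v[0], 2), " equity value E_0:", round(v[0] - debt[0], 2))
```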

  12. Monitoring individual traffic flows within the ATLAS TDAQ network

    International Nuclear Information System (INIS)

    Sjoen, R; Batraneanu, S M; Leahu, L; Martin, B; Al-Shabibi, A; Stancu, S; Ciobotaru, M

    2010-01-01

    The ATLAS data acquisition system consists of four different networks interconnecting up to 2000 processors using up to 200 edge switches and five multi-blade chassis devices. The architecture of the system has been described in [1] and its operational model in [2]. Classical, SNMP-based, network monitoring provides statistics on aggregate traffic, but for performance monitoring and troubleshooting purposes there was an imperative need to identify and quantify single traffic flows. sFlow [3] is an industry standard based on statistical sampling which attempts to provide a solution to this. Due to the size of the ATLAS network, the collection and analysis of the sFlow data from all devices generates a data handling problem of its own. This paper describes how this problem is addressed by making it possible to collect and store data either centrally or distributed according to need. The methods used to present the results in a relevant fashion for system analysts are discussed and we explore the possibilities and limitations of this diagnostic tool, giving an example of its use in solving system problems that arise during the ATLAS data taking.

  13. Microscopic Holography for flow over rough plate

    Science.gov (United States)

    Talapatra, Siddharth; Hong, Jiarong; Lu, Yuan; Katz, Joseph

    2008-11-01

    Our objective is to measure the near-wall flow structures in a turbulent channel flow over a rough wall. In-line microscopic holographic PIV can resolve the 3-D flow field in a small sample volume, but recording holograms through a rough surface is a challenge. To solve this problem, we match the refractive index of the fluid with that of the wall. Proof-of-concept tests involve an acrylic plate containing uniformly distributed, closely packed 0.45 mm high pyramids with a slope angle of 22°, located within a concentrated sodium iodide solution. Holograms recorded by a 4864 x 3248 pixel digital camera at 10X magnification provide a field of view of 3.47 mm x 2.32 mm and a pixel resolution of 0.714 μm. Due to index matching, reconstructed seed particles can be clearly seen over the entire volume, with only faint traces of the rough wall, which can be removed. Planned experiments will be performed in a 20 x 5 cm rectangular channel with the top and bottom plates having the same roughness as the sample plate.

  14. Transformation of Commercial Flows into Physical Flows of Electricity – Flow Based Method

    Directory of Open Access Journals (Sweden)

    M. Adamec

    2009-01-01

    Full Text Available We are witnesses of large-scale electricity transport between European countries under the umbrella of the UCTE organization. This is due to the inability of generators to satisfy the growing consumption in some regions. In this context, we distinguish between two types of flow. The first type is physical flow, which causes costs in the transmission grid, whilst the second type is commercial flow, which provides revenues for the market participants. The old methods for allocating transfer capacity fail to take this duality into account. The old methods, which allocate transmission border capacity to “virtual” commercial flows that, in fact, will not flow over this border, do not lead to optimal allocation. Some flows are uselessly rejected and, conversely, some accepted flows can cause congestion on another border. The Flow Based Allocation method (FBA) is a method which aims to solve this problem. Another goal of FBA is to ensure sustainable expansion of transmission capacity. Transmission capacity is important because it represents a way to establish better transmission system stability, and it provides a distribution channel for electricity to customers abroad. For optimal development, it is necessary to ensure the right division of revenue allocation among the market participants. This paper contains a brief description of the FBA method. Problems of revenue maximization and optimal revenue distribution are mentioned.

  15. Exact solutions to traffic density estimation problems involving the Lighthill-Whitham-Richards traffic flow model using mixed integer programming

    KAUST Repository

    Canepa, Edward S.; Claudel, Christian G.

    2012-01-01

    This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill Whitham Richards equation. We first present an equivalent formulation of the problem using an Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation result in linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time given incomplete and inaccurate traffic data as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during Mobile Century experiment. © 2012 IEEE.

  16. Exact solutions to traffic density estimation problems involving the Lighthill-Whitham-Richards traffic flow model using mixed integer programming

    KAUST Repository

    Canepa, Edward S.

    2012-09-01

    This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill Whitham Richards equation. We first present an equivalent formulation of the problem using an Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation result in linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time given incomplete and inaccurate traffic data as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during Mobile Century experiment. © 2012 IEEE.

  17. Dynamic Flow-through Methods for Metal Fractionation in Environmental Solid Samples

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald; Petersen, Roongrat

    Accumulation of metal ions in different compartments of the biosphere and their possible mobilization under changing environmental conditions induce a perturbation of the ecosystem and may cause adverse health effects. Nowadays, it is widely recognized that the information on total content... the ecotoxicological significance of metal ions in solid environmental samples. The background of end-over-end fractionation for releasing metal species bound to particular soil phases is initially discussed, its relevant features and limitations being thoroughly described. However, taking into account that naturally occurring processes always take place under dynamic conditions, recent trends have been focused on the development of alternative flow-through dynamic methods aimed at mimicking environmental events more correctly than their classical extraction counterparts. In this lecture particular emphasis is paid......

  18. Potentiometric chip-based multipumping flow system for the simultaneous determination of fluoride, chloride, pH, and redox potential in water samples.

    Science.gov (United States)

    Chango, Gabriela; Palacio, Edwin; Cerdà, Víctor

    2018-08-15

    A simple potentiometric chip-based multipumping flow system (MPFS) has been developed for the simultaneous determination of fluoride, chloride, pH, and redox potential in water samples. The proposed system was developed by using a poly(methyl methacrylate) chip microfluidic-conductor using the advantages of flow techniques with potentiometric detection. For this purpose, an automatic system has been designed and built by optimizing the variables involved in the process, such as: pH, ionic strength, stirring and sample volume. This system was applied successfully to water samples getting a versatile system with an analysis frequency of 12 samples per hour. Good correlation between chloride and fluoride concentration measured with ISE and ionic chromatography technique suggests satisfactory reliability of the system. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. High-order multi-implicit spectral deferred correction methods for problems of reactive flow

    International Nuclear Information System (INIS)

    Bourlioux, Anne; Layton, Anita T.; Minion, Michael L.

    2003-01-01

    Models for reacting flow are typically based on advection-diffusion-reaction (A-D-R) partial differential equations. Many practical cases correspond to situations where the relevant time scales associated with each of the three sub-processes can be widely different, leading to disparate time-step requirements for robust and accurate time-integration. In particular, interesting regimes in combustion correspond to systems in which diffusion and reaction are much faster processes than advection. The numerical strategy introduced in this paper is a general procedure to account for this time-scale disparity. The proposed methods are high-order multi-implicit generalizations of spectral deferred correction methods (MISDC methods), constructed for the temporal integration of A-D-R equations. Spectral deferred correction methods compute a high-order approximation to the solution of a differential equation by using a simple, low-order numerical method to solve a series of correction equations, each of which increases the order of accuracy of the approximation. The key feature of MISDC methods is their flexibility in handling several sub-processes implicitly but independently, while avoiding the splitting errors present in traditional operator-splitting methods and also allowing for different time steps for each process. The stability, accuracy, and efficiency of MISDC methods are first analyzed using a linear model problem and the results are compared to semi-implicit spectral deferred correction methods. Furthermore, numerical tests on simplified reacting flows demonstrate the expected convergence rates for MISDC methods of orders three, four, and five. The gain in efficiency by independently controlling the sub-process time steps is illustrated for nonlinear problems, where reaction and diffusion are much stiffer than advection. Although the paper focuses on this specific time-scales ordering, the generalization to any ordering combination is straightforward
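    To make the deferred-correction idea concrete, the sketch below implements a single-rate, fully explicit SDC step for a scalar ODE: a forward-Euler predictor on Chebyshev-Lobatto sub-nodes followed by correction sweeps that use a spectral integration matrix. It is not the multi-implicit MISDC scheme of the paper (there is no operator splitting and no implicit treatment of stiff terms); the node choice and test problem are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import lagrange

def sdc_step(f, t0, y0, dt, tau, S, n_sweeps):
    """One SDC step: explicit-Euler predictor plus n_sweeps correction sweeps
    on the sub-nodes t0 + dt*tau (single-rate, fully explicit variant)."""
    M = len(tau)
    y = np.empty(M)
    y[0] = y0
    for m in range(M - 1):                      # predictor: forward Euler between sub-nodes
        y[m + 1] = y[m] + dt * (tau[m + 1] - tau[m]) * f(t0 + dt * tau[m], y[m])
    for _ in range(n_sweeps):                   # correction sweeps
        fk = np.array([f(t0 + dt * tau[m], y[m]) for m in range(M)])
        y_new = np.empty(M)
        y_new[0] = y0
        for m in range(M - 1):
            quad = dt * S[m] @ fk               # spectral integral of the previous iterate
            y_new[m + 1] = (y_new[m]
                            + dt * (tau[m + 1] - tau[m])
                            * (f(t0 + dt * tau[m], y_new[m]) - fk[m])
                            + quad)
        y = y_new
    return y[-1]

# Chebyshev-Lobatto sub-nodes on [0, 1] and the node-to-node integration matrix
M = 4
tau = 0.5 * (1 - np.cos(np.pi * np.arange(M) / (M - 1)))
S = np.zeros((M - 1, M))
for j in range(M):
    e = np.zeros(M)
    e[j] = 1.0
    P = lagrange(tau, e).integ()                # antiderivative of the j-th Lagrange basis polynomial
    S[:, j] = P(tau[1:]) - P(tau[:-1])

f = lambda t, y: -y                             # simple test problem y' = -y
y, t, dt = 1.0, 0.0, 0.25
while t < 1.0 - 1e-12:
    y = sdc_step(f, t, y, dt, tau, S, n_sweeps=3)
    t += dt
print("SDC result:", y, " exact:", np.exp(-1.0))
```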

  20. Route optimisation and solving Zermelo’s navigation problem during long distance migration in cross flows

    DEFF Research Database (Denmark)

    Hays, Graeme C.; Christensen, Asbjørn; Fossette, Sabrina

    2014-01-01

    The optimum path to follow when subjected to cross flows was first considered over 80 years ago by the German mathematician Ernst Zermelo, in the context of a boat being displaced by ocean currents, and has become known as the ‘Zermelo navigation problem’. However, the ability of migrating animals...... to solve this problem has received limited consideration, even though wind and ocean currents cause the lateral displacement of flyers and swimmers, respectively, particularly during long-distance journeys of 1000s of kilometres. Here, we examine this problem by combining long-distance, open-ocean marine...... not follow the optimum (Zermelo's) route. Even though adult marine turtles regularly complete incredible long-distance migrations, these vertebrates primarily rely on course corrections when entering neritic waters during the final stages of migration. Our work introduces a new perspective in the analysis...
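    In the simplest setting of a uniform, time-independent cross flow, the optimal strategy reduces to holding a constant heading that cancels the transverse flow component; the sketch below computes that heading and the resulting speed made good. The current fields analysed in the paper are spatially and temporally varying, so this is only the textbook limiting case, with invented speeds.

```python
import numpy as np

def heading_for_target(v_self, current, goal_dir):
    """Constant heading that cancels the cross-flow component so the ground track
    points straight at the goal (Zermelo's problem in its simplest, uniform-flow setting)."""
    g = np.asarray(goal_dir, dtype=float)
    g /= np.linalg.norm(g)
    n = np.array([-g[1], g[0]])              # unit normal to the goal direction
    c_perp = float(current @ n)              # cross-flow component to be cancelled
    if abs(c_perp) > v_self:
        raise ValueError("current too strong: the goal cannot be held on a straight track")
    sin_a = -c_perp / v_self                 # heading offset angle relative to the goal line
    cos_a = np.sqrt(1 - sin_a**2)
    heading = cos_a * g + sin_a * n
    ground_speed = v_self * cos_a + float(current @ g)
    return heading, ground_speed

# example: 1.5 m/s swimmer, 0.8 m/s current pushing it sideways, goal due east
heading, speed = heading_for_target(1.5, np.array([0.0, 0.8]), np.array([1.0, 0.0]))
print("heading vector:", heading.round(3), " speed made good along track:", round(speed, 3))
```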

  1. SIPPI: A Matlab toolbox for sampling the solution to inverse problems with complex prior information

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Looms, Majken Caroline

    2013-01-01

    We present an application of the SIPPI Matlab toolbox, to obtain a sample from the a posteriori probability density function for the classical tomographic inversion problem. We consider a number of different forward models, linear and non-linear, such as ray based forward models that rely...

  2. Computer simulation of hopper flow

    International Nuclear Information System (INIS)

    Potapov, A.V.; Campbell, C.S.

    1996-01-01

    This paper describes two-dimensional computer simulations of granular flow in plane hoppers. The simulations can reproduce an experimentally observed asymmetric unsteadiness for monodispersed particle sizes, but also could eliminate it by adding a small amount of polydispersity. This appears to be a result of the strong packings that may be formed by monodispersed particles and is thus a noncontinuum effect. The internal stress state was also sampled, which among other things, allows an evaluation of common assumptions made in granular material models. These showed that the internal friction coefficient is far from a constant, which is in contradiction to common models based on plasticity theory which assume that the material is always at the point of imminent yield. Furthermore, it is demonstrated that rapid granular flow theory, another common modeling technique, is inapplicable to this problem even near the exit where the flow is moving its fastest. copyright 1996 American Institute of Physics

  3. Heat transfer and fluid flow in regular rod arrays with opposing flow

    International Nuclear Information System (INIS)

    Yang, J.W.

    1979-01-01

    The heat transfer and fluid flow problem of opposing flow in the fully developed laminar region has been solved analytically for regular rod arrays. The problem is governed by two parameters: the pitch-to-diameter ratio and the Grashof-to-Reynolds number ratio. The critical Gr/Re ratios for flow separation caused by the upward buoyancy force on the downward flow were evaluated for a large range of P/D ratios of the triangular array. Numerical results reveal that both the heat transfer and pressure loss are reduced by the buoyancy force. Applications to nuclear reactors are discussed

  4. Generalized network improvement and packing problems

    CERN Document Server

    Holzhauser, Michael

    2016-01-01

    Michael Holzhauser discusses generalizations of well-known network flow and packing problems by additional or modified side constraints. By exploiting the inherent connection between the two problem classes, the author investigates the complexity and approximability of several novel network flow and packing problems and presents combinatorial solution and approximation algorithms. Contents: Fractional Packing and Parametric Search Frameworks; Budget-Constrained Minimum Cost Flows: The Continuous Case; Budget-Constrained Minimum Cost Flows: The Discrete Case; Generalized Processing Networks; Convex Generalized Flows. Target Groups: researchers and students in the fields of mathematics, computer science, and economics; practitioners in operations research and logistics. The Author: Dr. Michael Holzhauser studied computer science at the University of Kaiserslautern and is now a research fellow in the Optimization Research Group at the Department of Mathematics of the University of Kaiserslautern.

  5. Polygenic Scores Predict Alcohol Problems in an Independent Sample and Show Moderation by the Environment

    Directory of Open Access Journals (Sweden)

    Jessica E. Salvatore

    2014-04-01

    Full Text Available Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems—derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female)—predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07–0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitude of the observed effects are small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.

  6. Assessment of the Utility of Cytology and Flow Cytometry of Cerebrospinal Fluid Samples in Clinical Practice.

    Science.gov (United States)

    Nam, Anna S; Giorgadze, Tamara; Tam, Wayne; Chadburn, Amy

    2018-01-01

    We sought to assess the utility and limitations of both flow cytometry (FC) and cytology for the analysis of cerebrospinal fluid (CSF) in a practical clinical setting. A total of 393 consecutive CSF samples from 171 patients submitted for both cytomorphologic and FC assessments were analyzed. Both FC and cytology findings were negative for malignancy in 315/393 samples (80%), and either positive (POS) or suspicious/atypical (SUSP/AT) in 7% of samples. This resulted in high agreement between FC and cytology (87%). Minor discrepancies were present in 4% of the cases. In 28 samples, an abnormal population was detected by FC but not by cytology. FC and cytology are important complementary methods for analyzing CSF samples. In cases where cytology is SUSP/AT and FC is inconclusive or negative, additional specimens should be submitted for immunostaining, cytogenetics, and/or molecular studies. © 2018 S. Karger AG, Basel.

  7. Flow injection determination of lead and cadmium in hair samples from workers exposed to welding fumes

    International Nuclear Information System (INIS)

    Cespon-Romero, R.M.; Yebra-Biurrun, M.C.

    2007-01-01

    A flow injection procedure involving continuous acid leaching, for the determination of lead and cadmium by flame atomic absorption spectrometry in hair samples of persons in permanent contact with a polluted workplace environment, is proposed. Variables such as sonication time, nature and concentration of the acid solution used as leaching solution, leaching temperature, flow-rate of the continuous manifold, leaching solution volume and hair particle size were simultaneously studied by applying a Plackett-Burman design approach. Results showed that nitric acid concentration (leaching solution), leaching temperature and sonication time were statistically significant variables (confidence interval of 95%). These last two variables were finally optimised by using a central composite design. The proposed procedure allowed the determination of cadmium and lead with limits of detection of 0.1 and 1.0 μg g⁻¹, respectively. The accuracy of the developed procedure was evaluated by the analysis of a certified reference material (CRM 397, human hair, from the BCR). The proposed method was applied with satisfactory results to the determination of Cd and Pb in human hair samples of workers exposed to welding fumes.

  8. MPSalsa a finite element computer program for reacting flow problems. Part 2 - user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H. [and others]

    1996-09-01

    This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.

  9. Drop-on-demand sample introduction system coupled with the flowing atmospheric-pressure afterglow for direct molecular analysis of complex liquid microvolume samples.

    Science.gov (United States)

    Schaper, J Niklas; Pfeuffer, Kevin P; Shelley, Jacob T; Bings, Nicolas H; Hieftje, Gary M

    2012-11-06

    One of the fastest developing fields in analytical spectrochemistry in recent years is ambient desorption/ionization mass spectrometry (ADI-MS). This burgeoning interest has been due to the demonstrated advantages of the method: simple mass spectra, little or no sample preparation, and applicability to samples in the solid, liquid, or gaseous state. One such ADI-MS source, the flowing atmospheric-pressure afterglow (FAPA), is capable of direct analysis of solids just by aiming the source at the solid surface and sampling the produced ions into a mass spectrometer. However, direct introduction of significant volumes of liquid samples into this source has not been possible, as solvent loads can quench the afterglow and, thus, the formation of reagent ions. As a result, the analysis of liquid samples is preferably carried out by analyzing dried residues or by desorbing small amounts of liquid samples directly from the liquid surface. In the former case, reproducibility of sample introduction is crucial if quantitative results are desired. In the present study, introduction of liquid samples as very small droplets helps overcome the issues of sample positioning and reduced levels of solvent intake. A recently developed "drop-on-demand" (DOD) aerosol generator is capable of reproducibly producing very small volumes of liquid (∼17 pL). In this paper, the coupling of FAPA-MS and DOD is reported and applications are suggested. Analytes representing different classes of substances were tested and limits of detection were determined. Matrix tolerance was investigated for drugs of abuse and their metabolites by analyzing raw urine samples, with quantification performed without the use of internal standards. Limits of detection below 2 μg/mL, without sample pretreatment, were obtained.

  10. Flow, transport and diffusion in random geometries II: applications

    KAUST Repository

    Asinari, Pietro

    2015-01-07

    Multilevel Monte Carlo (MLMC) is an efficient and flexible solution for the propagation of uncertainties in complex models, where an explicit parametrization of the input randomness is not available or too expensive. We present several applications of our MLMC algorithm for flow, transport and diffusion in random heterogeneous materials. The absolute permeability and effective diffusivity (or formation factor) of micro-scale porous media samples are computed and the uncertainty related to the sampling procedures is studied. The algorithm is then extended to transport problems and multiphase flows for the estimation of dispersion and relative permeability curves. The impact of water drops on random structured surfaces, with microfluidics applications to self-cleaning materials, is also studied and simulated. Finally, the estimation of new drag correlation laws for poly-dispersed dilute and dense suspensions is presented.

  11. Flow, transport and diffusion in random geometries II: applications

    KAUST Repository

    Asinari, Pietro; Ceglia, Diego; Icardi, Matteo; Prudhomme, Serge; Tempone, Raul

    2015-01-01

    Multilevel Monte Carlo (MLMC) is an efficient and flexible solution for the propagation of uncertainties in complex models, where an explicit parametrization of the input randomness is not available or too expensive. We present several applications of our MLMC algorithm for flow, transport and diffusion in random heterogeneous materials. The absolute permeability and effective diffusivity (or formation factor) of micro-scale porous media samples are computed and the uncertainty related to the sampling procedures is studied. The algorithm is then extended to transport problems and multiphase flows for the estimation of dispersion and relative permeability curves. The impact of water drops on random structured surfaces, with microfluidics applications to self-cleaning materials, is also studied and simulated. Finally, the estimation of new drag correlation laws for poly-dispersed dilute and dense suspensions is presented.

  12. Analytical solution for the problem of maximum exit velocity under Coulomb friction in gravity flow discharge chutes

    Energy Technology Data Exchange (ETDEWEB)

    Salinic, Slavisa [University of Kragujevac, Faculty of Mechanical Engineering, Kraljevo (RS)

    2010-10-15

    In this paper, an analytical solution for the problem of finding profiles of gravity flow discharge chutes required to achieve maximum exit velocity under Coulomb friction is obtained by application of variational calculus. The model of a particle which moves down a rough curve in a uniform gravitational field is used to obtain a solution of the problem for various boundary conditions. The projection sign of the normal reaction force of the rough curve onto the normal to the curve and the restriction requiring that the tangential acceleration be non-negative are introduced as the additional constraints in the form of inequalities. These inequalities are transformed into equalities by introducing new state variables. Although this is fundamentally a constrained variational problem, by further introducing a new functional with an expanded set of unknown functions, it is transformed into an unconstrained problem where broken extremals appear. The obtained equations of the chute profiles contain a certain number of unknown constants which are determined from a corresponding system of nonlinear algebraic equations. The obtained results are compared with the known results from the literature. (orig.)

  13. Asymptotic theory of two-dimensional trailing-edge flows

    Science.gov (United States)

    Melnik, R. E.; Chow, R.

    1975-01-01

    Problems of laminar and turbulent viscous interaction near trailing edges of streamlined bodies are considered. Asymptotic expansions of the Navier-Stokes equations in the limit of large Reynolds numbers are used to describe the local solution near the trailing edge of cusped or nearly cusped airfoils at small angles of attack in compressible flow. A complicated inverse iterative procedure, involving finite-difference solutions of the triple-deck equations coupled with asymptotic solutions of the boundary values, is used to accurately solve the viscous interaction problem. Results are given for the correction to the boundary-layer solution for drag of a finite flat plate at zero angle of attack and for the viscous correction to the lift of an airfoil at incidence. A rational asymptotic theory is developed for treating turbulent interactions near trailing edges and is shown to lead to a multilayer structure of turbulent boundary layers. The flow over most of the boundary layer is described by a Lighthill model of inviscid rotational flow. The main features of the model are discussed and a sample solution for the skin friction is obtained and compared with the data of Schubauer and Klebanoff for a turbulent flow in a moderately large adverse pressure gradient.

  14. The role of self-esteem in the development of psychiatric problems: a three-year prospective study in a clinical sample of adolescents.

    Science.gov (United States)

    Henriksen, Ingvild Oxås; Ranøyen, Ingunn; Indredavik, Marit Sæbø; Stenseng, Frode

    2017-01-01

    Self-esteem is fundamentally linked to mental health, but its role in trajectories of psychiatric problems is unclear. In particular, few studies have addressed the role of self-esteem in the development of attention problems. Hence, we examined the role of global self-esteem in the development of symptoms of anxiety/depression and attention problems, simultaneously, in a clinical sample of adolescents while accounting for gender, therapy, and medication. Longitudinal data were obtained from a sample of 201 adolescents, aged 13-18, referred to the Department of Child and Adolescent Psychiatry in Trondheim, Norway. In the baseline study, self-esteem and symptoms of anxiety/depression and attention problems were measured by means of self-report. Participants were reassessed 3 years later, with a participation rate of 77% in the clinical sample. Analyses showed that high self-esteem at baseline predicted fewer symptoms of both anxiety/depression and attention problems 3 years later after controlling for prior symptom levels, gender, therapy (or not), and medication. Results highlight the relevance of global self-esteem in clinical practice, not only with regard to emotional problems, but also to attention problems. Implications for clinicians, parents, and others are discussed.

  15. Multicommuted flow injection method for fast photometric determination of phenolic compounds in commercial virgin olive oil samples.

    Science.gov (United States)

    Lara-Ortega, Felipe J; Sainz-Gonzalo, Francisco J; Gilbert-López, Bienvenida; García-Reyes, Juan F; Molina-Díaz, Antonio

    2016-01-15

    A multicommuted flow injection method has been developed for the determination of phenolic species in virgin olive oil samples. The method is based on the inhibitory effect of antioxidants on the formation of a stable and colored radical cation from the colorless compound N,N-dimethyl-p-phenylenediamine (DMPD(•+)) in acidic medium in the presence of Fe(III) as oxidant. The signal inhibition by phenolic species and other antioxidants is proportional to their concentration in the olive oil sample. Absorbance was recorded at 515 nm by means of a modular fiber optic spectrometer. Oleuropein was used as the standard for phenol determination and 6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (trolox) was the reference standard used for total antioxidant content calculation. Linear response was observed within the range of 250-1000 mg/kg oleuropein, which was in accordance with the phenolic contents observed in commercial extra virgin olive oil in the present study. Fast and low-volume liquid-liquid extraction of the samples using 60% MeOH was performed prior to their insertion into the multicommuted flow system. The five three-way solenoid valves used for multicommuted liquid handling were controlled by a homemade electronic interface and Java-written software. The proposed approach was applied to different commercial extra virgin olive oil samples and the results were consistent with those obtained by the Folin-Ciocalteu (FC) method. The total time for sample preparation and analysis can be drastically reduced: the throughput of the present analysis is 8 samples/h in contrast to 1 sample/h for the conventional FC method. The present method is easy to implement in routine analysis and can be regarded as a feasible alternative to the FC method. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Active-Set Reduced-Space Methods with Nonlinear Elimination for Two-Phase Flow Problems in Porous Media

    KAUST Repository

    Yang, Haijian

    2016-07-26

    Fully implicit methods are drawing more attention in scientific and engineering applications due to the allowance of large time steps in extreme-scale simulations. When using a fully implicit method to solve two-phase flow problems in porous media, one major challenge is the solution of the resultant nonlinear system at each time step. To solve such nonlinear systems, traditional nonlinear iterative methods, such as the class of Newton methods, often fail to achieve the desired convergence rate due to the high nonlinearity of the system and/or the violation of the boundedness requirement of the saturation. In this paper, we reformulate the two-phase model as a variational inequality that naturally ensures the physical feasibility of the saturation variable. The variational inequality is then solved by an active-set reduced-space method with a nonlinear elimination preconditioner to remove the highly nonlinear components that often cause the nonlinear iteration to fail to converge. To validate the effectiveness of the proposed method, we compare it with the classical implicit pressure-explicit saturation method for two-phase flow problems with strong heterogeneity. The numerical results show that our nonlinear solver overcomes the often severe limits on the time step associated with existing methods, results in superior convergence performance, and achieves a reduction in the total computing time by more than one order of magnitude.

  17. Active-Set Reduced-Space Methods with Nonlinear Elimination for Two-Phase Flow Problems in Porous Media

    KAUST Repository

    Yang, Haijian; Yang, Chao; Sun, Shuyu

    2016-01-01

    Fully implicit methods are drawing more attention in scientific and engineering applications due to the allowance of large time steps in extreme-scale simulations. When using a fully implicit method to solve two-phase flow problems in porous media, one major challenge is the solution of the resultant nonlinear system at each time step. To solve such nonlinear systems, traditional nonlinear iterative methods, such as the class of Newton methods, often fail to achieve the desired convergence rate due to the high nonlinearity of the system and/or the violation of the boundedness requirement of the saturation. In this paper, we reformulate the two-phase model as a variational inequality that naturally ensures the physical feasibility of the saturation variable. The variational inequality is then solved by an active-set reduced-space method with a nonlinear elimination preconditioner to remove the highly nonlinear components that often cause the nonlinear iteration to fail to converge. To validate the effectiveness of the proposed method, we compare it with the classical implicit pressure-explicit saturation method for two-phase flow problems with strong heterogeneity. The numerical results show that our nonlinear solver overcomes the often severe limits on the time step associated with existing methods, results in superior convergence performance, and achieves a reduction in the total computing time by more than one order of magnitude.

  18. Recent bibliography on analytical and sampling problems of a PWR primary coolant Pt. 1

    International Nuclear Information System (INIS)

    Illy, H.

    1981-12-01

    The first bibliography on analytical and sampling problems of a PWR primary coolant (KFKI Report-1980-48) was published in 1980 and it covered the literature published in the previous 8-10 years. The present supplement reviews the subsequent literature up till December 1981. It also includes some references overlooked in the first volume. The serial numbers are continued from the first bibliography. (author)

  19. Detection technique of radioactive tracer and its application to the flow problems

    International Nuclear Information System (INIS)

    Sato, Otomaru; Kato, Masao

    1978-01-01

    With a radioactive tracer experiment, the nature of the system and the required precision are the two key factors that determine the amount of tracer needed. This amount should be kept as low as possible to meet environmental regulations. The former factor is concerned with the isotope dilution during the experiment and the latter with counting techniques. In part 1, some counting techniques are investigated, while three field experiments are described in part 2. Chemical treatments of water samples are described first in part 1. Recovery of the order of 95% was achieved with ²⁴Na, ¹³¹I and ⁸²Br by either ion exchange or precipitation techniques. Second, three direct γ-ray counting techniques are investigated, e.g. the dip counting method, the pipe counting technique, and the plane source counting technique. Third, the counting characteristics of a moving radioactive source were investigated. A small source was stuck on a moving belt and the center of a GM tube was faced to the belt. The counting rates with or without a collimator were analyzed using a simple equation. In part 2, the first experiment concerns the flow rate of the Sorachi river in summer 1961. Measurements by an underwater detector and from periodically collected samples were compared at every observing station. The second experiment was on the sorption loss of the isotopes in the river in 1963. Very little sorption loss was recognized with ⁸²Br, while a sorption loss of 10% was found with ²⁴Na after 6 km of downstream flow. Isotopes were found to mix transversely after 7 to 10 km of flow. The third experiment is concerned with the investigation of the movement of sediments at the Okuma coast in Fukushima prefecture. (J.P.N.)

  20. Benchmark problems for repository siting models

    International Nuclear Information System (INIS)

    Ross, B.; Mercer, J.W.; Thomas, S.D.; Lester, B.H.

    1982-12-01

    This report describes benchmark problems to test computer codes used in siting nuclear waste repositories. Analytical solutions, field problems, and hypothetical problems are included. Problems are included for the following types of codes: ground-water flow in saturated porous media, heat transport in saturated media, ground-water flow in saturated fractured media, heat and solute transport in saturated porous media, solute transport in saturated porous media, solute transport in saturated fractured media, and solute transport in unsaturated porous media

  1. Some free boundary problems in potential flow regime using a level set based method

    Energy Technology Data Exchange (ETDEWEB)

    Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.

    2008-12-09

    Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique to follow free boundaries which undergo topological changes. A challenging class of problems in this context is that related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case of potential flow models with moving boundaries. Moreover, the fluid front will possibly be carrying some material substance which will diffuse in the front and be advected by the front velocity, as is the case, for example, when surfactants are used to lower surface tension. We present a Level Set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of Fluid Mechanics we present in this work one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and we compare the level set based algorithm with previous front tracking models.

  2. Simulation of Thermal Flow Problems via a Hybrid Immersed Boundary-Lattice Boltzmann Method

    Directory of Open Access Journals (Sweden)

    J. Wu

    2012-01-01

    A hybrid immersed boundary-lattice Boltzmann method (IB-LBM) is presented in this work to simulate thermal flow problems. In the current approach, the flow field is resolved by using our recently developed boundary condition-enforced IB-LBM (Wu and Shu, 2009). The no-slip boundary condition on the solid boundary is enforced in the simulation. At the same time, to capture the temperature development, the conventional energy equation is resolved. To model the effect of the immersed boundary on the temperature field, a heat source term is introduced. Different from previous studies, the heat source term is set as unknown rather than predetermined. Inspired by the idea in Wu and Shu (2009), the unknown is calculated in such a way that the temperature at the boundary interpolated from the corrected temperature field accurately satisfies the thermal boundary condition. In addition, based on the resolved temperature correction, an efficient way to compute the local and average Nusselt numbers is also proposed in this work. As compared with the traditional implementation, no approximation for temperature gradients is required. To validate the present method, numerical simulations of forced convection are carried out. The obtained results show good agreement with data in the literature.

  3. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    Science.gov (United States)

    Harte, Philip T.

    2017-01-01

    A common assumption with groundwater sampling is that low-flow purging captures formation water; in practice, capture depends on the time until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of time needed for adequate purging prior to sample collection (called the optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time-volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), which was based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of the arrival time of formation water, as has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sampling method of choice.
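
    As a rough numerical illustration of the in-well travel-time idea (a back-of-the-envelope sketch under simple assumptions, not the author's analytical model), the vertical travel time can be approximated as the volume of standing water between the productive layer and the pump intake divided by the pumping rate:

      import math

      def in_well_travel_time_min(casing_diameter_m, distance_to_intake_m, pump_rate_L_min):
          """Approximate minutes for inflow to travel vertically in the well to the intake."""
          area_m2 = math.pi * (casing_diameter_m / 2.0) ** 2
          volume_L = area_m2 * distance_to_intake_m * 1000.0   # m^3 -> litres
          return volume_L / pump_rate_L_min

      # Hypothetical example: 5 cm well, intake 1.5 m from the high-K layer, 0.2 L/min purge
      print(round(in_well_travel_time_min(0.05, 1.5, 0.2), 1), "minutes")

    Under these assumed numbers the optimal purge duration is on the order of tens of minutes, which is the kind of time scale the field-parameter-stabilization criterion is meant to capture.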

  4. Estimation of water flow velocity in small plants using cold neutron imaging with D₂O tracer

    Science.gov (United States)

    Matsushima, U.; Herppich, W. B.; Kardjilov, N.; Graf, W.; Hilger, A.; Manke, I.

    2009-06-01

    Water flow imaging may help to better understand various problems related to water stress in plants and, more generally, plant water relations. The objective of this research was to estimate the velocity of water flow in plant samples. Cut roses (Rosa hybrida, var. 'Milva') were used as samples. Cold neutron radiography (CNR) was conducted at CONRAD, Helmholtz Center Berlin for Materials and Energy, Berlin, Germany. D₂O and H₂O were interchangeably injected into the water feeding system of the sample. After the uptake of D₂O, the neutron transmission increased due to the smaller attenuation coefficient of D₂O compared to H₂O. Replacement of D₂O in the rose peduncle was clearly observed. Three different optical flow algorithms, Block Matching, Horn-Schunck and Lucas-Kanade, were used to calculate the vector of the D₂O tracer flow. The quality of the sequential images, providing sufficient spatial and temporal resolution, allowed estimation of the flow vector.
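
    The displacement of the D₂O front between successive radiographs is what the optical-flow step estimates. The snippet below is an illustrative stand-in using OpenCV's pyramidal Lucas-Kanade tracker (one of the three algorithms named above); the file names are hypothetical and the authors' own implementations are not reproduced here.

      import cv2
      import numpy as np

      prev = cv2.imread("radiograph_t0.png", cv2.IMREAD_GRAYSCALE)
      curr = cv2.imread("radiograph_t1.png", cv2.IMREAD_GRAYSCALE)

      # Pick trackable points in the first frame, then follow them into the second frame
      p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=5)
      p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                               winSize=(21, 21), maxLevel=3)

      ok = status.flatten() == 1
      displacement = p1[ok] - p0[ok]                   # pixels per frame interval
      speed_px = np.linalg.norm(displacement, axis=2)  # scale by pixel size and frame rate
      print("median tracer displacement [px]:", float(np.median(speed_px)))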

  5. Practical flow cytometry

    National Research Council Canada - National Science Library

    Shapiro, Howard M

    2003-01-01

    Table of contents excerpt (from the book's front matter): Conflict: Resolution; 1.3 Problem Number One: Finding the Cell(s); Flow Cytometry: Quick on the Trigger; The Main Event; The Pulse Quickens, the Plot Thickens; 1.4 Flow Cytometry: ...

  6. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

    The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. Increasing the sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between the whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are 500 ml sample volume

  7. Three-dimensional reconstruction of highly complex microscopic samples using scanning electron microscopy and optical flow estimation.

    Directory of Open Access Journals (Sweden)

    Ahmadreza Baghaie

    The Scanning Electron Microscope (SEM), as one of the major research and industrial instruments for imaging micro-scale samples and surfaces, has gained extensive attention since its emergence. However, the acquired micrographs remain two-dimensional (2D). In the current work a novel and highly accurate approach is proposed to recover the hidden third dimension by use of multi-view image acquisition of the microscopic samples combined with pre/post-processing steps including sparse feature-based stereo rectification, nonlocal-based optical flow estimation for dense matching and, finally, depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved to facilitate the interpretation of the topology and geometry of surface/shape attributes of the samples. As a byproduct of the proposed approach, high-definition 3D printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state-of-the-art reveal the strength and superiority of the proposed method in uncovering the details of highly complex microscopic samples.

  8. Towards an integrated petrophysical tool for multiphase flow properties of core samples

    Energy Technology Data Exchange (ETDEWEB)

    Lenormand, R. [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    This paper describes the first use of an Integrated Petrophysical Tool (IPT) on reservoir rock samples. The IPT simultaneously measures the following petrophysical properties: (1) the complete capillary pressure cycle: primary drainage, spontaneous and forced imbibition, and secondary drainage (the cycle leads to the wettability of the core via the USBM index); (2) end-points and parts of the relative permeability curves; and (3) the formation factor and resistivity index. The IPT is based on the steady-state injection of one fluid through the sample placed in a Hassler cell. The experiment leading to the whole Pc cycle on two reservoir sandstones consists of about 30 steps at various oil or water flow rates. It takes about four weeks and is operated at room conditions. Relative permeabilities are in line with standard steady-state measurements. Capillary pressures are in accordance with standard centrifuge measurements. There is no comparison for the resistivity index, but the results are in agreement with literature data. However, the accurate determination of saturation remains the main difficulty and some improvements are proposed. In conclusion, the Integrated Petrophysical Tool is as accurate as standard methods and has the advantage of providing the various parameters on the same sample and during a single experiment. The IPT is easy to use and can be automated. In addition, it can be operated at reservoir conditions.
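
    For reference, the USBM wettability index mentioned above is conventionally defined from the areas under the forced-displacement branches of the capillary pressure cycle (this is the standard textbook definition; the abstract itself does not spell it out):

      W_{\mathrm{USBM}} = \log_{10}\!\left(\frac{A_1}{A_2}\right),

    where A_1 is the area under the secondary-drainage (oil-displacing-water) curve and A_2 the area under the forced-imbibition (water-displacing-oil) curve; W > 0 indicates a water-wet core and W < 0 an oil-wet core.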

  9. Flow induced dispersion analysis rapidly quantifies proteins in human plasma samples

    DEFF Research Database (Denmark)

    Poulsen, Nicklas N; Andersen, Nina Z; Østergaard, Jesper

    2015-01-01

    Rapid and sensitive quantification of protein-based biomarkers and drugs is a substantial challenge in diagnostics and biopharmaceutical drug development. Current technologies, such as ELISA, are characterized by being slow (hours), requiring relatively large amounts of sample and being subject to cumbersome and expensive assay development. In this work a new approach for quantification based on changes in diffusivity is presented. The apparent diffusivity of an indicator molecule interacting with the protein of interest is determined by Taylor Dispersion Analysis (TDA) in a hydrodynamic flow system … in a blood plasma matrix), fully automated, and subject to simple assay development. FIDA is demonstrated for quantification of the protein Human Serum Albumin (HSA) in human plasma as well as for quantification of an antibody against HSA. The sensitivity of the FIDA assay depends on the indicator
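
    The quantification principle rests on the standard Taylor Dispersion Analysis relations, sketched below in generic textbook form (this is not the authors' or any vendor's code, and the numbers in the example are hypothetical).

      import math

      def apparent_diffusivity(capillary_radius_m, residence_time_s, temporal_variance_s2):
          """Taylor dispersion: D = Rc^2 * tR / (24 * sigma_t^2), valid when tR >> Rc^2/D."""
          return capillary_radius_m ** 2 * residence_time_s / (24.0 * temporal_variance_s2)

      def hydrodynamic_radius_m(D_m2_s, temperature_K=298.15, viscosity_Pa_s=0.89e-3):
          """Stokes-Einstein: Rh = kB*T / (6*pi*eta*D)."""
          kB = 1.380649e-23
          return kB * temperature_K / (6.0 * math.pi * viscosity_Pa_s * D_m2_s)

      D = apparent_diffusivity(50e-6, 120.0, 100.0)   # 50 um capillary, hypothetical peak data
      print(f"D = {D:.2e} m^2/s, Rh = {hydrodynamic_radius_m(D) * 1e9:.1f} nm")

    A shift in the indicator's apparent diffusivity (and hence apparent size) upon binding the target protein is what carries the quantitative information in this kind of assay.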

  10. Inverse Problems in Geodynamics Using Machine Learning Algorithms

    Science.gov (United States)

    Shahnas, M. H.; Yuen, D. A.; Pysklywec, R. N.

    2018-01-01

    During the past few decades numerical studies have been widely employed to explore the style of circulation and mixing in the mantle of Earth and other planets. However, in geodynamical studies there are many properties from mineral physics, geochemistry, and petrology in these numerical models. Machine learning, as a computational statistics-related technique and a subfield of artificial intelligence, has rapidly emerged in many fields of science and engineering in recent years. We focus here on the application of supervised machine learning (SML) algorithms to predictions of mantle flow processes. Specifically, we emphasize estimating mantle properties by employing machine learning techniques to solve an inverse problem. Using snapshots of numerical convection models as training samples, we enable machine learning models to determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. Employing support vector machine algorithms, we show that SML techniques can successfully predict the magnitude of mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex geodynamic problems in mantle dynamics by employing deep learning algorithms to put constraints on properties such as viscosity, elastic parameters, and the nature of thermal and chemical anomalies.
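
    The supervised-learning setup described here can be sketched with a generic support vector regression pipeline. The example below uses synthetic stand-in features and targets (not the authors' convection snapshots) purely to show the model form.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(1)
      X = rng.normal(size=(500, 8))    # e.g. statistics extracted from convection snapshots
      y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)   # density-anomaly magnitude

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
      model.fit(X_tr, y_tr)
      print("R^2 on held-out snapshots:", round(model.score(X_te, y_te), 3))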

  11. Numerical Solution to Transient Heat Flow Problems

    Science.gov (United States)

    Kobiske, Ronald A.; Hock, Jeffrey L.

    1973-01-01

    Discusses the reduction of the one- and three-dimensional diffusion equation to the difference equation and its stability, convergence, and heat-flow applications under different boundary conditions. Indicates the usefulness of this presentation for beginning students of physics and engineering as well as college teachers. (CC)
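
    As a concrete illustration of the reduction described in this record, the sketch below discretizes the one-dimensional diffusion (heat) equation u_t = α u_xx with the explicit forward-time, centred-space (FTCS) scheme, whose stability requires r = α Δt/Δx² ≤ 1/2. This is a generic textbook example, not code from the article.

      import numpy as np

      alpha, L, nx, dt, nsteps = 1.0e-4, 1.0, 51, 0.1, 500
      dx = L / (nx - 1)
      r = alpha * dt / dx ** 2
      assert r <= 0.5, "explicit FTCS scheme is unstable for this time step"

      u = np.zeros(nx)
      u[nx // 2] = 100.0    # initial hot spot; ends held at 0 (Dirichlet boundaries)
      for _ in range(nsteps):
          u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

      print("peak temperature after", nsteps * dt, "s:", round(float(u.max()), 2))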

  12. Systematic Sampling and Cluster Sampling of Packet Delays

    OpenAIRE

    Lindh, Thomas

    2006-01-01

    Based on experiences of a traffic flow performance meter, this paper suggests and evaluates cluster sampling and systematic sampling as methods to estimate average packet delays. Systematic sampling facilitates, for example, time analysis, frequency analysis and jitter measurements. Cluster sampling with repeated trains of periodically spaced sampling units separated by random starting periods, and systematic sampling, are evaluated with respect to accuracy and precision. Packet delay traces have been ...

  13. Demonstration of robust micromachined jet technology and its application to realistic flow control problems

    International Nuclear Information System (INIS)

    Chang, Sung Pil

    2006-01-01

    This paper describes the demonstration of successful fabrication and initial characterization of micromachined pressure sensors and micromachined jets (microjets) fabricated for use in macro flow control and other applications. In this work, the microfabrication technology was investigated to create a micromachined fluidic control system with a goal of application in practical fluids problems, such as UAV (Unmanned Aerial Vehicle)-scale aerodynamic control. Approaches of this work include: (1) the development of suitable micromachined synthetic jets (microjets) as actuators, which obviate the need to physically extend micromachined structures into an external flow; and (2) a non-silicon alternative micromachining fabrication technology based on metallic substrates and lamination (in addition to traditional MEMS technologies) which will allow the realization of larger scale, more robust structures and larger array active areas for fluidic systems. As an initial study, an array of MEMS pressure sensors and an array of MEMS modulators for orifice-based control of microjets have been fabricated and characterized. Both pressure sensors and modulators have been built using stainless steel as a substrate and a combination of lamination and traditional micromachining processes as fabrication technologies.

  14. Demonstration of robust micromachined jet technology and its application to realistic flow control problems

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Sung Pil [Inha University, Incheon (Korea, Republic of)

    2006-04-15

    This paper describes the demonstration of successful fabrication and initial characterization of micromachined pressure sensors and micromachined jets (microjets) fabricated for use in macro flow control and other applications. In this work, the microfabrication technology was investigated to create a micromachined fluidic control system with a goal of application in practical fluids problems, such as UAV (Unmanned Aerial Vehicle)-scale aerodynamic control. Approaches of this work include: (1) the development of suitable micromachined synthetic jets (microjets) as actuators, which obviate the need to physically extend micromachined structures into an external flow; and (2) a non-silicon alternative micromachining fabrication technology based on metallic substrates and lamination (in addition to traditional MEMS technologies) which will allow the realization of larger scale, more robust structures and larger array active areas for fluidic systems. As an initial study, an array of MEMS pressure sensors and an array of MEMS modulators for orifice-based control of microjets have been fabricated and characterized. Both pressure sensors and modulators have been built using stainless steel as a substrate and a combination of lamination and traditional micromachining processes as fabrication technologies.

  15. Flow rate and source reservoir identification from airborne chemical sampling of the uncontrolled Elgin platform gas release

    Science.gov (United States)

    Lee, James D.; Mobbs, Stephen D.; Wellpott, Axel; Allen, Grant; Bauguitte, Stephane J.-B.; Burton, Ralph R.; Camilli, Richard; Coe, Hugh; Fisher, Rebecca E.; France, James L.; Gallagher, Martin; Hopkins, James R.; Lanoiselle, Mathias; Lewis, Alastair C.; Lowry, David; Nisbet, Euan G.; Purvis, Ruth M.; O'Shea, Sebastian; Pyle, John A.; Ryerson, Thomas B.

    2018-03-01

    An uncontrolled gas leak from 25 March to 16 May 2012 led to evacuation of the Total Elgin wellhead and neighbouring drilling and production platforms in the UK North Sea. Initially the atmospheric flow rate of leaking gas and condensate was very poorly known, hampering environmental assessment and well control efforts. Six flights by the UK FAAM chemically instrumented BAe-146 research aircraft were used to quantify the flow rate. The flow rate was calculated by assuming that the plume may be modelled by a Gaussian distribution, with two different solution methods: Gaussian fitting in the vertical and fitting with a fully mixed layer. The two solution methods agreed within 6 % of each other, which was within combined errors. Data from the first flight on 30 March 2012 showed the flow rate to be 1.3 ± 0.2 kg CH₄ s⁻¹, decreasing to less than half that by the second flight on 17 April 2012. δ¹³C-CH₄ in the gas was found to be -43 ‰, implying that the gas source was unlikely to be from the main high pressure, high temperature Elgin gas field at 5.5 km depth, but more probably from the overlying Hod Formation at 4.2 km depth. This was deemed to be smaller and more manageable than the high pressure Elgin field and hence the response strategy was considerably simpler. The first flight was conducted within 5 days of the blowout and allowed a flow rate estimate within 48 h of sampling, with δ¹³C-CH₄ characterization soon thereafter, demonstrating the potential for a rapid-response capability that is widely applicable to future atmospheric emissions of environmental concern. Knowledge of the Elgin flow rate helped inform subsequent decision making. This study shows that leak assessment using appropriately designed airborne plume sampling strategies is well suited to circumstances where direct access is difficult or potentially dangerous. Measurements such as this also permit unbiased regulatory assessment of potential impact, independent of the emitting
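
    The flux calculation in studies of this kind reduces to integrating the above-background methane mass concentration times the wind component normal to the flight track over a crosswind-vertical plane. The sketch below evaluates that mass-balance integral on a hypothetical Gaussian enhancement field; it is a generic illustration, not the paper's exact Gaussian-fit or mixed-layer method, and all numbers are invented.

      import numpy as np

      y = np.linspace(-2000.0, 2000.0, 81)   # crosswind distance [m]
      z = np.linspace(0.0, 600.0, 31)        # height [m]
      Y, Z = np.meshgrid(y, z)

      # Hypothetical plume enhancement [kg CH4 per m^3] and uniform wind normal to the track [m/s]
      enhancement = 2.0e-6 * np.exp(-(Y / 400.0) ** 2 - ((Z - 150.0) / 80.0) ** 2)
      wind_normal = 8.0

      # Q = double integral of u * dC over the plane (simple rectangle rule)
      dy, dz = y[1] - y[0], z[1] - z[0]
      flux_kg_s = float(np.sum(wind_normal * enhancement) * dy * dz)
      print(f"estimated flux: {flux_kg_s:.2f} kg CH4/s")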

  16. Critical slowing down and the gradient flow coupling in the Schroedinger functional

    Energy Technology Data Exchange (ETDEWEB)

    Fritzsch, Patrick; Stollenwerk, Felix [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Ramos, Alberto [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2013-11-15

    We study the sensitivity of the gradient flow coupling to sectors of different topological charge and its implications in practical situations. Furthermore, we investigate an alternative definition of the running coupling that is expected to be less sensitive to the problems of the HMC algorithm to efficiently sample all topological sectors.

  17. Critical slowing down and the gradient flow coupling in the Schroedinger functional

    International Nuclear Information System (INIS)

    Fritzsch, Patrick; Stollenwerk, Felix; Ramos, Alberto

    2013-11-01

    We study the sensitivity of the gradient flow coupling to sectors of different topological charge and its implications in practical situations. Furthermore, we investigate an alternative definition of the running coupling that is expected to be less sensitive to the problems of the HMC algorithm to efficiently sample all topological sectors.

  18. Frequency-Modulated Continuous Flow Analysis Electrospray Ionization Mass Spectrometry (FM-CFA-ESI-MS) for Sample Multiplexing.

    Science.gov (United States)

    Filla, Robert T; Schrell, Adrian M; Coulton, John B; Edwards, James L; Roper, Michael G

    2018-02-20

    A method for multiplexed sample analysis by mass spectrometry without the need for chemical tagging is presented. In this new method, each sample is pulsed at unique frequencies, mixed, and delivered to the mass spectrometer while maintaining a constant total flow rate. Reconstructed ion currents are then a time-dependent signal consisting of the sum of the ion currents from the various samples. Spectral deconvolution of each reconstructed ion current reveals the identity of each sample, encoded by its unique frequency, and its concentration encoded by the peak height in the frequency domain. This technique is different from other approaches that have been described, which have used modulation techniques to increase the signal-to-noise ratio of a single sample. As proof of concept of this new method, two samples containing up to 9 analytes were multiplexed. The linear dynamic range of the calibration curve was increased with extended acquisition times of the experiment and longer oscillation periods of the samples. Because of the combination of the samples, salt had little effect on the ability of this method to achieve relative quantitation. Continued development of this method is expected to allow for increased numbers of samples that can be multiplexed.
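
    The frequency-encoding idea can be illustrated with a short synthetic-signal sketch: each sample contributes a sinusoidal modulation at its own frequency, and the amplitude at that frequency in the Fourier spectrum of the summed ion current recovers its relative concentration. The numbers below are hypothetical and this is not the authors' acquisition or deconvolution code.

      import numpy as np

      fs, T = 50.0, 120.0                  # detector sampling rate [Hz], acquisition time [s]
      t = np.arange(0.0, T, 1.0 / fs)
      f_a, f_b = 0.25, 0.40                # pulsing frequencies of samples A and B [Hz]

      # Simulated reconstructed ion current: amplitudes encode the two concentrations
      rng = np.random.default_rng(0)
      signal = (3.0 * np.sin(2 * np.pi * f_a * t)
                + 1.5 * np.sin(2 * np.pi * f_b * t)
                + 0.2 * rng.normal(size=t.size))

      spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / t.size
      freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
      for name, f in (("A", f_a), ("B", f_b)):
          amp = spectrum[np.argmin(np.abs(freqs - f))]
          print(f"sample {name}: recovered amplitude ~ {amp:.2f}")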

  19. The association of ADHD and depression: Mediation by peer problems and parent-child difficulties in two complementary samples

    Science.gov (United States)

    Humphreys, Kathryn L.; Katz, Shaina J.; Lee, Steve S.; Hammen, Constance L.; Brennan, Patricia A.; Najman, Jake M.

    2013-01-01

    Children with attention-deficit/hyperactivity disorder (ADHD) are at increased risk for the development of depression, with evidence that peer and academic difficulties mediate predictions of later depression from ADHD. The present study hypothesized that parent-child relationship difficulties may be an additional potential mediator of this association. Academic, peer, and parent-child functioning were tested as mediators of the association of attention problems and depression in two distinctly different, yet complementary samples. Study 1 was a cross-sectional sample of 230 5–10 year-old children with and without ADHD. Study 2 was a prospective longitudinal sample of 472 youth followed prospectively from birth to age 20 at risk for depression. Despite differences in age, measures, and designs, both studies implicated peer and parent-child problems as unique mediators of depressive symptoms, although academic difficulties did not uniquely mediate the ADHD-depression association. Further, inattention symptoms, but not hyperactivity, predicted depressive symptoms via the disruption of interpersonal functioning. The inclusion of oppositional defiant disorder into models impacted results, and supported its independent role in parent-child problems. Implications include support for interventions that target interpersonal competence, which may effectively reduce the risk of depression among children with ADHD. PMID:24016021

  20. Eating Problems and Their Risk Factors: A 7-Year Longitudinal Study of a Population Sample of Norwegian Adolescent Girls

    Science.gov (United States)

    Kansi, Juliska; Wichstrom, Lars; Bergman, Lars R.

    2005-01-01

    The longitudinal stability of eating problems and their relationships to risk factors were investigated in a representative population sample of 623 Norwegian girls aged 13-14 followed over 7 years (3 time points). Three eating problem symptoms were measured: Restriction, Bulimia-food preoccupation, and Diet, all taken from the 12-item Eating…

  1. Associations between labial and whole salivary flow rates, systemic diseases and medications in a sample of older people

    DEFF Research Database (Denmark)

    Smidt, Dorte; Torpet, Lis Andersen; Nauntofte, Birgitte

    2010-01-01

    Smidt D, Torpet LA, Nauntofte B, Heegaard KM, Pedersen AML. Associations between labial and whole salivary flow rates, systemic diseases and medications in a sample of older people. Community Dent Oral Epidemiol 2010; 38: 422-435. © 2010 John Wiley & Sons A/S Abstract - Objective: To investigate...... the associations between age, gender, systemic diseases, medications and labial and whole salivary flow rates in older people. Methods: Unstimulated labial (LS) and unstimulated (UWS) and chewing-stimulated (SWS) whole salivary flow rates were measured in 389 randomly selected community-dwelling Danish women...... and 279 men aged 65-97 years. Systemic diseases, medications (coded according to the Anatomical Therapeutic Chemical (ATC) Classification System), tobacco and alcohol consumption were registered. Results: The number of diseases and medications was higher and UWS lower in the older age groups. On average...

  2. Numerical modelling of two phase flow with hysteresis in heterogeneous porous media

    Energy Technology Data Exchange (ETDEWEB)

    Abreu, E. [Instituto Nacional de Matematica Pura e Aplicada (IMPA), Rio de Janeiro, RJ (Brazil); Furtado, F.; Pereira, F. [University of Wyoming, Laramie, WY (United States). Dept. of Mathematics; Souza, G. [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil)

    2008-07-01

    Numerical simulators are necessary for the understanding of multiphase flow in porous media in order to optimize hydrocarbon recovery. In this work, the immiscible flow of two incompressible phases, a problem very common in waterflooding of petroleum reservoirs, is considered and numerical simulation techniques are presented. The system of equations which describe this type of flow form a coupled, highly nonlinear system of time-dependent partial differential equations (PDEs). The equation for the saturation of the invading fluid is a convection-dominated, degenerate parabolic PDE whose solutions typically exhibit sharp fronts (i.e., internal layers with strong gradients) and is very difficult to approximate numerically. It is well known that accurate modeling of convective and diffusive processes is one of the most daunting tasks in the numerical approximation of PDEs. Particularly difficult is the case where convection dominates diffusion. Specifically, we consider the injection problem for a model of two-phase (water/oil) flow in a core sample of porous rock, taking into account hysteresis effects in the relative permeability of the oil phase. (author)
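
    In generic fractional-flow notation (ours, not necessarily the authors'), the convection-dominated, degenerate parabolic saturation equation referred to above takes the form

      \phi\,\frac{\partial s_w}{\partial t} + \nabla\cdot\big(f_w(s_w)\,\mathbf{v}_T\big) = \nabla\cdot\big(D(s_w)\,\nabla s_w\big), \qquad f_w(s_w) = \frac{\lambda_w(s_w)}{\lambda_w(s_w) + \lambda_o(s_w)},

    where φ is the porosity, s_w the water saturation, v_T the total velocity, λ_α = k_{rα}/μ_α the phase mobilities and D(s_w) the capillary-diffusion coefficient; hysteresis enters through the dependence of the oil relative permeability k_ro on the saturation history.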

  3. A novel automatic flow method with direct-injection photometric detector for determination of dissolved reactive phosphorus in wastewater and freshwater samples.

    Science.gov (United States)

    Koronkiewicz, Stanislawa; Trifescu, Mihaela; Smoczynski, Lech; Ratnaweera, Harsha; Kalinowski, Slawomir

    2018-02-12

    A novel automatic flow system, a direct-injection detector (DID) integrated with a multi-pumping flow system (MPFS), dedicated to the photometric determination of orthophosphates in wastewater and freshwater samples, is described here for the first time. All reagents and the sample were injected simultaneously, in counter-current, into the reaction-detection chamber by a system of solenoid micro-pumps specially selected for this purpose. The micro-pumps provided good precision and accuracy of the injected volumes. For the determination of orthophosphates, the molybdenum blue method was employed. The developed method can be used to detect orthophosphate in the range 0.1-12 mg L⁻¹, with a repeatability (RSD) of about 2.2% at 4 mg L⁻¹ and a very high injection throughput of 120 injections h⁻¹. It was possible to achieve a very small consumption of reagents (10 μL of ammonium molybdate and 10 μL of ascorbic acid) and sample (20 μL). The volume of generated waste was only 440 μL per analysis. The method has been successfully applied, with good accuracy, to the determination of orthophosphates in complex matrix samples: treated wastewater, lake water and a reference sample of groundwater. The developed system is compact, small in both size and weight, and requires a 12 V supply voltage, features which are desirable for truly portable equipment used in routine analysis. The simplicity of the system should result in greater long-term reliability compared to other flow methods previously described.

  4. Pressure-driven one-step solid phase-based on-chip sample preparation on a microfabricated plastic device and integration with flow-through polymerase chain reaction (PCR).

    Science.gov (United States)

    Tran, Hong Hanh; Trinh, Kieu The Loan; Lee, Nae Yoon

    2013-10-01

    In this study, we fabricate a monolithic poly(methylmethacrylate) (PMMA) microdevice on which solid phase-based DNA preparation and flow-through polymerase chain reaction (PCR) units were functionally integrated for one-step sample preparation and amplification, operated by pressure. Chelex resin, which is used as a solid support for DNA preparation, captures denatured proteins but releases DNA, and the purified DNA can then be used as a template in the subsequent amplification process. Using the PMMA microdevices, DNA was successfully purified from both Escherichia coli and a human hair sample, and the plasmid vector inserted in E. coli and the D1S80 locus in human genomic DNA were successfully amplified from the on-chip purified E. coli and human hair samples. Furthermore, the integration potential of the proposed sample preparation and flow-through PCR units was successfully demonstrated on a monolithic PMMA microdevice with a seamless flow, which could pave the way for pressure-driven, simple one-step sample preparation and amplification with greatly decreased manufacturing cost and enhanced device disposability. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Flow in data racks

    Directory of Open Access Journals (Sweden)

    Manoch Lukáš

    2014-03-01

    This paper deals with the flow in data racks. The aim of this work is to find a new arrangement of the elements regulating the flow in the data rack so that aerodynamic losses and recirculation zones are minimized. The main reason for solving this problem is to reduce the cost of cooling data racks. Another problem to be solved is reverse flow through the servers, which are then not cooled, occurring due to the underpressure in the recirculation zones. In order to solve the problem, experimental and numerical models of a 27U data rack fitted with 10 server models with a total power input of 10 kW were created. Different configurations of the layout of the elements affecting the flow in the inlet area of the data rack were compared. Based on the results achieved, design solutions for the improvement of existing solutions were adopted and verified by numerical simulations.

  6. Graphics for the multivariate two-sample problem

    International Nuclear Information System (INIS)

    Friedman, J.H.; Rafsky, L.C.

    1981-01-01

    Some graphical methods for comparing multivariate samples are presented. These methods are based on minimal spanning tree techniques developed for multivariate two-sample tests. The utility of these methods is illustrated through examples using both real and artificial data
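
    The graphical comparisons build on the same construction as the minimal-spanning-tree two-sample test: pool the two samples, build the MST on the pooled points, and examine how often tree edges join points from different samples. The snippet below is a small illustration of that construction on synthetic data (not the authors' software).

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree
      from scipy.spatial import distance_matrix

      rng = np.random.default_rng(0)
      X = rng.normal(0.0, 1.0, size=(40, 3))    # sample 1
      Y = rng.normal(0.8, 1.0, size=(40, 3))    # sample 2, shifted mean
      pooled = np.vstack([X, Y])
      labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]

      mst = minimum_spanning_tree(distance_matrix(pooled, pooled)).tocoo()
      cross_edges = int(np.sum(labels[mst.row] != labels[mst.col]))
      print(f"{cross_edges} of {mst.nnz} MST edges join the two samples")

    Few cross-sample edges indicate well-separated samples; plotting the tree with edges coloured by this criterion gives the kind of graphic the record describes.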

  7. An integrated approach to combating flow assurance problems

    Energy Technology Data Exchange (ETDEWEB)

    Abney, Laurence; Browne, Alan [Halliburton, Houston, TX (United States)

    2005-07-01

    Any upset to the internal pipe surface of a pipeline can significantly impact both pipeline through-put and energy requirements for maintaining design flow rates. Inefficient flow through pipelines can have a significant negative impact on operating expense (Opex) and the energy requirements necessary to maintain pipeline through-put. Effective flow maintenance helps ensure that Opex remains within budget, processing equipment life is extended and that excessive use of energy is minimized. A number of events can result in debris generation and deposition in a pipeline. Corrosion, hydrate formation, paraffin deposition, asphaltene deposition, development of 'black powder' and scale formation are the most common sources of pipeline debris. Generally, a combination of pigging and chemical treatments is used to remove debris; these two techniques are commonly used in isolation. Incorporation of specialized fluids with enhanced solid-transport capabilities, specialized dispersants, or specialized surfactants can improve the success of routine pigging operations. An array of alternative and often complementary remediation technologies can be used to effect the removal of deposits or even full restrictions from pipelines. These include the application of acids, specialized chemical products, and intrusive interventions techniques. This paper presents a review of methods of integrating existing technologies. (author)

  8. Method and software to solution of inverse and inverse design fluid flow and heat transfer problems is compatible with CFD-software

    Energy Technology Data Exchange (ETDEWEB)

    Krukovsky, P G [Institute of Engineering Thermophysics, National Academy of Sciences of Ukraine, Kiev (Ukraine)

    1998-12-31

    A description is presented of the method and software FRIEND, which provide the capability to solve inverse and inverse design problems on the basis of existing (base) CFD software for the solution of direct problems (in particular, heat-transfer and fluid-flow problems using the software PHOENICS). FRIEND is an independent additional module that widens the operational capabilities of the base software unified with this module. This unification does not require any change or addition to the base software. Interfacing of FRIEND and the base software takes place through the input and output files of the base software. A brief description of the computational technique applied for the inverse problem solution, some detailed information on the interfacing of FRIEND and the CFD software, and solution results for test inverse and inverse design problems, obtained using the tandem of the CFD software PHOENICS and FRIEND, are presented. (author) 9 refs.

  9. Method and software to solution of inverse and inverse design fluid flow and heat transfer problems is compatible with CFD-software

    Energy Technology Data Exchange (ETDEWEB)

    Krukovsky, P.G. [Institute of Engineering Thermophysics, National Academy of Sciences of Ukraine, Kiev (Ukraine)

    1997-12-31

    A description is presented of the method and software FRIEND, which provide the capability to solve inverse and inverse design problems on the basis of existing (base) CFD software for the solution of direct problems (in particular, heat-transfer and fluid-flow problems using the software PHOENICS). FRIEND is an independent additional module that widens the operational capabilities of the base software unified with this module. This unification does not require any change or addition to the base software. Interfacing of FRIEND and the base software takes place through the input and output files of the base software. A brief description of the computational technique applied for the inverse problem solution, some detailed information on the interfacing of FRIEND and the CFD software, and solution results for test inverse and inverse design problems, obtained using the tandem of the CFD software PHOENICS and FRIEND, are presented. (author) 9 refs.

  10. Numerical simulation of real-world flows

    Energy Technology Data Exchange (ETDEWEB)

    Hayase, Toshiyuki, E-mail: hayase@ifs.tohoku.ac.jp [Institute of Fluid Science, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai, 980-8577 (Japan)

    2015-10-15

    Obtaining real flow information is important in various fields, but is a difficult issue because measurement data are usually limited in time and space, and computational results usually do not represent the exact state of real flows. Problems inherent in the realization of numerical simulation of real-world flows include the difficulty in representing exact initial and boundary conditions and the difficulty in representing unstable flow characteristics. This article reviews studies dealing with these problems. First, an overview of basic flow measurement methodologies and measurement data interpolation/approximation techniques is presented. Then, studies on methods of integrating numerical simulation and measurement, namely, four-dimensional variational data assimilation (4D-Var), Kalman filters (KFs), state observers, etc are discussed. The first problem is properly solved by these integration methodologies. The second problem can be partially solved with 4D-Var in which only initial and boundary conditions are control parameters. If an appropriate control parameter capable of modifying the dynamical structure of the model is included in the formulation of 4D-Var, unstable modes are properly suppressed and the second problem is solved. The state observer and KFs also solve the second problem by modifying mathematical models to stabilize the unstable modes of the original dynamical system by applying feedback signals. These integration methodologies are now applied in simulation of real-world flows in a wide variety of research fields. Examples are presented for basic fluid dynamics and applications in meteorology, aerospace, medicine, etc. (topical review)
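    To make the measurement-feedback idea behind observers and Kalman filters concrete, here is a hedged toy sketch (not from the review): a simulation of a "flow" state is repeatedly corrected toward sparse, noisy observations, suppressing errors in the initial condition. The model is deliberately simple, a linear upwind advection of a scalar on a periodic grid, and all parameter values are illustrative.

      import numpy as np

      n, dt, c, dx = 64, 0.01, 1.0, 1.0 / 64
      # One explicit upwind advection step written as a matrix (CFL = 0.64)
      A = np.eye(n) - (c * dt / dx) * (np.eye(n) - np.roll(np.eye(n), 1, axis=0))
      H = np.zeros((8, n)); H[np.arange(8), np.arange(0, n, 8)] = 1.0   # 8 point sensors
      Q, R = 1e-4 * np.eye(n), 1e-2 * np.eye(8)

      def kalman_step(x, P, y):
          """One predict/update cycle: forecast with the model, correct with data."""
          x, P = A @ x, A @ P @ A.T + Q                     # forecast
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
          x = x + K @ (y - H @ x)                           # feedback toward measurements
          P = (np.eye(n) - K @ H) @ P
          return x, P

      rng = np.random.default_rng(0)
      truth = np.sin(2 * np.pi * np.arange(n) / n)
      x_est, P = np.zeros(n), np.eye(n)                     # deliberately wrong initial state
      for _ in range(200):
          truth = A @ truth
          y = H @ truth + 0.1 * rng.standard_normal(8)
          x_est, P = kalman_step(x_est, P, y)
      print("rms error after assimilation:", np.sqrt(np.mean((x_est - truth) ** 2)))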

  11. Evaluation of permeability and non-Darcy flow in vuggy macroporous limestone aquifer samples with lattice Boltzmann methods

    Science.gov (United States)

    Sukop, Michael C.; Huang, Haibo; Alvarez, Pedro F.; Variano, Evan A.; Cunningham, Kevin J.

    2013-01-01

    Lattice Boltzmann flow simulations provide a physics-based means of estimating intrinsic permeability from pore structure and accounting for inertial flow that leads to departures from Darcy's law. Simulations were used to compute intrinsic permeability where standard measurement methods may fail and to provide better understanding of departures from Darcy's law under field conditions. Simulations also investigated resolution issues. Computed tomography (CT) images were acquired at 0.8 mm interscan spacing for seven samples characterized by centimeter-scale biogenic vuggy macroporosity from the extremely transmissive sole-source carbonate karst Biscayne aquifer in southeastern Florida. Samples were as large as 0.3 m in length; 7–9 cm-scale-length subsamples were used for lattice Boltzmann computations. Macroporosity of the subsamples was as high as 81%. Matrix porosity was ignored in the simulations. Non-Darcy behavior led to a twofold reduction in apparent hydraulic conductivity as an applied hydraulic gradient increased to levels observed at regional scale within the Biscayne aquifer; larger reductions are expected under higher gradients near wells and canals. Thus, inertial flows and departures from Darcy's law may occur under field conditions. Changes in apparent hydraulic conductivity with changes in head gradient computed with the lattice Boltzmann model closely fit the Darcy-Forchheimer equation allowing estimation of the Forchheimer parameter. CT-scan resolution appeared adequate to capture intrinsic permeability; however, departures from Darcy behavior were less detectable as resolution coarsened.
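    A hedged illustration (my own, not the authors' workflow) of estimating the Darcy and Forchheimer coefficients from paired hydraulic-gradient and specific-discharge data of the kind such simulations produce. The fitted model assumed here is the common form i = a*q + b*q**2, with conductivity K = 1/a; the numbers are synthetic.

      import numpy as np

      q = np.linspace(0.001, 0.05, 20)                  # specific discharge (m/s), synthetic
      i_true = (1.0 / 0.2) * q + 400.0 * q ** 2         # gradient with K = 0.2 m/s, b = 400
      i_obs = i_true * (1 + 0.02 * np.random.default_rng(1).standard_normal(q.size))

      # Least-squares fit of i = a*q + b*q^2 (no intercept).
      design = np.column_stack([q, q ** 2])
      (a, b), *_ = np.linalg.lstsq(design, i_obs, rcond=None)
      print(f"Darcy conductivity K ~ {1.0 / a:.3f} m/s, Forchheimer coefficient b ~ {b:.1f}")

      # Apparent conductivity K_app = q / i drops as the gradient grows (non-Darcy behaviour).
      K_app = q / i_obs
      print("K_app at lowest / highest gradient:", K_app[0], K_app[-1])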

  12. Filtering Undesirable Flows in Networks

    NARCIS (Netherlands)

    Polevoy, G.; Trajanovski, S.; Grosso, P.; de Laat, C.; Gao, X.; Du, H.; Han, M.

    2017-01-01

    We study the problem of fully mitigating the effects of denial of service by filtering the minimum necessary set of the undesirable flows. First, we model this problem and then we concentrate on a subproblem where every good flow has a bottleneck. We prove that unless P=NP, this subproblem is

  13. Integration of continuous-flow sampling with microchip electrophoresis using poly(dimethylsiloxane)-based valves in a reversibly sealed device.

    Science.gov (United States)

    Li, Michelle W; Martin, R Scott

    2007-07-01

    Here we describe a reversibly sealed microchip device that incorporates poly(dimethylsiloxane) (PDMS)-based valves for the rapid injection of analytes from a continuously flowing stream into a channel network for analysis with microchip electrophoresis. The microchip was reversibly sealed to a PDMS-coated glass substrate and microbore tubing was used for the introduction of gas and fluids to the microchip device. Two pneumatic valves were incorporated into the design and actuated on the order of hundreds of milliseconds, allowing analyte from a continuously flowing sampling stream to be injected into an electrophoresis separation channel. The device was characterized in terms of the valve actuation time and pushback voltage. It was also found that the addition of sodium dodecyl sulfate (SDS) to the buffer system greatly increased the reproducibility of the injection scheme and enabled the analysis of amino acids derivatized with naphthalene-2,3-dicarboxaldehyde/cyanide. Results from continuous injections of a 0.39 nL fluorescein plug into the optimized system showed that the injection process was reproducible (RSD of 0.7%, n = 10). Studies also showed that the device was capable of monitoring off-chip changes in concentration with a device lag time of 90 s. Finally, the ability of the device to rapidly monitor on-chip concentration changes was demonstrated by continually sampling from an analyte plug that was derivatized upstream from the electrophoresis/continuous flow interface. A reversibly sealed device of this type will be useful for the continuous monitoring and analysis of processes that occur either off-chip (such as microdialysis sampling) or on-chip from other integrated functions.

  14. Gas/liquid flow configurations

    International Nuclear Information System (INIS)

    Bonin, Jacques; Fitremann, J.-M.

    1978-01-01

    Prediction of flow configurations (morphology) for gas/liquid or liquid/vapour mixtures is an important industrial problem which is not yet fully understood. The 'Flow Configurations' Seminar of Societe Hydrotechnique de France has framed recommendations for investigation of potential industrial applications for flow configurations.

  15. Flow shop scheduling with heterogeneous workers

    OpenAIRE

    Benavides, Alexander J.; Ritt, Marcus; Miralles Insa, Cristóbal Javier

    2014-01-01

    We propose an extension to the flow shop scheduling problem named Heterogeneous Flow Shop Scheduling Problem (Het-FSSP), where two simultaneous issues have to be resolved: finding the best worker assignment to the workstations, and solving the corresponding scheduling problem. This problem is motivated by Sheltered Work centers for Disabled, whose main objective is the labor integration of persons with disabilities, an important aim not only for these centers but for any company d...
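    A minimal sketch of the evaluation step such a problem needs: the makespan of a permutation flow shop in which processing times depend on the worker assigned to each workstation. The tiny instance, the worker-dependent times and the brute-force search below are illustrative assumptions, not the Het-FSSP formulation or algorithm of the paper.

      from itertools import permutations

      # proc[worker][station][job] = processing time of a job at a station for that worker
      proc = {
          "w1": [[3, 2, 4], [2, 3, 1]],
          "w2": [[4, 3, 5], [1, 2, 2]],
      }

      def makespan(job_order, worker_at_station):
          """Classic flow-shop recursion: C[s][j] = max(C[s-1][j], C[s][j-1]) + p."""
          completion = [[0] * len(job_order) for _ in worker_at_station]
          for s, worker in enumerate(worker_at_station):
              for k, job in enumerate(job_order):
                  ready = max(completion[s - 1][k] if s else 0,
                              completion[s][k - 1] if k else 0)
                  completion[s][k] = ready + proc[worker][s][job]
          return completion[-1][-1]

      # Enumerate worker assignments and job orders for this tiny instance.
      best = min(
          (makespan(order, assign), assign, order)
          for assign in permutations(["w1", "w2"], 2)
          for order in permutations(range(3))
      )
      print("best (makespan, worker assignment, job order):", best)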

  16. Modelling of natural convection flows with large temperature differences: a benchmark problem for low Mach number solvers. Part. 1 reference solutions

    International Nuclear Information System (INIS)

    Le Quere, P.; Weisman, C.; Paillere, H.; Vierendeels, J.; Dick, E.; Becker, R.; Braack, M.; Locke, J.

    2005-01-01

    Heat transfer by natural convection and conduction in enclosures occurs in numerous practical situations including the cooling of nuclear reactors. For large temperature differences, the flow becomes compressible, with a strong coupling between the continuity, momentum and energy equations through the equation of state, and the fluid properties (viscosity, heat conductivity) also vary with temperature, making the Boussinesq flow approximation inappropriate and inaccurate. There are very few reference solutions in the literature on non-Boussinesq natural convection flows. We propose here a test case problem which extends the well-known De Vahl Davis differentially heated square cavity problem to the case of large temperature differences for which the Boussinesq approximation is no longer valid. The paper is split in two parts: in this first part, we propose as yet unpublished reference solutions for cases characterized by a non-dimensional temperature difference of 0.6, Ra = 10^6 (constant property and variable property cases) and Ra = 10^7 (variable property case). These reference solutions were produced after a first international workshop organized by CEA and LIMSI in January 2000, in which the above authors volunteered to produce accurate numerical solutions from which the present reference solutions could be established. (authors)

  17. A finite-element model for moving contact line problems in immiscible two-phase flow

    Science.gov (United States)

    Kucala, Alec

    2017-11-01

    Accurate modeling of moving contact line (MCL) problems is imperative in predicting capillary pressure vs. saturation curves, permeability, and preferential flow paths for a variety of applications, including geological carbon storage (GCS) and enhanced oil recovery (EOR). The macroscale movement of the contact line is dependent on the molecular interactions occurring at the three-phase interface, however most MCL problems require resolution at the meso- and macro-scale. A phenomenological model must be developed to account for the microscale interactions, as resolving both the macro- and micro-scale would render most problems computationally intractable. Here, a model for the moving contact line is presented as a weak forcing term in the Navier-Stokes equation and applied directly at the location of the three-phase interface point. The moving interface is tracked with the level set method and discretized using the conformal decomposition finite element method (CDFEM), allowing for the surface tension and the wetting model to be computed at the exact interface location. A variety of verification test cases for simple two- and three-dimensional geometries are presented to validate the current MCL model, which can exhibit grid independence when a proper scaling for the slip length is chosen. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.

  18. Speciation of mercury in fish samples by flow injection catalytic cold vapour atomic absorption spectrometry

    International Nuclear Information System (INIS)

    Zhang Yanlin; Adeloju, Samuel B.

    2012-01-01

    Highlights: ► Successful speciation of inorganic and organic Hg with Fe³⁺, Cu²⁺ and thiourea as catalysts. ► Best sensitivity enhancement and similar sensitivity for MeHg and Hg²⁺ with Fe³⁺. ► Successful use of Hg²⁺ as the primary standard for quantification of inorganic and total Hg. ► Quantitative extraction of Hg and MeHg with 2 M HCl containing thiourea. ► Integration with FIA for rapid analysis with a sample throughput of 180 h⁻¹. - Abstract: A rapid flow injection catalytic cold vapour atomic absorption spectrometric (FI-CCV-AAS) method is described for the speciation and determination of mercury in biological samples. Varying concentrations of NaBH₄ were employed for mercury vapour generation from inorganic and from mixed inorganic and organic (total) Hg. The presence of Fe³⁺, Cu²⁺ and thiourea had a catalytic effect on mercury vapour generation from methylmercury (MeHg) and, when together, Cu²⁺ and thiourea had a synergistic catalytic effect on the vapour generation. Of the two metal ions, Fe³⁺ gave the best sensitivity enhancement, achieving the same sensitivity for MeHg as for inorganic Hg²⁺. Due to the similarity of the resulting sensitivities, Hg²⁺ was used successfully as a primary standard for the quantification of inorganic and total Hg. The catalysis was homogeneous in nature, and it was assumed that the breaking of the C-Hg bond was facilitated by the delocalization of the 5d electron pairs in the Hg atom. The extraction of MeHg and inorganic mercury (In-Hg) from fish samples was achieved quantitatively with hydrochloric acid in the presence of thiourea, and the species were determined by FI-CCV-AAS. The application of the method to the quantification of mercury species in a fish liver reference material DOLT-4 gave 91.5% and 102.3% recoveries for total and methyl mercury, respectively. The use of flow injection enabled rapid analysis with a sample throughput of 180 h⁻¹.

  19. Industrial aspects of gas-liquid two-phase flow

    International Nuclear Information System (INIS)

    Hewitt, G.F.

    1977-01-01

    The lecture begins by reviewing the various types of plant in which two phase flow occurs. Specifically, boiling plant, condensing plant and pipelines are reviewed, and the various two phase flow problems occurring in them are described. Of course, many other kinds of chemical engineering plant involve two phase flow, but are somewhat outside the scope of this lecture. This would include distillation columns, vapor-liquid separators, absorption towers etc. Other areas of industrial two phase flow which have been omitted for space reasons from this lecture are those concerned with gas/solids, liquid/solid and liquid/liquid flows. There then follows a description of some of the two phase flow processes which are relevant in industrial equipment and where special problems occur. The topics chosen are as follows: (1) pressure drop; (2) horizontal tubes - separation effects, non-uniformities in heat transfer coefficient, effect of bends on dryout; (3) multicomponent mixtures - effects in pool boiling, mass transfer effects in condensation and Marangoni effects; (4) flow distribution - manifold problems in single phase flow, separation effects at a single T-junction in two phase flow and distribution in manifolds in two phase flow; (5) instability - oscillatory instability, special forms of instability in cryogenic systems; (6) nucleate boiling - effect of variability of surface, unresolved problems in forced convective nucleate boiling; and (7) shell side flows - flow patterns, cross flow boiling, condensation in cross flow

  20. Flows in networks under fuzzy conditions

    CERN Document Server

    Bozhenyuk, Alexander Vitalievich; Kacprzyk, Janusz; Rozenberg, Igor Naymovich

    2017-01-01

    This book offers a comprehensive introduction to fuzzy methods for solving flow tasks in both transportation and networks. It analyzes the problems of minimum cost and maximum flow finding with fuzzy nonzero lower flow bounds, and describes solutions to minimum cost flow finding in a network with fuzzy arc capacities and transmission costs. After a concise introduction to flow theory and tasks, the book analyzes two important problems. The first is related to determining the maximum volume for cargo transportation in the presence of uncertain network parameters, such as environmental changes, measurement errors and repair work on the roads. These parameters are represented here as fuzzy triangular, trapezoidal numbers and intervals. The second problem concerns static and dynamic flow finding in networks under fuzzy conditions, and an effective method that takes into account the network’s transit parameters is presented here. All in all, the book provides readers with a practical reference guide to state-of-...

  1. Path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong

    2016-08-20

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
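    A much-simplified, hedged sketch of the ensemble idea: sample realizations of an uncertain current field from a canonical random variable, evaluate the travel time of a candidate path in each realization, and summarize the statistics. The paper solves a Pontryagin-based boundary value problem per realization; here a fixed straight-line path and a toy single-gyre current (which ignores cross-track drift) stand in for that step, and every name and number below is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(0)
      speed = 1.0                                     # vehicle speed through the water

      def current(x, y, amp):
          """Toy gyre: u = -amp*sin(pi x)cos(pi y), v = amp*cos(pi x)sin(pi y)."""
          return (-amp * np.sin(np.pi * x) * np.cos(np.pi * y),
                   amp * np.cos(np.pi * x) * np.sin(np.pi * y))

      def travel_time(path, amp):
          """Integrate ds / (along-track ground speed) for one flow realization."""
          t = 0.0
          for (x0, y0), (x1, y1) in zip(path[:-1], path[1:]):
              dx, dy = x1 - x0, y1 - y0
              seg = np.hypot(dx, dy)
              tx, ty = dx / seg, dy / seg                        # unit tangent
              u, v = current(0.5 * (x0 + x1), 0.5 * (y0 + y1), amp)
              ground = speed + u * tx + v * ty                   # simplified ground speed
              t += seg / max(ground, 1e-6)
          return t

      path = np.column_stack([np.linspace(0, 1, 50), np.linspace(0, 1, 50)])  # straight line
      amps = 0.3 + 0.1 * rng.standard_normal(200)                # ensemble of gyre strengths
      times = np.array([travel_time(path, a) for a in amps])
      print(f"mean travel time {times.mean():.3f}, std {times.std():.3f}, "
            f"95th percentile {np.percentile(times, 95):.3f}")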

  2. Sample problem manual for benchmarking of cask analysis codes

    International Nuclear Information System (INIS)

    Glass, R.E.

    1988-02-01

    A series of problems has been defined to evaluate structural and thermal codes. These problems were designed to simulate the hypothetical accident conditions given in Title 10 of the Code of Federal Regulations, Part 71 (10CFR71) while retaining simple geometries. This produced a problem set that exercises the ability of the codes to model the pertinent physical phenomena without requiring extensive use of computer resources. The solutions that are presented are consensus solutions based on computer analyses done by both national laboratories and industry in the United States, United Kingdom, France, Italy, Sweden, and Japan. The intent of this manual is to provide code users with a set of standard structural and thermal problems and solutions which can be used to evaluate individual codes. 19 refs., 19 figs., 14 tabs

  3. Experimental study on flow pattern transitions for inclined two-phase flow

    Energy Technology Data Exchange (ETDEWEB)

    Kwak, Nam Yee; Lee, Jae Young [Handong Univ., Pohang (Korea, Republic of); Kim, Man Woong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2007-07-01

    In this paper, experimental data on flow pattern transitions for inclination angles from 0° to 90° are presented. A test section 2 mm long with an I.D. of 1 inch was constructed from transparent material. The test section is supported by an aluminum frame so that it can be placed at any arbitrary inclination angle. The air-water two-phase flow is observed at room temperature and atmospheric conditions using both a high-speed camera and a void impedance meter. The signal is sampled at a rate of 1 kHz and is analyzed under fully developed conditions. Based on the experimental data, flow pattern maps are constructed for various inclination angles. As the inclination angle increases from 0° to 90°, the flow pattern transitions on the jg-jf plane change, for example from stratified flow to plug or slug flow, or from plug flow to bubbly flow. The transition lines between pattern regimes shift or sometimes disappear depending on the inclination angle.

  4. Accurate treatment of material interface dynamics in the calculation of one-dimensional two-phase flows by the integral method of characteristics

    International Nuclear Information System (INIS)

    Shin, Y.W.; Wiedermann, A.H.

    1984-01-01

    Accurate numerical methods for treating the junction and boundary conditions needed in the transient two-phase flows of a piping network were published earlier by us; the same methods are used here to formulate the treatment of the material interface as a moving boundary. The method formulated is used in a computer program to calculate sample problems designed to test the ability and accuracy limits of the numerical methods for calculating transient two-phase flows in the piping network downstream of a PWR pressurizer. Independent exact analytical solutions for the sample problems are used as the basis of a critical evaluation of the proposed numerical methods. The evaluation revealed that the proposed boundary scheme indeed generates very accurate numerical results. However, in some extreme flow conditions, numerical difficulties were experienced that eventually led to numerical instability. This paper further discusses a special technique to overcome the difficulty.

  5. Reverse flow injection spectrophotometric determination of thiram and nabam fungicides in natural water samples

    International Nuclear Information System (INIS)

    Asghar, M.; Yaqoob, M.; Nabi, A.

    2014-01-01

    A reverse flow injection (rFI) spectrophotometric method is reported for the determination of thiram and nabam fungicides in natural water samples. The method is based on the reduction of iron(III) in the presence of thiram/nabam in acidic medium at 60 °C, and the formation of the iron(II)-ferricyanide complex was measured at 790 nm. The limits of detection (3s of the blank) were 0.01 and 0.05 µg mL⁻¹ for thiram and nabam, respectively, with a sample throughput of 60 h⁻¹. Calibration graphs were linear over the ranges 0.02-8.0 µg mL⁻¹ (R² = 0.9999, n = 8) and 0.1-30 µg mL⁻¹ (R² = 0.9982, n = 10) for thiram and nabam, with relative standard deviations (RSDs; n = 3) in the range 0.8-1.6%, respectively. Experimental parameters and potential interferences were examined. Thiram and nabam were determined in natural water samples using a solid-phase extraction (SPE) procedure, and recoveries were in the ranges 93±3 to 105±2% and 87±4 to 102±3%, respectively. The results obtained were not significantly different from those of an HPLC method. (author)

  6. Mathematical modeling of swirled flows in industrial applications

    Science.gov (United States)

    Dekterev, A. A.; Gavrilov, A. A.; Sentyabov, A. V.

    2018-03-01

    Swirling flows are widely used in technological devices and are characterized by a wide range of flow regimes. 3D mathematical modeling of such flows is widely used in research and design. For correct mathematical modeling of such flows, it is necessary to use turbulence models that take into account the important features of the flow. Based on experience in the computational modeling of a wide class of problems with swirling flows, recommendations on the use of turbulence models for calculating applied problems are proposed.

  7. Separation of flow

    CERN Document Server

    Chang, Paul K

    2014-01-01

    Interdisciplinary and Advanced Topics in Science and Engineering, Volume 3: Separation of Flow presents the problem of the separation of fluid flow. This book provides information covering the fields of basic physical processes, analyses, and experiments concerning flow separation. Organized into 12 chapters, this volume begins with an overview of the flow separation on the body surface as discussed in various classical examples. This text then examines the analytical and experimental results of the laminar boundary layer of steady, two-dimensional flows in the subsonic speed range. Other chapt

  8. Does parent-child agreement vary based on presenting problems? Results from a UK clinical sample.

    Science.gov (United States)

    Cleridou, Kalia; Patalay, Praveetha; Martin, Peter

    2017-01-01

    Discrepancies are often found between child and parent reports of child psychopathology; nevertheless, the role of the child's presenting difficulties in relation to these is underexplored. This study investigates whether parent-child agreement on the conduct and emotional scales of the Strengths and Difficulties Questionnaire (SDQ) varied as a result of certain child characteristics, including the child's presenting problems to clinical services, age and gender. The UK-based sample consisted of 16,754 clinical records of children aged 11-17, the majority of whom were female (57%) and White (76%). The dataset was provided by the Child Outcomes Research Consortium, which collects outcome measures from child services across the UK. Clinicians reported the child's presenting difficulties, and parents and children completed the SDQ. Using correlation analysis, the main findings indicated that agreement varied as a result of the child's difficulties for reports of conduct problems, and this seemed to be related to the presence or absence of externalising difficulties in the child's presentation. This was not the case for reports of emotional difficulties. In addition, agreement was higher when reporting problems not consistent with the child's presentation; for instance, agreement on conduct problems was greater for children presenting with internalising problems. Lastly, the children's age and gender did not seem to have an impact on agreement. These findings demonstrate that certain child presenting difficulties, and in particular conduct problems, may be related to informant agreement and need to be considered in clinical practice and research. Trial registration: this study was observational and as such did not require trial registration.

  9. The use of wavelet transforms in the solution of two-phase flow problems

    International Nuclear Information System (INIS)

    Moridis, G.J.; Nikolaou, M.; You, Yong

    1994-10-01

    In this paper we present the use of wavelets to solve the nonlinear partial differential equation (PDE) of two-phase flow in one dimension. Wavelet transforms allow a drastically different approach to the discretization of space. In contrast to the traditional trigonometric basis functions, wavelets approximate a function not by cancellation but by placement of wavelets at appropriate locations. When an abrupt change, such as a shock wave or a spike, occurs in a function, only local coefficients in a wavelet approximation will be affected. The unique feature of wavelets is their Multi-Resolution Analysis (MRA) property, which allows seamless investigation at any spatial resolution. The use of wavelets is tested in the solution of the one-dimensional Buckley-Leverett problem against analytical solutions and solutions obtained from standard numerical models. Two classes of wavelet bases (Daubechies and Chui-Wang) and two methods (Galerkin and collocation) are investigated. We determine that the Chui-Wang wavelets and a collocation method provide the optimum wavelet solution for this type of problem. Increasing the resolution level improves the accuracy of the solution, but the order of the basis function seems to be far less important. Our results indicate that wavelet transforms are an effective and accurate method which does not suffer from oscillations or numerical smearing in the presence of steep fronts.
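    A small illustration (my own, not the paper's solver) of the property the authors exploit: for a profile with a sharp front, only the wavelet coefficients located near the front are significant, so the representation can be refined locally. The toy saturation profile and thresholds are assumptions; the example requires the PyWavelets package.

      import numpy as np
      import pywt

      x = np.linspace(0.0, 1.0, 512)
      saturation = np.where(x < 0.4, 1.0 - 0.8 * x, 0.1)    # toy profile with a front at x = 0.4

      coeffs = pywt.wavedec(saturation, "db4", level=5)     # multi-resolution decomposition
      for band, detail in zip(range(5, 0, -1), coeffs[1:]): # coarsest detail band first
          significant = int(np.sum(np.abs(detail) > 0.01))
          print(f"detail band {band}: {significant:3d} of {detail.size:3d} coefficients exceed 0.01")

      # Reconstruction after discarding the small coefficients stays accurate at the front.
      thresholded = [coeffs[0]] + [pywt.threshold(d, 0.01, mode="hard") for d in coeffs[1:]]
      recon = pywt.waverec(thresholded, "db4")[: saturation.size]
      print("max reconstruction error:", np.abs(recon - saturation).max())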

  10. Gambling Type, Substance Abuse, Health and Psychosocial Correlates of Male and Female Problem Gamblers in a Nationally Representative French Sample.

    Science.gov (United States)

    Bonnaire, C; Kovess-Masfety, V; Guignard, R; Richard, J B; du Roscoät, E; Beck, F

    2017-06-01

    Many studies carried out on treatment-seeking problem gamblers (PG) have reported high levels of comorbid substance use disorders, and mental and physical health problems. Nevertheless, general population studies are still sparse, most of them have been carried out in the United States or Canada, and gender differences have not always been considered. Thus, the aim of this study was to describe the type of games, and psychological and physical correlates in male and female PG in a nationally representative French sample. The total sample studied involved 25,647 subjects aged 15-85 years, including 333 PG and 25,314 non-problem gamblers (NPG). Data were extracted from a large survey of a representative sample of the French general population. They were evaluated for sociodemographic variables, gambling behavior, type of gambling activity, substance use, psychological distress, body mass index, chronic disease, and lack of sleep. Overall, there were significant differences between PG and NPG in gender, age, education, employment and marital status, substance use disorders (alcohol, tobacco, cannabis, cocaine and heroin), psychological distress, obesity, lack of sleep and type of gambling activity. Although male and female PG had different profiles, the gambling type, especially strategic games, appeared as an important variable in the relationship between gender and problem gambling. This research underlines the importance of considering gender differences and gambling type in the study of gambling disorders. Identifying specific factors in the relationship between gender, gambling type and gambling problems may help improve clinical interventions and health promotion strategies.

  11. ACFAC: a cash flow analysis code for estimating product price from an industrial operation

    International Nuclear Information System (INIS)

    Delene, J.G.

    1980-04-01

    A computer code is presented which uses a discounted cash flow methodology to obtain an average product price for an industrial process. The general discounted cash flow method is discussed. Special code options include multiple treatments of interest during construction and other preoperational costs, investment tax credits, and different methods for tax depreciation of capital assets. Two options for allocating the cost of plant decommissioning are available. The FORTRAN code listing and the computer output for a sample problem are included.
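    A minimal sketch of the levelized-price idea behind a discounted-cash-flow code of this kind: the product price is chosen so that the present value of revenues equals the present value of all expenditures. The cash-flow figures, discount rate and the simple treatment below are illustrative assumptions only, not the ACFAC model.

      capital = [400.0, 600.0]          # construction outlays in years -2, -1 (M$)
      om_cost = [50.0] * 20             # annual O&M over a 20-year operating life (M$)
      fuel_cost = [30.0] * 20           # annual fuel cost (M$)
      output = [7.0e6] * 20             # annual product output (MWh)
      rate = 0.08                       # discount rate

      def pv(values, rate, first_year):
          """Present value referred to the start of operation (year 0)."""
          return sum(v / (1.0 + rate) ** (first_year + k) for k, v in enumerate(values))

      pv_costs = pv(capital, rate, -2) + pv(om_cost, rate, 1) + pv(fuel_cost, rate, 1)
      pv_output = pv(output, rate, 1)
      levelized_price = pv_costs / pv_output          # M$ per MWh
      print(f"levelized price ~ {1e6 * levelized_price:.2f} $/MWh")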

  12. Flow modelling of plant processes for fault diagnosis

    International Nuclear Information System (INIS)

    Praetorius, N.; Duncan, K.D.

    1989-01-01

    Flow and its interruption or degradation is seen by many people in industry to be the essential problem in fault diagnosis. It is this observation which has motivated the representation of a complex simulation of a process plant presented here. The display system we have developed represents the mass and energy flow functions of the plant and the relationships between such flow functions. In this report we shall mainly discuss how such a representation seems to provide opportunities to design alarm systems as an integral part of the flow function representation itself and to solve two of the most intricate problems in diagnosis, namely the problem of symptom referral and the problem of confusable faults. (author)

  13. Path optimization method for the sign problem

    Directory of Open Access Journals (Sweden)

    Ohnishi Akira

    2018-01-01

    Full Text Available We propose a path optimization method (POM) to evade the sign problem in Monte-Carlo calculations for complex actions. Among the many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or sampled stochastically. When we have singular points of the action or multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One of the ways to avoid the singular points is to optimize the integration path, which is designed not to hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f ∈ R) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose the POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.

  14. Modeling heterogeneous unsaturated porous media flow at Yucca Mountain

    International Nuclear Information System (INIS)

    Robey, T.H.

    1994-01-01

    Geologic systems are inherently heterogeneous and this heterogeneity can have a significant impact on unsaturated flow through porous media. Most previous efforts to model groundwater flow through Yucca Mountain have used stratigraphic units with homogeneous properties. However, modeling heterogeneous porous and fractured tuff in a more realistic manner requires numerical methods for generating heterogeneous simulations of the media, scaling of material properties from core scale to computational scale, and flow modeling that allows channeling. The Yucca Mountain test case of the INTRAVAL project is used to test the numerical approaches. Geostatistics is used to generate more realistic representations of the stratigraphic units and heterogeneity within units is generated using sampling from property distributions. Scaling problems are reduced using an adaptive grid that minimizes heterogeneity within each flow element. A flow code based on the dual mixed-finite-element method that allows for heterogeneity and channeling is employed. In the Yucca Mountain test case, the simulated volumetric water contents matched the measured values at drill hole USW UZ-16 except in the nonwelded portion of Prow Pass

  15. Method of straight lines for a Bingham problem as a model for the flow of waxy crude oils

    Directory of Open Access Journals (Sweden)

    German Ariel Torres

    2005-11-01

    Full Text Available In this work, we develop a method of straight lines for solving a Bingham problem that models the flow of waxy crude oils. The model describes the flow of mineral oils with a high content of paraffin at temperatures below the cloud point (i.e. the crystallization temperature of paraffin), and more specifically below the pour point, at which the crystals aggregate and the oil takes on a gel-like structure. From the rheological point of view, such a system can be modelled as a Bingham fluid whose parameters evolve according to the volume fraction of crystallized paraffin and the aggregation degree of the crystals. We prove that the method is well defined for all times, establish a monotonicity property and the qualitative behaviour of the solution, and prove a convergence theorem. The results are compared with numerical experiments at the end of this article.

  16. A non-traditional fluid problem: transition between theoretical models from Stokes’ to turbulent flow

    Science.gov (United States)

    Salomone, Horacio D.; Olivieri, Néstor A.; Véliz, Maximiliano E.; Raviola, Lisandro A.

    2018-05-01

    In the context of fluid mechanics courses, it is customary to consider the problem of a sphere falling under the action of gravity inside a viscous fluid. Under suitable assumptions, this phenomenon can be modelled using Stokes' law and is routinely reproduced in teaching laboratories to determine terminal velocities and fluid viscosities. In many cases, however, the measured physical quantities show important deviations with respect to the predictions deduced from the simple Stokes' model, and the causes of these apparent 'anomalies' (for example, whether the flow is laminar or turbulent) are seldom discussed in the classroom. On the other hand, there are various variable-mass problems that students tackle during elementary mechanics courses and which are discussed in many textbooks. In this work, we combine both kinds of problems and analyse—both theoretically and experimentally—the evolution of a system composed of a sphere pulled by a chain of variable length inside a tube filled with water. We investigate the effects of different forces acting on the system such as weight, buoyancy, viscous friction and drag force. By means of a sequence of mathematical models of increasing complexity, we obtain a progressive fit that accounts for the experimental data. The contrast between the various models exposes the strengths and weaknesses of each one. The proposed experiment can be useful for integrating concepts of elementary mechanics and fluids, and is suitable as a laboratory practical, stressing the importance of the experimental validation of theoretical models and showing the model-building process in a didactic framework.
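    A simplified sketch (constant mass, no chain) of the kind of model comparison the authors describe: a sphere sinking in water under gravity, buoyancy and either Stokes drag or a quadratic drag law. The absurdly large terminal velocity and Reynolds number returned by the Stokes model flag where that approximation breaks down. All parameter values are illustrative.

      import numpy as np

      g, rho_f, mu = 9.81, 1000.0, 1.0e-3          # water properties (SI units)
      rho_s, r = 7800.0, 2.0e-3                    # steel sphere of radius 2 mm (assumed)
      m = rho_s * 4.0 / 3.0 * np.pi * r ** 3
      buoy = rho_f * 4.0 / 3.0 * np.pi * r ** 3 * g
      area = np.pi * r ** 2

      def accel(v, model):
          if model == "stokes":
              drag = 6.0 * np.pi * mu * r * v                   # Stokes' law
          else:
              drag = 0.5 * rho_f * 0.44 * area * v * abs(v)     # quadratic drag, Cd ~ 0.44
          return (m * g - buoy - drag) / m

      for model in ("stokes", "quadratic"):
          v, dt = 0.0, 1e-3
          for _ in range(100000):                               # explicit Euler to steady state
              v += dt * accel(v, model)
          Re = rho_f * v * 2 * r / mu
          print(f"{model:9s}: terminal velocity ~ {v:.3f} m/s, Reynolds number ~ {Re:.0f}")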

  17. The link between ethnicity, social disadvantage and mental health problems in a school-based multiethnic sample of children in the Netherlands

    NARCIS (Netherlands)

    Adriaanse, M.; Veling, W.; Doreleijers, T.A.H.; van Domburgh, L.

    2014-01-01

    To investigate to what extent differences in prevalence and types of mental health problems between ethnic minority and majority youth can be explained by social disadvantage. Mental health problems were assessed in a sample of 1,278 schoolchildren (55 % Dutch, 32 % Moroccan and 13 % Turkish; mean

  18. The link between ethnicity, social disadvantage and mental health problems in a school-based multiethnic sample of children in the Netherlands

    NARCIS (Netherlands)

    Adriaanse, Marcia; Veling, Wim; Doreleijers, Theo; van Domburgh, Lieke

    To investigate to what extent differences in prevalence and types of mental health problems between ethnic minority and majority youth can be explained by social disadvantage. Mental health problems were assessed in a sample of 1,278 schoolchildren (55 % Dutch, 32 % Moroccan and 13 % Turkish; mean

  19. ARBITRARY INTERACTION OF PLANE SUPERSONIC FLOWS

    Directory of Open Access Journals (Sweden)

    P. V. Bulat

    2015-11-01

    Full Text Available Subject of study. We consider the Riemann problem for the parameters at the collision of two plane flows at a certain angle. The problem is solved in the exact statement. Most cases of interference of both stationary and non-stationary gas-dynamic discontinuities followed by supersonic flows can be reduced to the problem of the arbitrary interaction of two supersonic flows. Depending on the ratio of the parameters in the flows, the outgoing discontinuities turn out to be shock waves or rarefaction waves. In some cases, there is no solution at all. It is important to know how to find the domain of existence of the relevant solutions, as the type of shock-wave structure in these domains is known in advance. The Riemann problem is used in numerical methods such as the method of Godunov. As a rule, an approximate solution, known as the Osher solution, is used, but for a number of problems requiring high precision, this problem needs to be solved in the exact statement. Main results. Domains of existence of solutions with different types of shock-wave structure have been considered. The boundaries of existence of solutions with two outgoing shock waves are defined analytically, as well as those with an outgoing shock wave and a rarefaction wave. We identify the range of Mach numbers and interaction angles for which there is no solution. Specific flows with two outgoing rarefaction waves are not considered. Practical significance. The results supplement the interference theory of stationary gas-dynamic discontinuities and can be used to develop new methods of numerical calculation with extraction of discontinuities.

  20. On Howard's conjecture in heterogeneous shear flow problem

    Indian Academy of Sciences (India)

    Department of Mathematics, H.P. University, Shimla 171 005, India; Sidharth Govt. Degree College, Nadaun, Dist. Hamirpur 177 033 ... in proving it in the case of the Garcia-type [3] flows wherein the basic velocity distribution has a point of ...

  1. Development of evaluation method on flow-induced vibration and corrosion of components in two-phase flow by coupled analysis. 1. Evaluation of effects of flow-induced vibration on structural material integrity

    International Nuclear Information System (INIS)

    Naitoh, Masanori; Uchida, Shunsuke; Koshizuka, Seiichi; Ninokata, Hisashi; Anahara, Naoki; Dosaki, Koji; Katono, Kenichi; Akiyama, Minoru; Saitoh, Hiroaki

    2007-01-01

    Problems in major components and structural materials of nuclear power plants have often been caused by flow-induced vibration, corrosion and their overlapping effects. In order to establish safe and reliable plant operation, it is necessary to predict future problems for structural materials based on combined analyses of flow dynamics and corrosion and to mitigate them before they become serious issues for plant operation. An innovative method for evaluating flow-induced vibration of structures in two-phase flow by combined analyses of three-dimensional flow dynamics and structures is introduced. (author)

  2. Numerical method for two-phase flow discontinuity propagation calculation

    International Nuclear Information System (INIS)

    Toumi, I.; Raymond, P.

    1989-01-01

    In this paper, we present a class of numerical shock-capturing schemes for hyperbolic systems of conservation laws modelling two-phase flow. First, we solve the Riemann problem for a two-phase flow with unequal velocities. Then, we construct two approximate Riemann solvers: a one-intermediate-state Riemann solver and a generalized Roe approximate Riemann solver. We give some numerical results for one-dimensional shock-tube problems and for a standard two-phase flow heat addition problem involving two-phase flow instabilities.
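    As a scalar stand-in (my own toy, not the two-fluid scheme of the paper), the sketch below shows the shock-capturing ingredient being discussed: a Godunov-type finite-volume update whose numerical flux comes from the exact solution of local Riemann problems, here for the inviscid Burgers equation u_t + (u^2/2)_x = 0 with shock-tube-like initial data.

      import numpy as np

      def riemann_flux(ul, ur):
          """Godunov flux for Burgers: exact Riemann solution sampled at x/t = 0."""
          if ul > ur:                                    # shock
              s = 0.5 * (ul + ur)
              return 0.5 * ul ** 2 if s > 0 else 0.5 * ur ** 2
          if ul > 0:                                     # rarefaction entirely to the right
              return 0.5 * ul ** 2
          if ur < 0:                                     # rarefaction entirely to the left
              return 0.5 * ur ** 2
          return 0.0                                     # sonic point inside the fan

      n, dx, t_end = 200, 1.0 / 200, 0.3
      u = np.where(np.arange(n) * dx < 0.5, 1.0, 0.0)    # discontinuous initial data
      t = 0.0
      while t < t_end:
          dt = min(0.4 * dx / max(abs(u).max(), 1e-12), t_end - t)   # CFL condition
          flux = np.array([riemann_flux(u[i], u[i + 1]) for i in range(n - 1)])
          u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])    # finite-volume update, fixed ends
          t += dt
      print("shock located near x =", np.argmax(u < 0.5) * dx,
            "(expected 0.5 + 0.5*t =", 0.5 + 0.5 * t_end, ")")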

  3. Parallel genetic algorithms with migration for the hybrid flow shop scheduling problem

    Directory of Open Access Journals (Sweden)

    K. Belkadi

    2006-01-01

    Full Text Available This paper addresses scheduling problems in hybrid flow shop-like systems with a migration parallel genetic algorithm (PGA_MIG). This parallel genetic algorithm model preserves genetic diversity by applying selection and reproduction mechanisms closer to those found in nature. The spatial structure of the population is modified by dividing it into disjoint subpopulations. From time to time, individuals are exchanged between the different subpopulations (migration). The influence of parameters and dedicated strategies is studied. These parameters are the number of independent subpopulations, the interconnection topology between subpopulations, the choice/replacement strategy for the migrant individuals, and the migration frequency. A comparison between the sequential and parallel versions of the genetic algorithm (GA) is provided. This comparison relates to the quality of the solution and the execution time of the two versions. The efficiency of the parallel model depends strongly on the parameters and especially on the migration frequency. Likewise, this parallel model gives a significant improvement in computational time if it is implemented on a parallel architecture offering a sufficient number of processors (as many processors as subpopulations).
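    A compact sketch of the island (migration) model being described: several GA subpopulations evolve independently and, every few generations, exchange their best individuals along a ring topology. The flow-shop instance, operators and parameter values below are illustrative assumptions, not the authors' PGA_MIG implementation.

      import random

      random.seed(1)
      JOBS, MACHINES = 8, 3
      PROC = [[random.randint(1, 9) for _ in range(JOBS)] for _ in range(MACHINES)]

      def makespan(order):
          done = [0] * MACHINES
          for job in order:
              for m in range(MACHINES):
                  done[m] = max(done[m], done[m - 1] if m else 0) + PROC[m][job]
          return done[-1]

      def offspring(p1, p2):
          """Order crossover followed by a swap mutation."""
          a, b = sorted(random.sample(range(JOBS), 2))
          child = [-1] * JOBS
          child[a:b] = p1[a:b]
          rest = [j for j in p2 if j not in child]
          for i in range(JOBS):
              if child[i] == -1:
                  child[i] = rest.pop(0)
          i, j = random.sample(range(JOBS), 2)
          child[i], child[j] = child[j], child[i]
          return child

      N_ISLANDS, POP, GENERATIONS, MIGRATE_EVERY = 4, 30, 200, 20
      islands = [[random.sample(range(JOBS), JOBS) for _ in range(POP)] for _ in range(N_ISLANDS)]

      for gen in range(1, GENERATIONS + 1):
          for isl in islands:                              # independent evolution on each island
              isl.sort(key=makespan)
              parents = isl[: POP // 2]
              isl[:] = parents + [offspring(*random.sample(parents, 2))
                                  for _ in range(POP - len(parents))]
          if gen % MIGRATE_EVERY == 0:                     # ring migration of the best individual
              bests = [min(isl, key=makespan) for isl in islands]
              for k, isl in enumerate(islands):
                  isl[isl.index(max(isl, key=makespan))] = bests[(k - 1) % N_ISLANDS][:]

      print("best makespan found:", min(makespan(p) for isl in islands for p in isl))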

  4. A steady state solution for the ditch drainage problem with special reference to seepage face and unsaturated zone flow contribution: Derivation of a new drainage spacing equation

    Science.gov (United States)

    Yousfi, Ammar; Mechergui, Mohammed

    2016-04-01

    The seepage face is an important feature of the drainage process when recharge occurs to a permeable region with lateral outlets. Examples of the formation of a seepage face above the downstream water level include agricultural land drained by ditches. The flow problem for these drains has been investigated extensively by many researchers (e.g. Rubin, 1968; Hornberger et al. 1969; Verma and Brutsaert, 1970; Gureghian and Youngs, 1975; Vauclin et al., 1975; Skaggs and Tang, 1976; Youngs, 1990; Gureghian, 1981; Dere, 2000; Rushton and Youngs, 2010; Youngs, 2012; Castro-Orgaz et al., 2012) and may be tackled either using variably saturated flow models, or the complete 2-D solution of the Laplace equation, or using the Dupuit-Forchheimer approximation; these are the most widely accepted methods for obtaining analytical solutions to unconfined drainage problems. However, the investigation reported by Clement et al. (1996) suggests that accounting for the seepage face alone, as in the fully saturated flow model, does not improve the discharge estimate because it disregards the unsaturated zone flow contribution. This assumption can induce errors in the location of the water table surface and results in an underestimation of the seepage face and the net discharge (e.g. Skaggs and Tang, 1976; Vauclin et al., 1979; Clement et al., 1996). The importance of the flow in the unsaturated zone has been highlighted by many authors on the basis of laboratory experiments and/or numerical experimentation (e.g. Rubin, 1968; Verma and Brutsaert, 1970; Todsen, 1973; Vauclin et al., 1979; Ahmad et al., 1993; Anguela, 2004; Luthin and Day, 1955; Shamsai and Narasimhan, 1991; Wise et al., 1994; Clement et al., 1996; Boufadel et al., 1999; Romano et al., 1999; Kao et al., 2001; Kao, 2002). These studies demonstrate the failure of fully saturated flow models and suggest that the error made when using these models depends not only on soil properties but also on the infiltration rate, as reported by Kao et

  5. Effective flow-accelerated corrosion programs in nuclear facilities

    International Nuclear Information System (INIS)

    Esselman, Thomas C.; McBrine, William J.

    2004-01-01

    Piping flow-accelerated corrosion programs in nuclear power generation facilities classically comprise the selection of inspection locations with the assistance of a predictive methodology such as the Electric Power Research Institute computer codes CHECMATE or CHECWORKS, performing inspections, conducting structural evaluations of the inspected components, and implementing the appropriate sample expansion and corrective actions. Performing such a sequence of steps can be effective in identifying thinned components and implementing the appropriate short-term and long-term actions necessary to resolve flow-accelerated corrosion related problems. A maximally effective flow-accelerated corrosion (FAC) program requires an understanding of many programmatic details. These include the procedural control of the program, effective use of historical information, managing the activities performed during a limited-duration outage, allocating resources based on risk, having an acute awareness of how the plant is operated, investigating components removed from the plant, and several others. This paper describes such details and methods that lead to a flow-accelerated corrosion program that effectively minimizes the risk of failure due to flow-accelerated corrosion and provides full and complete documentation of the program. (author)

  6. Quantification of regional cerebral blood flow (rCBF) measurement with one point sampling by sup 123 I-IMP SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Munaka, Masahiro [University of Occupational and Enviromental Health, Kitakyushu (Japan); Iida, Hidehiro; Murakami, Matsutaro

    1992-02-01

    A handy method of quantifying regional cerebral blood flow (rCBF) measured by {sup 123}I-IMP SPECT was designed. A standard input function was made, and the sampling time used to calibrate this standard input function by one-point sampling was optimized. An average standard input function was obtained from continuous arterial samplings of 12 healthy adults. The best sampling time was that which minimized the difference between the integral of the standard input function calibrated by one-point sampling and the integral of the input function obtained from continuous arterial sampling. This time was 8 minutes after an intravenous injection of {sup 123}I-IMP, and the error was estimated to be {+-}4.1%. The rCBF values obtained by this method were evaluated by comparing them with the rCBF values obtained from the input function with continuous arterial samplings in 2 healthy adults and a patient with cerebral infarction. A significant correlation (r=0.764, p<0.001) was obtained between the two. (author).

  7. Hardness and Approximation for Network Flow Interdiction

    OpenAIRE

    Chestnut, Stephen R.; Zenklusen, Rico

    2015-01-01

    In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...

  8. A Lateral Flow Strip Based Aptasensor for Detection of Ochratoxin A in Corn Samples

    Directory of Open Access Journals (Sweden)

    Guilan Zhang

    2018-01-01

    Full Text Available Ochratoxin A (OTA) is a mycotoxin identified as a contaminant in grains and wine throughout the world, and convenient, rapid and sensitive detection methods for OTA have been a long-felt need for food safety monitoring. Herein, we present a new competitive-format lateral flow strip fluorescent aptasensor for one-step determination of OTA in corn samples. Briefly, biotin-cDNA was immobilized on the surface of a nitrocellulose filter on the test line. Without OTA, the Cy5-labeled aptamer combined with the complementary strands to form a stable double helix. In the presence of OTA, however, Cy5-aptamer/OTA complexes were generated, and therefore less free aptamer was captured in the test zone, leading to an obvious decrease in the fluorescent signal on the test line. The test strip showed an excellent linear relationship in the range from 1 ng·mL⁻¹ to 1000 ng·mL⁻¹ with an LOD of 0.40 ng·mL⁻¹, an IC15 value of 3.46 ng·mL⁻¹ and recoveries from 96.4% to 104.67% in spiked corn samples. Thus, the strip sensor developed in this study is an acceptable alternative for rapid detection of the OTA level in grain samples.

  9. Weighted piecewise LDA for solving the small sample size problem in face verification.

    Science.gov (United States)

    Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis

    2007-03-01

    A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.
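    A hedged sketch of the underlying issue, not of the authors' weighted piecewise method: when the feature dimension exceeds the number of training samples, the within-class scatter matrix is singular and a plain Fisher/LDA direction cannot be computed; a shrinkage (regularized) scatter estimate is one standard workaround. The data and parameters below are synthetic assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      dim, n_per_class = 200, 20                     # small sample size: 40 samples, 200 features
      genuine = rng.normal(0.0, 1.0, size=(n_per_class, dim))
      impostor = rng.normal(0.6, 1.0, size=(n_per_class, dim))

      def fisher_direction(a, b, shrink=0.1):
          """w = Sw^-1 (mean_a - mean_b), with Sw shrunk toward a scaled identity."""
          sw = np.cov(a, rowvar=False) * (len(a) - 1) + np.cov(b, rowvar=False) * (len(b) - 1)
          sw = (1.0 - shrink) * sw + shrink * np.trace(sw) / dim * np.eye(dim)
          return np.linalg.solve(sw, a.mean(axis=0) - b.mean(axis=0))

      pooled_rank = np.linalg.matrix_rank(np.cov(np.vstack([genuine, impostor]), rowvar=False))
      print("rank of pooled scatter:", pooled_rank, "<< dimension", dim)   # hence singular Sw

      w = fisher_direction(genuine, impostor)
      scores_g, scores_i = genuine @ w, impostor @ w
      threshold = 0.5 * (scores_g.mean() + scores_i.mean())
      accuracy = 0.5 * ((scores_g > threshold).mean() + (scores_i < threshold).mean())
      print(f"verification accuracy on the training data: {accuracy:.2f}")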

  10. Speciation of mercury in fish samples by flow injection catalytic cold vapour atomic absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yanlin [NanoScience and Sensor Technology Research Group, School of Applied Sciences and Engineering, Monash University, Churchill, Victoria 3842 (Australia); Adeloju, Samuel B., E-mail: Sam.Adeloju@monash.edu [NanoScience and Sensor Technology Research Group, School of Applied Sciences and Engineering, Monash University, Churchill, Victoria 3842 (Australia)

    2012-04-06

    Highlights: ► Successful speciation of inorganic and organic Hg with Fe{sup 3+}, Cu{sup 2+} and thiourea as catalysts. ► Best sensitivity enhancement and similar sensitivity for MeHg and Hg{sup 2+} with Fe{sup 3+}. ► Successful use of Hg{sup 2+} as the primary standard for quantification of inorganic and total-Hg. ► Quantitative extraction of Hg and MeHg with 2 M HCl which contained thiourea. ► Integration with FIA for rapid analysis with a sample throughput of 180 h{sup -1}. - Abstract: A rapid flow injection catalytic cold vapour atomic absorption spectrometric (FI-CCV-AAS) method is described for speciation and determination of mercury in biological samples. Varying concentrations of NaBH{sub 4} were employed for mercury vapour generation from inorganic and mixture of inorganic and organic (total) Hg. The presence of Fe{sup 3+}, Cu{sup 2+} and thiourea had catalytic effect on mercury vapour generation from methylmercury (MeHg) and, when together, Cu{sup 2+} and thiourea had synergistic catalytic effect on the vapour generation. Of the two metal ions, Fe{sup 3+} gave the best sensitivity enhancement, achieving the same sensitivity for MeHg and inorganic Hg{sup 2+}. Due to similarity of resulting sensitivity, Hg{sup 2+} was used successfully as a primary standard for quantification of inorganic and total Hg. The catalysis was homogeneous in nature, and it was assumed that the breaking of the C-Hg bond was facilitated by the delocalization of the 5d electron pairs in Hg atom. The extraction of MeHg and inorganic mercury (In-Hg) in fish samples were achieved quantitatively with hydrochloric acid in the presence of thiourea and determined by FI-CCV-AAS. The application of the method to the quantification of mercury species in a fish liver reference material DOLT-4 gave 91.5% and 102.3% recoveries for total and methyl mercury

  11. PACTOLUS, Nuclear Power Plant Cost and Economics by Discounted Cash Flow Method. CLOTHO, Mass Flow Data Calculation for Program PACTOLUS

    International Nuclear Information System (INIS)

    Haffner, D.R.

    1976-01-01

    1 - Description of problem or function: PACTOLUS is a code for computing nuclear power costs using the discounted cash flow method. The cash flows are generated from input unit costs, time schedules and burnup data. CLOTHO calculates and communicates to PACTOLUS mass flow data to match a specified load factor history. 2 - Method of solution: Plant lifetime power costs are calculated using the discounted cash flow method. 3 - Restrictions on the complexity of the problem - Maxima of: 40 annual time periods into which all costs and mass flows are accumulated, 20 isotopic mass flows charged into and discharged from the reactor model
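
    The record does not reproduce PACTOLUS's equations, but the discounted cash flow method it refers to reduces to discounting yearly cash flows to a common reference year, as in the sketch below. The cash-flow values, discount rate, and 20-year horizon are purely illustrative assumptions.

        # Minimal discounted cash flow sketch (illustrative numbers, not PACTOLUS input data).
        def present_value(cash_flows, rate):
            """Discount a list of annual cash flows (years 1..N) back to year 0."""
            return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

        capital = -1000.0              # assumed year-0 outlay
        annual = [150.0] * 20          # assumed constant net cash flow over 20 annual periods
        npv = capital + present_value(annual, rate=0.08)
        print(f"net present value: {npv:.1f}")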

  12. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

    Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
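
    For readers unfamiliar with the baseline that superposition enhanced nested sampling builds on, the sketch below implements a bare-bones nested sampling loop for a one-dimensional toy likelihood. The uniform prior, Gaussian likelihood, and the brute-force rejection step used to satisfy the hard likelihood constraint are simplifying assumptions, not the authors' implementation.

        import math, random

        random.seed(1)
        K, iterations = 100, 600
        logL = lambda x: -0.5 * ((x - 0.5) / 0.1) ** 2     # toy Gaussian log-likelihood
        live = [random.random() for _ in range(K)]         # K live points from a uniform prior on [0, 1]

        Z, X_prev = 0.0, 1.0
        for i in range(1, iterations + 1):
            worst = min(live, key=logL)                    # lowest-likelihood live point
            X_i = math.exp(-i / K)                         # expected remaining prior volume
            Z += math.exp(logL(worst)) * (X_prev - X_i)    # accumulate the evidence estimate
            X_prev = X_i
            # Replace the worst point with a new prior draw of higher likelihood
            # (plain rejection sampling; real implementations use smarter constrained moves).
            while True:
                candidate = random.random()
                if logL(candidate) > logL(worst):
                    break
            live[live.index(worst)] = candidate

        print("estimated evidence:", Z)   # analytic value is about 0.1*sqrt(2*pi) ~ 0.2507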

  13. Singularities in Free Surface Flows

    Science.gov (United States)

    Thete, Sumeet Suresh

Free surface flows, where the shape of the interface separating two or more phases or liquids is unknown a priori, are commonplace in industrial applications and in nature. The distribution of drop sizes, the coalescence rate of drops, and the behavior of thin liquid films are crucial to understanding and enhancing industrial practices such as ink-jet printing, spraying, separation of chemicals, and coating flows. When a contiguous mass of liquid such as a drop, filament or film undergoes breakup to give rise to multiple masses, the topological transition is accompanied by a finite-time singularity. Such a singularity also arises when two or more masses of liquid merge into each other or coalesce. Thus the dynamics close to singularity determines the fate of about-to-form drops or films and the applications they are involved in, and therefore needs to be analyzed precisely. The primary goal of this thesis is to resolve and analyze the dynamics close to singularity when free surface flows experience a topological transition, using a combination of theory, experiments, and numerical simulations. The first problem under consideration focuses on the dynamics following flow shut-off in bottle-filling applications relevant to the pharmaceutical and consumer products industry, using numerical techniques based on Galerkin Finite Element Methods (GFEM). The second problem addresses the dual flow behavior of aqueous foams observed in oil and gas fields and estimates the relevant parameters that describe such flows through a series of experiments. The third problem aims at understanding drop formation of Newtonian and Carreau fluids, computationally using GFEM. The drops are formed as a result of imposed flow rates or expanding bubbles similar to those of piezo-actuated and thermal ink-jet nozzles. The focus of the fourth problem is on the evolution of thinning threads of Newtonian fluids and suspensions towards singularity, using computations based on GFEM and experimental

  14. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems show the capability to capture an enormous amount of detail and complicated wave patterns in flows having large disparities in fluid density and velocity, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.

  15. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method for approximating the distribution of a random variable because it uses deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
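
    To make the contrast in the record concrete, the sketch below compares crude Monte Carlo sampling with descriptive sampling for a standard normal variable: DS takes deterministic values at equally spaced probability levels and only randomizes their order. The sample size and target distribution are illustrative assumptions.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(42)
        n = 100

        # Crude Monte Carlo sampling: both the values and their order are random.
        cmcs = norm.ppf(rng.uniform(size=n))

        # Descriptive sampling: deterministic values at midpoint probabilities,
        # followed by a random permutation of their order.
        p = (np.arange(n) + 0.5) / n
        ds = rng.permutation(norm.ppf(p))

        print("CMCS mean/std:", cmcs.mean(), cmcs.std(ddof=1))
        print("DS   mean/std:", ds.mean(), ds.std(ddof=1))   # DS reproduces the moments more closely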

  16. An efficient genetic algorithm for a hybrid flow shop scheduling problem with time lags and sequence-dependent setup time

    Directory of Open Access Journals (Sweden)

    Farahmand-Mehr Mohammad

    2014-01-01

Full Text Available In this paper, a hybrid flow shop scheduling problem with a new approach considering time lags and sequence-dependent setup times in realistic situations is presented. Since few works have been carried out in this field, the need for better solutions is a motivation to extend heuristic or meta-heuristic algorithms. This type of production system is found in industries such as food processing, chemical, textile, metallurgical, printed circuit board, and automobile manufacturing. A mixed integer linear programming (MILP) model is proposed to minimize the makespan. Since this problem is known to be NP-hard, a meta-heuristic algorithm, namely a Genetic Algorithm (GA), and three heuristic algorithms (Johnson, SPTCH and Palmer) are proposed. Numerical experiments of different sizes are implemented to evaluate the performance of the presented mathematical programming model and the designed GA in comparison to the heuristic algorithms and a benchmark algorithm. Computational results indicate that the designed GA can produce near-optimal solutions in a short computational time for problems of different sizes.
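
    The record does not list its scheduling equations, but the makespan evaluation that any flow shop GA must perform for each candidate sequence can be sketched as below for a plain permutation flow shop. The processing-time matrix is an illustrative assumption, and the hybrid-stage, time-lag, and sequence-dependent setup extensions of the paper are omitted.

        # Makespan of a permutation flow shop schedule (illustrative sketch; the paper's
        # time lags, setup times, and parallel machines per stage are omitted).
        def makespan(sequence, proc):
            """proc[j][m] is the processing time of job j on machine m."""
            n_machines = len(proc[0])
            completion = [0.0] * n_machines
            for job in sequence:
                for m in range(n_machines):
                    ready = completion[m - 1] if m > 0 else 0.0
                    completion[m] = max(completion[m], ready) + proc[job][m]
            return completion[-1]

        proc = [[3, 2, 4], [1, 5, 2], [4, 1, 3]]    # assumed 3 jobs x 3 machines
        print(makespan([0, 1, 2], proc))            # evaluate one candidate sequence
        print(makespan([1, 0, 2], proc))            # a GA compares many such sequences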

  17. Continuous-flow leaching studies of crushed and cored SYNROC

    International Nuclear Information System (INIS)

    Coles, D.G.; Bazan, F.

    1980-01-01

Both crushed (150 to 300 μm) and cored (1.8 mm diameter) samples of SYNROC have been leached with the single-pass continuous-flow leaching equipment. Crushed samples of Cs-hollandite were also leached in a similar experiment. Temperatures used were 25 °C and 75 °C, and the leachates were 0.03 N NaHCO₃ and distilled water. Leaching rates from SYNROC C were ranked Cs > Sr ≥ Ca > Ba > Zr. A comparison of leaching rates is made between crushed SYNROC, cored SYNROC, and PNL 76-68 glass beads. Problems encountered when comparing the leaching rates of different waste forms are discussed.

  18. A Singular Perturbation Problem for Steady State Conversion of Methane Oxidation in Reverse Flow Reactor

    Directory of Open Access Journals (Sweden)

    Aang Nuryaman

    2012-11-01

Full Text Available The governing equations describing the methane oxidation process in a reverse flow reactor are given by a set of convection-diffusion equations with a nonlinear reaction term, where temperature and methane conversion are the dependent variables. In this study, the process is described by a one-dimensional pseudo-homogeneous model and takes place at a reaction rate for which the whole reactor process remains workable, so that the reaction can proceed at a fixed temperature. Under this condition, we restrict ourselves to solving the equations for the conversion only. From the available data, it turns out that the ratio of the diffusion term to the reaction term is small. Hence, this ratio is treated as a small parameter in our model, and this leads to a singular perturbation problem. Because the small parameter multiplies the highest-order term, numerical difficulties arise. Here, we present an analytical solution by means of matched asymptotic expansions. The results show that, up to and including the first order of approximation, the solution is in agreement with the exact and numerical solutions of the boundary value problem.

  19. Artificial Bee Colony Algorithm Based on K-Means Clustering for Multiobjective Optimal Power Flow Problem

    Directory of Open Access Journals (Sweden)

    Liling Sun

    2015-01-01

Full Text Available An improved multiobjective ABC algorithm based on K-means clustering, called CMOABC, is proposed. To speed up the convergence of the canonical MOABC, the way information is communicated in the employed bees' phase is modified. To maintain population diversity, a multiswarm technique based on K-means clustering is employed to decompose the population into many clusters. Because each subcomponent evolves separately, after every specified number of iterations the population is reclustered to facilitate information exchange among different clusters. Application of the new CMOABC to several multiobjective benchmark functions shows a marked improvement in performance over the fast nondominated sorting genetic algorithm (NSGA-II), the multiobjective particle swarm optimizer (MOPSO), and the multiobjective ABC (MOABC). Finally, the CMOABC is applied to solve the real-world optimal power flow (OPF) problem, which considers cost, loss, and emission impacts as the objective functions. The IEEE 30-bus test system is used to illustrate the application of the proposed algorithm. The simulation results demonstrate that, compared to NSGA-II, MOPSO, and MOABC, the proposed CMOABC is superior for solving the OPF problem in terms of optimization accuracy.
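
    As context for the K-means-based multiswarm step described above (not the authors' CMOABC code), the sketch below partitions a random population of candidate solutions into clusters with a plain K-means implementation. The population size, number of decision variables, and number of clusters are assumptions.

        import numpy as np

        def kmeans(points, k, iters=50, seed=0):
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), size=k, replace=False)]
            for _ in range(iters):
                # Assign each candidate solution to its nearest cluster centre.
                labels = np.argmin(((points[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
                # Recompute centres, keeping the old centre if a cluster becomes empty.
                centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            return labels, centers

        population = np.random.default_rng(1).uniform(size=(60, 5))   # assumed 60 candidates, 5 variables
        labels, centers = kmeans(population, k=4)
        print(np.bincount(labels, minlength=4))   # how many candidates each sub-swarm receives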

  20. Minimizing total weighted completion time in a proportionate flow shop

    NARCIS (Netherlands)

    Shakhlevich, N.V.; Hoogeveen, J.A.; Pinedo, M.L.

    1998-01-01

    We study the special case of the m machine flow shop problem in which the processing time of each operation of job j is equal to pj; this variant of the flow shop problem is known as the proportionate flow shop problem. We show that for any number of machines and for any regular performance

  1. Eddy-current flow rate meter for measuring sodium flow rates

    International Nuclear Information System (INIS)

    Knaak, J.

    1976-01-01

    For safety reasons flow rate meters for monitoring coolant flow rates are inserted in the core of sodium-cooled fast breeder reactors. These are so-called eddy-current flow rate meters which can be mounted directly above the fuel elements. In the present contribution the principle of measurement, the mechanical construction and the circuit design of the flow rate measuring device are described. Special problems and their solution on developing the measuring system are pointed out. Finally, results of measurement and experience with the apparatus in several experiments are reported, where also further possibilities of application were tested. (orig./TK) [de

  2. Online traffic flow model applying dynamic flow-density relation

    International Nuclear Information System (INIS)

    Kim, Y.

    2002-01-01

traffic states by employing fuzzy logic and shock wave theory. The model is extended to also describe the propagation of congestion in motorway sections with ramps by considering the capacity reduction caused by the interaction between the mainstream traffic flow and the ramps. This research demonstrates the potential of macroscopic traffic flow models for application to online traffic control systems by applying the dynamic flow-density relation. The new modelling approach alleviates a critical problem of existing traffic flow models, namely the parameter calibration problem. (orig.)

  3. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    Science.gov (United States)

    Kuang, Simeng Max

    among all admissible affine maps. The procedure can be used on both continuous measures and finite sample sets from distributions. In numerical examples, the procedure is applied to multivariate normal distributions, to a two-dimensional shape transform problem and to color transfer problems. For the second topic, we present an extension to anisotropic flows of the recently developed Helmholtz and wave-vortex decomposition method for one-dimensional spectra measured along ship or aircraft tracks in Buhler et al. (J. Fluid Mech., vol. 756, 2014, pp. 1007-1026). While in the original method the flow was assumed to be homogeneous and isotropic in the horizontal plane, we allow the flow to have a simple kind of horizontal anisotropy that is chosen in a self-consistent manner and can be deduced from the one-dimensional power spectra of the horizontal velocity fields and their cross-correlation. The key result is that an exact and robust Helmholtz decomposition of the horizontal kinetic energy spectrum can be achieved in this anisotropic flow setting, which then also allows the subsequent wave-vortex decomposition step. The new method is developed theoretically and tested with encouraging results on challenging synthetic data as well as on ocean data from the Gulf Stream.

  4. Gender and Direction of Effect of Alcohol Problems and Internalizing Symptoms in a Longitudinal Sample of College Students.

    Science.gov (United States)

    Homman, Lina E; Edwards, Alexis C; Cho, Seung Bin; Dick, Danielle M; Kendler, Kenneth S

    2017-03-21

Alcohol problems and internalizing symptoms are consistently found to be associated, but how they relate to each other is unclear. The present study aimed to address limitations in the literature on the comorbidity of alcohol problems and internalizing symptoms by investigating the direction of effect between the phenotypes and possible gender differences in college students. We utilized data from a large longitudinal study of college students from the United States (N = 2607). Three waves of questionnaire-based data were collected over the first two years of college (in 2011-2013). Cross-lagged models were applied to examine the possible direction of effect of internalizing symptoms and alcohol problems. Possible effects of gender were investigated using multigroup modeling. There were significant correlations between alcohol problems and internalizing symptoms. A direction of effect was found between alcohol problems and internalizing symptoms, but it differed between genders. A unidirectional relationship varying with age was identified for males, where alcohol problems initially predicted internalizing symptoms, followed by internalizing symptoms predicting alcohol problems. For females, a unidirectional relationship existed wherein alcohol problems predicted internalizing symptoms. Conclusions/Importance: We conclude that the relationship between alcohol problems and internalizing symptoms is complex and differs between genders. In males, both phenotypes are predictive of each other, while in females the relationship is driven by alcohol problems. Importantly, our study examines a population-based sample, revealing that the observed relationships between alcohol problems and internalizing symptoms are not limited to individuals with clinically diagnosed mental health or substance use problems.

  5. Pengaruh Free Cash Flow Dan Kualitas Audit Terhadap Manajemen Laba

    Directory of Open Access Journals (Sweden)

    Dian Agustia

    2013-04-01

Full Text Available Asymmetric information refers to a situation where one party has more information than the other. Agency problems arise from asymmetric information in principal-agent contracts. In addition, several factors could affect earnings management, namely free cash flow and audit quality. The aim of this research is to provide empirical evidence about the impact of the free cash flow and audit quality variables on discretionary accruals, as a measure of earnings management, with company size as a control variable. This research used 103 manufacturing companies listed on the Indonesia Stock Exchange, selected using a purposive sampling method, over the research period 2007-2011. Data were analyzed using multiple regression. Based on the results of the analysis, it is concluded that the independent variable free cash flow has a negative and significant effect on earnings management, meaning that companies with high free cash flow restrict the practice of earnings management. Audit quality, in contrast, has no significant effect on earnings management.

  6. Traffic Management as a Service: The Traffic Flow Pattern Classification Problem

    Directory of Open Access Journals (Sweden)

    Carlos T. Calafate

    2015-01-01

    Full Text Available Intelligent Transportation System (ITS technologies can be implemented to reduce both fuel consumption and the associated emission of greenhouse gases. However, such systems require intelligent and effective route planning solutions to reduce travel time and promote stable traveling speeds. To achieve such goal these systems should account for both estimated and real-time traffic congestion states, but obtaining reliable traffic congestion estimations for all the streets/avenues in a city for the different times of the day, for every day in a year, is a complex task. Modeling such a tremendous amount of data can be time-consuming and, additionally, centralized computation of optimal routes based on such time-dependencies has very high data processing requirements. In this paper we approach this problem through a heuristic to considerably reduce the modeling effort while maintaining the benefits of time-dependent traffic congestion modeling. In particular, we propose grouping streets by taking into account real traces describing the daily traffic pattern. The effectiveness of this heuristic is assessed for the city of Valencia, Spain, and the results obtained show that it is possible to reduce the required number of daily traffic flow patterns by a factor of 4210 while maintaining the essence of time-dependent modeling requirements.

  7. Obtaining Samples Representative of Contaminant Distribution in an Aquifer

    International Nuclear Information System (INIS)

    Schalla, Ronald; Spane, Frank A.; Narbutovskih, Susan M.; Conley, Scott F.; Webber, William D.

    2002-01-01

    Historically, groundwater samples collected from monitoring wells have been assumed to provide average indications of contaminant concentrations within the aquifer over the well-screen interval. In-well flow circulation, heterogeneity in the surrounding aquifer, and the sampling method utilized, however, can significantly impact the representativeness of samples as contaminant indicators of actual conditions within the surrounding aquifer. This paper identifies the need and approaches essential for providing cost-effective and technically meaningful groundwater-monitoring results. Proper design of the well screen interval is critical. An accurate understanding of ambient (non-pumping) flow conditions within the monitoring well is essential for determining the contaminant distribution within the aquifer. The ambient in-well flow velocity, flow direction and volumetric flux rate are key to this understanding. Not only do the ambient flow conditions need to be identified for preferential flow zones, but also the probable changes that will be imposed under dynamic conditions that occur during groundwater sampling. Once the in-well flow conditions are understood, effective sampling can be conducted to obtain representative samples for specific depth zones or zones of interest. The question of sample representativeness has become an important issue as waste minimization techniques such as low flow purging and sampling are implemented to combat the increasing cost of well purging and sampling at many hazardous waste sites. Several technical approaches (e.g., well tracer techniques and flowmeter surveys) can be used to determine in-well flow conditions, and these are discussed with respect to both their usefulness and limitations. Proper fluid extraction methods using minimal, (low) volume and no purge sampling methods that are used to obtain representative samples of aquifer conditions are presented

  8. Electrokinetic control of sample splitting at a channel bifurcation using isotachophoresis

    International Nuclear Information System (INIS)

    Persat, Alexandre; Santiago, Juan G

    2009-01-01

We present a novel method for accurately splitting ionic samples at microchannel bifurcations. We leverage isotachophoresis (ITP) to focus and transport the sample through a one-inlet, two-outlet microchannel bifurcation. We actively control the proportion of splitting by controlling potentials at end-channel reservoirs (and thereby controlling the current ratio). We explore the effect of buffer chemistry and local electric field on splitting dynamics and propose and validate a simple Kirchhoff-type rule controlling the split ratio. We explore the effects of large applied electric fields on sample splitting and attribute a loss of splitting accuracy to electrohydrodynamic instabilities. We propose a scaling analysis to characterize the onset of this instability. This scaling is potentially useful for other electrokinetic flow problems with self-sharpening interfaces.
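
    The exact form of the Kirchhoff-type rule is not reproduced in the record, but the idea of steering the split through the branch currents can be illustrated with the assumed relation below, in which the fraction of focused sample delivered to each outlet simply follows the fraction of total ionic current that branch carries.

        # Assumed current-ratio splitting relation (illustrative, not the paper's calibrated rule).
        def split_fractions(i_branch_1, i_branch_2):
            total = i_branch_1 + i_branch_2
            return i_branch_1 / total, i_branch_2 / total

        f1, f2 = split_fractions(2.0e-6, 6.0e-6)           # assumed branch currents in amperes
        print(f"outlet 1: {f1:.2f}, outlet 2: {f2:.2f}")   # a 0.25 / 0.75 split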

  9. A separation theorem for the stochastic sampled-data LQG problem. [control of continuous linear plant disturbed by white noise

    Science.gov (United States)

    Halyo, N.; Caglayan, A. K.

    1976-01-01

    This paper considers the control of a continuous linear plant disturbed by white plant noise when the control is constrained to be a piecewise constant function of time; i.e. a stochastic sampled-data system. The cost function is the integral of quadratic error terms in the state and control, thus penalizing errors at every instant of time while the plant noise disturbs the system continuously. The problem is solved by reducing the constrained continuous problem to an unconstrained discrete one. It is shown that the separation principle for estimation and control still holds for this problem when the plant disturbance and measurement noise are Gaussian.
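
    The cost function is described above only in words; written out, a quadratic sampled-data cost of the kind the record refers to takes the generic form below, with a piecewise-constant control held between sampling instants. The weighting matrices Q and R and the sampling instants t_k are generic placeholders, not notation taken from the paper.

        \[
          J = \mathbb{E}\!\left[\int_{0}^{T}\big(x^{\mathsf T}(t)\,Q\,x(t) + u^{\mathsf T}(t)\,R\,u(t)\big)\,dt\right],
          \qquad u(t) = u_k \quad \text{for } t \in [t_k, t_{k+1}).
        \]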

  10. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    Science.gov (United States)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one
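
    A stripped-down version of the hit-and-run idea sketched above is given below for a near-optimal region defined by box bounds plus a single non-linear near-optimality constraint. The objective function, tolerance, and the rejection step used in place of slice sampling are illustrative assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(3)
        lower, upper = np.zeros(2), np.ones(2)                  # assumed box constraints
        f = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2     # assumed objective (optimal value 0)
        tolerance = 0.05                                        # near-optimal region: f(x) <= tolerance

        def in_region(x):
            return np.all(x >= lower) and np.all(x <= upper) and f(x) <= tolerance

        x = np.array([0.3, 0.7])                                # feasible starting hit point
        samples = []
        for _ in range(2000):
            d = rng.normal(size=2)
            d /= np.linalg.norm(d)                              # random direction on the unit sphere
            # Feasible interval along the line, from the box constraints alone.
            t_bounds = np.concatenate([(lower - x) / d, (upper - x) / d])
            t_lo, t_hi = t_bounds[t_bounds < 0].max(), t_bounds[t_bounds > 0].min()
            # Run a random distance; re-draw until the non-linear constraint also holds
            # (simple rejection in place of the slice sampling described above).
            for _ in range(100):
                candidate = x + rng.uniform(t_lo, t_hi) * d
                if in_region(candidate):
                    x = candidate
                    break
            samples.append(x.copy())

        print(len(samples), "alternatives spanning the near-optimal region")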

  11. Design and Use of a Full Flow Sampling System (FFS) for the Quantification of Methane Emissions.

    Science.gov (United States)

    Johnson, Derek R; Covington, April N; Clark, Nigel N

    2016-06-12

    The use of natural gas continues to grow with increased discovery and production of unconventional shale resources. At the same time, the natural gas industry faces continued scrutiny for methane emissions from across the supply chain, due to methane's relatively high global warming potential (25-84x that of carbon dioxide, according to the Energy Information Administration). Currently, a variety of techniques of varied uncertainties exists to measure or estimate methane emissions from components or facilities. Currently, only one commercial system is available for quantification of component level emissions and recent reports have highlighted its weaknesses. In order to improve accuracy and increase measurement flexibility, we have designed, developed, and implemented a novel full flow sampling system (FFS) for quantification of methane emissions and greenhouse gases based on transportation emissions measurement principles. The FFS is a modular system that consists of an explosive-proof blower(s), mass airflow sensor(s) (MAF), thermocouple, sample probe, constant volume sampling pump, laser based greenhouse gas sensor, data acquisition device, and analysis software. Dependent upon the blower and hose configuration employed, the current FFS is able to achieve a flow rate ranging from 40 to 1,500 standard cubic feet per minute (SCFM). Utilization of laser-based sensors mitigates interference from higher hydrocarbons (C2+). Co-measurement of water vapor allows for humidity correction. The system is portable, with multiple configurations for a variety of applications ranging from being carried by a person to being mounted in a hand drawn cart, on-road vehicle bed, or from the bed of utility terrain vehicles (UTVs). The FFS is able to quantify methane emission rates with a relative uncertainty of ± 4.4%. The FFS has proven, real world operation for the quantification of methane emissions occurring in conventional and remote facilities.
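
    The record gives the hardware description only; the core full-flow calculation, an emission rate formed from the total captured volumetric flow and the background-corrected methane fraction, can be sketched as below. The numerical values, molar volume, and the omission of the humidity correction are simplifying assumptions, not data from the paper.

        # Illustrative full-flow emission estimate (assumed values; humidity correction omitted).
        flow_scfm = 800.0            # total captured flow from the MAF sensor(s), standard cubic feet per minute
        ch4_sample = 0.012           # methane mole fraction measured in the captured stream
        ch4_background = 0.000002    # assumed ambient methane mole fraction (~2 ppm)

        litres_per_scf = 28.317      # litres per standard cubic foot
        molar_volume = 24.0          # litres per mole of ideal gas at the assumed standard conditions
        molar_mass_ch4 = 16.04       # grams per mole

        mol_per_min = flow_scfm * litres_per_scf * (ch4_sample - ch4_background) / molar_volume
        print(f"estimated emission rate: {mol_per_min * molar_mass_ch4:.1f} g CH4/min")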

  12. Design and Use of a Full Flow Sampling System (FFS) for the Quantification of Methane Emissions

    Science.gov (United States)

    Johnson, Derek R.; Covington, April N.; Clark, Nigel N.

    2016-01-01

    The use of natural gas continues to grow with increased discovery and production of unconventional shale resources. At the same time, the natural gas industry faces continued scrutiny for methane emissions from across the supply chain, due to methane's relatively high global warming potential (25-84x that of carbon dioxide, according to the Energy Information Administration). Currently, a variety of techniques of varied uncertainties exists to measure or estimate methane emissions from components or facilities. Currently, only one commercial system is available for quantification of component level emissions and recent reports have highlighted its weaknesses. In order to improve accuracy and increase measurement flexibility, we have designed, developed, and implemented a novel full flow sampling system (FFS) for quantification of methane emissions and greenhouse gases based on transportation emissions measurement principles. The FFS is a modular system that consists of an explosive-proof blower(s), mass airflow sensor(s) (MAF), thermocouple, sample probe, constant volume sampling pump, laser based greenhouse gas sensor, data acquisition device, and analysis software. Dependent upon the blower and hose configuration employed, the current FFS is able to achieve a flow rate ranging from 40 to 1,500 standard cubic feet per minute (SCFM). Utilization of laser-based sensors mitigates interference from higher hydrocarbons (C2+). Co-measurement of water vapor allows for humidity correction. The system is portable, with multiple configurations for a variety of applications ranging from being carried by a person to being mounted in a hand drawn cart, on-road vehicle bed, or from the bed of utility terrain vehicles (UTVs). The FFS is able to quantify methane emission rates with a relative uncertainty of ± 4.4%. The FFS has proven, real world operation for the quantification of methane emissions occurring in conventional and remote facilities. PMID:27341646

  13. Direct impact aerosol sampling by electrostatic precipitation

    Science.gov (United States)

    Braden, Jason D.; Harter, Andrew G.; Stinson, Brad J.; Sullivan, Nicholas M.

    2016-02-02

    The present disclosure provides apparatuses for collecting aerosol samples by ionizing an air sample at different degrees. An air flow is generated through a cavity in which at least one corona wire is disposed and electrically charged to form a corona therearound. At least one grounded sample collection plate is provided downstream of the at least one corona wire so that aerosol ions generated within the corona are deposited on the at least one grounded sample collection plate. A plurality of aerosol samples ionized to different degrees can be generated. The at least one corona wire may be perpendicular to the direction of the flow, or may be parallel to the direction of the flow. The apparatus can include a serial connection of a plurality of stages such that each stage is capable of generating at least one aerosol sample, and the air flow passes through the plurality of stages serially.

  14. Dynamics and rheology under continuous shear flow studied by x-ray photon correlation spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Fluerasu, Andrei [Brookhaven National Laboratory, NSLS-II, Upton, NY 11973 (United States); Kwasniewski, Pawel; Caronna, Chiara; Madsen, Anders [European Synchrotron Radiation Facility, ID10 (Troika), Grenoble 38043 (France); Destremaut, Fanny; Salmon, Jean-Baptiste [LOF, UMR 5258 CNRS-Rhodia Bordeaux 1, 33608 Pessac (France)], E-mail: fluerasu@bnl.gov

    2010-03-15

X-ray photon correlation spectroscopy (XPCS) has emerged as a unique technique allowing the measurement of dynamics of materials on mesoscopic lengthscales. One of the most common problems associated with the use of bright x-ray beams is beam-induced radiation damage, and this is likely to become an even more limiting factor at future synchrotron and free-electron laser sources. Flowing the sample during data acquisition is one of the simplest methods allowing the radiation damage to be limited. In addition to distributing the dose over many different scatterers, the method also enables new functionalities such as time-resolved studies. Here, we further develop a recently proposed experimental technique that combines XPCS and continuously flowing samples. More specifically, we use a model colloidal suspension to show how the macroscopic advective response to flow and the microscopic dissipative dynamics (diffusion) can be quantified from the x-ray data. Our results show very good quantitative agreement with a Poiseuille-flow hydrodynamical model combined with Brownian mechanics. The method has many potential applications, e.g. in the study of dynamics of glasses and gels under continuous shear/flow, protein aggregation processes and the interplay between dynamics and rheology in complex fluids.

  15. Assessing lateral flows and solute transport during floods in a conduit-flow-dominated karst system using the inverse problem for the advection-diffusion equation

    Science.gov (United States)

    Cholet, Cybèle; Charlier, Jean-Baptiste; Moussa, Roger; Steinmann, Marc; Denimal, Sophie

    2017-07-01

    The aim of this study is to present a framework that provides new ways to characterize the spatio-temporal variability of lateral exchanges for water flow and solute transport in a karst conduit network during flood events, treating both the diffusive wave equation and the advection-diffusion equation with the same mathematical approach, assuming uniform lateral flow and solute transport. A solution to the inverse problem for the advection-diffusion equations is then applied to data from two successive gauging stations to simulate flows and solute exchange dynamics after recharge. The study site is the karst conduit network of the Fourbanne aquifer in the French Jura Mountains, which includes two reaches characterizing the network from sinkhole to cave stream to the spring. The model is applied, after separation of the base from the flood components, on discharge and total dissolved solids (TDSs) in order to assess lateral flows and solute concentrations and compare them to help identify water origin. The results showed various lateral contributions in space - between the two reaches located in the unsaturated zone (R1), and in the zone that is both unsaturated and saturated (R2) - as well as in time, according to hydrological conditions. Globally, the two reaches show a distinct response to flood routing, with important lateral inflows on R1 and large outflows on R2. By combining these results with solute exchanges and the analysis of flood routing parameters distribution, we showed that lateral inflows on R1 are the addition of diffuse infiltration (observed whatever the hydrological conditions) and localized infiltration in the secondary conduit network (tributaries) in the unsaturated zone, except in extreme dry periods. On R2, despite inflows on the base component, lateral outflows are observed during floods. This pattern was attributed to the concept of reversal flows of conduit-matrix exchanges, inducing a complex water mixing effect in the saturated zone
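
    The governing equations are not reproduced in the record; in generic textbook form, a one-dimensional advection-diffusion balance with a uniform lateral exchange term, of the kind such an inverse approach operates on, can be written as below. The symbols (velocity u, dispersion coefficient D, cross-sectional area A, lateral discharge per unit length q_L and lateral solute concentration C_L) are generic placeholders; the authors' exact formulation may differ.

        \[
          \frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial x}
          = D\,\frac{\partial^{2} C}{\partial x^{2}} + \frac{q_L}{A}\,\left(C_L - C\right),
        \]

    with an analogous diffusive-wave equation for the discharge; the inverse problem then consists of inferring the lateral terms from the signals recorded at two successive gauging stations.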

  16. Assessing lateral flows and solute transport during floods in a conduit-flow-dominated karst system using the inverse problem for the advection–diffusion equation

    Directory of Open Access Journals (Sweden)

    C. Cholet

    2017-07-01

    Full Text Available The aim of this study is to present a framework that provides new ways to characterize the spatio-temporal variability of lateral exchanges for water flow and solute transport in a karst conduit network during flood events, treating both the diffusive wave equation and the advection–diffusion equation with the same mathematical approach, assuming uniform lateral flow and solute transport. A solution to the inverse problem for the advection–diffusion equations is then applied to data from two successive gauging stations to simulate flows and solute exchange dynamics after recharge. The study site is the karst conduit network of the Fourbanne aquifer in the French Jura Mountains, which includes two reaches characterizing the network from sinkhole to cave stream to the spring. The model is applied, after separation of the base from the flood components, on discharge and total dissolved solids (TDSs in order to assess lateral flows and solute concentrations and compare them to help identify water origin. The results showed various lateral contributions in space – between the two reaches located in the unsaturated zone (R1, and in the zone that is both unsaturated and saturated (R2 – as well as in time, according to hydrological conditions. Globally, the two reaches show a distinct response to flood routing, with important lateral inflows on R1 and large outflows on R2. By combining these results with solute exchanges and the analysis of flood routing parameters distribution, we showed that lateral inflows on R1 are the addition of diffuse infiltration (observed whatever the hydrological conditions and localized infiltration in the secondary conduit network (tributaries in the unsaturated zone, except in extreme dry periods. On R2, despite inflows on the base component, lateral outflows are observed during floods. This pattern was attributed to the concept of reversal flows of conduit–matrix exchanges, inducing a complex water mixing effect

  17. Effect of flow conditions on flow accelerated corrosion in pipe bends

    International Nuclear Information System (INIS)

    Mazhar, H.; Ching, C.Y.

    2015-01-01

    Flow Accelerated Corrosion (FAC) in piping systems is a safety and reliability problem in the nuclear industry. In this study, the pipe wall thinning rates and development of surface roughness in pipe bends are compared for single phase and two phase annular flow conditions. The FAC rates were measured using the dissolution of test sections cast from gypsum in water with a Schmidt number of 1280. The change in location and levels of maximum FAC under single phase and two phase flow conditions are examined. The comparison of the relative roughness indicates a higher effect for the surface roughness in single phase flow than in two phase flow. (author)

  18. Target problem (mis) matching: predictors and consequences of parent-youth agreement in a sample of anxious youth.

    Science.gov (United States)

    Hoffman, Lauren J; Chu, Brian C

    2015-04-01

    Parents and youth often report discrepant target problems upon seeking treatment for youth psychopathology, which can have important impact on therapy processes (e.g., dropout) and treatment outcomes, as entry-level attitudes have been found to be influential in ultimate use and benefit of treatment. The current study examined parent-youth agreement within an anxiety disordered sample by assessing demographic and diagnostic factors that may predict matching, as well as the impact of matching on attrition, treatment outcome, and parental satisfaction. Ninety-five youth with principal anxiety disorders received cognitive-behavioral treatment for anxiety at a university outpatient clinic. Youth and parents independently identified target problems during the pretreatment assessment. Target problems were coded into 25 qualitative categories representing diagnostic, symptom, and functional impairment domains, including diffuse anxiety, social anxiety, academic achievement, oppositional/behavior problems, sleep problems, suicidal ideation, and family functioning. The majority of parent-youth dyads (67.4%) agreed on at least one target problem. Although problems related to diffuse anxiety and social anxiety were reported most frequently, relatively low rates of agreement were found in these domains. Kappa values demonstrated higher levels of agreement for problems with specific fears, school attendance, and panic and lower levels of agreement for difficulties with worry, shame, and self-esteem. Further, youth diagnosed with comorbid externalizing disorders were less likely to agree with their parents on at least one target problem. No effects were found for gender, age, or number of diagnoses in predicting agreement. Target problem agreement did not significantly impact rates of attrition or diagnostic remission, but did predict some measures of parental satisfaction. Results suggest that disagreement on treatment goals exists even within a narrow treatment population and

  19. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    Science.gov (United States)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

The resource-constrained project scheduling problem (RCPSP) is an important class of scheduling problem. To achieve a given optimization goal, such as the shortest duration, the smallest cost, or resource balance, the start and finish of all tasks must be arranged while satisfying the project's timing and resource constraints. In theory, the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems, such as job shop scheduling and flow shop scheduling, are special cases of the RCPSP. The genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results, and many researchers have also studied improved genetic algorithms that solve the RCPSP more efficiently and accurately. However, these studies do not optimize the main parameters of the genetic algorithm; the parameters are generally chosen empirically, which cannot guarantee that they are optimal. In this paper, we address this blind selection of parameters in the process of solving the RCPSP: we carried out a sampling analysis, established a proxy (surrogate) model, and ultimately solved for the optimal parameters.

  20. Accelerated solution of non-linear flow problems using Chebyshev iteration polynomial based RK recursions

    Energy Technology Data Exchange (ETDEWEB)

    Lorber, A.A.; Carey, G.F.; Bova, S.W.; Harle, C.H. [Univ. of Texas, Austin, TX (United States)

    1996-12-31

The connection between the solution of linear systems of equations by iterative methods and explicit time stepping techniques is used to accelerate to steady state the solution of ODE systems arising from discretized PDEs which may involve either physical or artificial transient terms. Specifically, a class of Runge-Kutta (RK) time integration schemes with extended stability domains has been used to develop recursion formulas which lead to accelerated iterative performance. The coefficients for the RK schemes are chosen based on the theory of Chebyshev iteration polynomials in conjunction with a local linear stability analysis. We refer to these schemes as Chebyshev Parameterized Runge Kutta (CPRK) methods. CPRK methods of one to four stages are derived as functions of the parameters which describe an ellipse ε which the stability domain of the methods is known to contain. Of particular interest are two-stage, first-order CPRK and four-stage, first-order methods. It is found that the former method can be identified with any two-stage RK method through the correct choice of parameters. The latter method is found to have a wide range of stability domains, with a maximum extension of 32 along the real axis. Recursion performance results are presented below for a model linear convection-diffusion problem as well as non-linear fluid flow problems discretized by both finite-difference and finite-element methods.

  1. Extended forward sensitivity analysis of one-dimensional isothermal flow

    International Nuclear Information System (INIS)

    Johnson, M.; Zhao, H.

    2013-01-01

    Sensitivity analysis and uncertainty quantification is an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of time step with other physical parameters, the simulation is allowed to run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification with much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
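
    The forward sensitivity equations themselves are not written out in the record; for a generic ODE system obtained from discretizing the 1-D flow equations, they take the standard form below, in which the sensitivity of the state with respect to an input parameter is advanced alongside the state (the time-step sensitivity discussed above requires an additional, analogous treatment). The symbols are generic and not taken from the paper.

        \[
          \frac{d\mathbf{u}}{dt} = \mathbf{f}(\mathbf{u}, p), \qquad
          \mathbf{s} \equiv \frac{\partial \mathbf{u}}{\partial p}, \qquad
          \frac{d\mathbf{s}}{dt} = \frac{\partial \mathbf{f}}{\partial \mathbf{u}}\,\mathbf{s}
          + \frac{\partial \mathbf{f}}{\partial p}.
        \]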

  2. Forecasting freight flows

    DEFF Research Database (Denmark)

    Lyk-Jensen, Stéphanie

    2011-01-01

Trade patterns and transport markets are changing as a result of the growth and globalization of international trade, and forecasting future freight flow has to rely on trade forecasts. Forecasting freight flows is critical for matching infrastructure supply to demand and for assessing investment... constitute a valuable input to freight models for forecasting future capacity problems.

  3. Parallel Computation of Unsteady Flows on a Network of Workstations

    Science.gov (United States)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. The utilization of a network of workstations seems an efficient solution to the problem where large problems can be treated at a reasonable cost. This approach requires the solution of several problems: 1) the partitioning and distribution of the problem over a network of workstation, 2) efficient communication tools, 3) managing the system efficiently for a given problem. Of course, there is the question of the efficiency of any given numerical algorithm to such a computing system. NPARC code was chosen as a sample for the application. For the explicit version of the NPARC code both two- and three-dimensional problems were studied. Again both steady and unsteady problems were investigated. The issues studied as a part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and how to communicate at each node efficiently, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.

  4. Pulsatile flow in ventricular catheters for hydrocephalus

    Science.gov (United States)

    Giménez, Á.; Galarza, M.; Thomale, U.; Schuhmann, M. U.; Valero, J.; Amigó, J. M.

    2017-05-01

    The obstruction of ventricular catheters (VCs) is a major problem in the standard treatment of hydrocephalus, the flow pattern of the cerebrospinal fluid (CSF) being one important factor thereof. As a first approach to this problem, some of the authors studied previously the CSF flow through VCs under time-independent boundary conditions by means of computational fluid dynamics in three-dimensional models. This allowed us to derive a few basic principles which led to designs with improved flow patterns regarding the obstruction problem. However, the flow of the CSF has actually a pulsatile nature because of the heart beating and blood flow. To address this fact, here we extend our previous computational study to models with oscillatory boundary conditions. The new results will be compared with the results for constant flows and discussed. It turns out that the corrections due to the pulsatility of the CSF are quantitatively small, which reinforces our previous findings and conclusions. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.

  5. Hypersonic flow past slender bodies in dispersive hydrodynamics

    International Nuclear Information System (INIS)

    El, G.A.; Khodorovskii, V.V.; Tyurina, A.V.

    2004-01-01

    The problem of two-dimensional steady hypersonic flow past a slender body is formulated for dispersive media. It is shown that for the hypersonic flow, the original 2+0 boundary-value problem is asymptotically equivalent to the 1+1 piston problem for the fully nonlinear flow in the same physical system, which allows one to take advantage of the analytic methods developed for one-dimensional systems. This type of equivalence, well known in ideal Euler gas dynamics, has not been established for dispersive hydrodynamics so far. Two examples pertaining to collisionless plasma dynamics are considered

  6. Fluid flow and heat transfer in rotating porous media

    CERN Document Server

    Vadasz, Peter

    2016-01-01

This book concentrates the available knowledge on rotating fluid flow and heat transfer in porous media in a single reference. Dr. Vadasz develops the fundamental theory of rotating flow and heat transfer in porous media and introduces a systematic classification and identification of the relevant problems. An initial distinction between rotating flows in isothermal heterogeneous porous systems and natural convection in homogeneous non-isothermal porous systems provides the two major classes of problems to be considered. A few examples of solutions to selected problems are presented, highlighting the significant impact of rotation on flow in porous media.

  7. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
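
    The mixture-of-double-truncated-normals result is specific to the paper, but its basic building block, drawing from a normal distribution truncated to an interval, can be sketched with SciPy as below. The mean, standard deviation, and truncation limits are illustrative assumptions.

        import numpy as np
        from scipy.stats import truncnorm

        mu, sigma, lower, upper = 1.0, 0.5, 0.0, 1.5    # assumed component parameters

        # SciPy expresses the truncation limits in standard-normal units.
        a, b = (lower - mu) / sigma, (upper - mu) / sigma
        draws = truncnorm.rvs(a, b, loc=mu, scale=sigma, size=10_000, random_state=7)

        print(draws.min(), draws.max())   # every draw falls inside [lower, upper]
        print(draws.mean())               # pulled below mu by the asymmetric truncation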

  8. Simultaneous Determination of Iron, Copper and Cobalt in Food Samples by CCD-diode Array Detection-Flow Injection Analysis with Partial Least Squares Calibration Model

    International Nuclear Information System (INIS)

    Mi Jiaping; Li Yuanqian; Zhou Xiaoli; Zheng Bo; Zhou Ying

    2006-01-01

    A flow injection-CCD diode array detection spectrophotometry with partial least squares (PLS) program for simultaneous determination of iron, copper and cobalt in food samples has been established. The method was based on the chromogenic reaction of the three metal ions and 2- (5-Bromo-2-pyridylazo)-5-diethylaminophenol, 5-Br-PADAP in acetic acid - sodium acetate buffer solution (pH5) with Triton X-100 and ascorbic acid. The overlapped spectra of the colored complexes were collected by charge-coupled device (CCD) - diode array detector and the multi-wavelength absorbance data was processed using partial least squares (PLS) algorithm. Optimum reaction conditions and parameters of flow injection analysis were investigated. The samples of tea, sesame, laver, millet, cornmeal, mung bean and soybean powder were determined by the proposed method. The average recoveries of spiked samples were 91.80%∼100.9% for Iron, 92.50%∼108.0% for Copper, 93.00%∼110.5% for Cobalt, respectively with relative standard deviation (R.S.D) of 1.1%∼12.1%. The sampling rate is 45 samples h -1 . The determination results of the food samples were in good agreement between the proposed method and ICP-AES

  9. Simultaneous Determination of Iron, Copper and Cobalt in Food Samples by CCD-diode Array Detection-Flow Injection Analysis with Partial Least Squares Calibration Model

    Energy Technology Data Exchange (ETDEWEB)

    Mi Jiaping; Li Yuanqian; Zhou Xiaoli; Zheng Bo; Zhou Ying [West China School of Public Health, Sichuan University, Chengdu, 610041 (China)

    2006-01-01

A flow injection-CCD diode array detection spectrophotometry with partial least squares (PLS) program for simultaneous determination of iron, copper and cobalt in food samples has been established. The method was based on the chromogenic reaction of the three metal ions and 2- (5-Bromo-2-pyridylazo)-5-diethylaminophenol, 5-Br-PADAP in acetic acid - sodium acetate buffer solution (pH5) with Triton X-100 and ascorbic acid. The overlapped spectra of the colored complexes were collected by charge-coupled device (CCD) - diode array detector and the multi-wavelength absorbance data was processed using partial least squares (PLS) algorithm. Optimum reaction conditions and parameters of flow injection analysis were investigated. The samples of tea, sesame, laver, millet, cornmeal, mung bean and soybean powder were determined by the proposed method. The average recoveries of spiked samples were 91.80%~100.9% for Iron, 92.50%~108.0% for Copper, 93.00%~110.5% for Cobalt, respectively with relative standard deviation (R.S.D) of 1.1%~12.1%. The sampling rate is 45 samples h⁻¹. The determination results of the food samples were in good agreement between the proposed method and ICP-AES.

  10. Simultaneous Determination of Iron, Copper and Cobalt in Food Samples by CCD-diode Array Detection-Flow Injection Analysis with Partial Least Squares Calibration Model

    Science.gov (United States)

    Mi, Jiaping; Li, Yuanqian; Zhou, Xiaoli; Zheng, Bo; Zhou, Ying

    2006-01-01

    A flow injection-CCD diode array detection spectrophotometry with partial least squares (PLS) program for simultaneous determination of iron, copper and cobalt in food samples has been established. The method was based on the chromogenic reaction of the three metal ions and 2- (5-Bromo-2-pyridylazo)-5-diethylaminophenol, 5-Br-PADAP in acetic acid - sodium acetate buffer solution (pH5) with Triton X-100 and ascorbic acid. The overlapped spectra of the colored complexes were collected by charge-coupled device (CCD) - diode array detector and the multi-wavelength absorbance data was processed using partial least squares (PLS) algorithm. Optimum reaction conditions and parameters of flow injection analysis were investigated. The samples of tea, sesame, laver, millet, cornmeal, mung bean and soybean powder were determined by the proposed method. The average recoveries of spiked samples were 91.80%~100.9% for Iron, 92.50%~108.0% for Copper, 93.00%~110.5% for Cobalt, respectively with relative standard deviation (R.S.D) of 1.1%~12.1%. The sampling rate is 45 samples h-1. The determination results of the food samples were in good agreement between the proposed method and ICP-AES.

  11. Application of NEA/CSNI standard problem 3 (blowdown and flow reversal in the IETA-1 rig) to the validation of the RELAP-UK Mk IV code

    International Nuclear Information System (INIS)

    Bryce, W.M.

    1977-10-01

    NEA/CSNI Standard Problem 3 consists of the modelling of an experiment on the IETI-1 rig, in which there is initially flow upwards through a feeder, heated section and riser. The inlet and outlet are then closed and a breach opened at the bottom so that the flow reverses and the rig depressurises. Calculations of this problem by many countries using several computer codes have been reported and show a wide spread of results. The purpose of the study reported here was threefold: first, to show the sensitivity of the calculation of Standard Problem 3; second, to perform an ab initio best-estimate calculation using the RELAP-UK Mark IV code with the standard recommended options; and third, to use the results of the sensitivity study to show where tuning of the RELAP-UK Mark IV recommended model options was required. This study has shown that the calculation of Standard Problem 3 is sensitive to model assumptions and that the loss-of-coolant accident code RELAP-UK Mk IV with the standard recommended model options predicts the experimental results very well over most of the transient. (U.K.)

  12. Measurement of cerebral blood flow the blood sampling method using 99mTc-ECD. Simultaneous scintigram scanning of arterial blood samples and the brain with a gamma camera

    International Nuclear Information System (INIS)

    Hachiya, Takenori; Inugami, Atsushi; Iida, Hidehiro; Mizuta, Yoshihiko; Kawakami, Takeshi; Inoue, Minoru

    1999-01-01

    To measure regional cerebral blood flow (rCBF) by blood sampling using 99mTc-ECD, we devised a method of measuring the radioactive concentration in an arterial blood sample with a gamma camera. In this method the head and a blood sample are placed within the same visual field so that the SPECT data of both specimens are recorded simultaneously. An evaluation of the counting-rate performance, applying the 30-hour decay method with a 99mTc solution, showed that although this method is not comparable to a well-type scintillation counter, in clinical cases the radioactive concentration in the arterial blood sample remained well within the dynamic range. In addition, examination of the influence of scattered radiation from the brain by the dilution method showed that it was negligible at distances of more than 7.5 cm between the brain and the arterial blood sample. In the present study we placed a head-shaped phantom next to the sample. The results of these examinations suggest that the method is suitable for clinical application, and because it does not require a well-type scintillation counter, it is expected to find wide application. (author)

  13. The Liner Shipping Fleet Repositioning Problem with Cargo Flows

    DEFF Research Database (Denmark)

    Tierney, Kevin; Jensen, Rune Møller

    2012-01-01

    We solve an important problem for the liner shipping industry called the Liner Shipping Fleet Repositioning Problem (LSFRP). The LSFRP poses a large financial burden on liner shipping firms. During repositioning, vessels are moved between services in a liner shipping network. Shippers wish...

  14. Development and operation of an integrated sampling probe and gas analyzer for turbulent mixing studies in complex supersonic flows

    Science.gov (United States)

    Wiswall, John D.

    For many aerospace applications, mixing enhancement between co-flowing streams has been identified as a critical and enabling technology. Due to short fuel residence times in scramjet combustors, combustion is limited by the molecular mixing of hydrogen (fuel) and air. Determining the mixedness of fuel and air in these complex supersonic flowfields is critical to the advancement of novel injection schemes currently being developed at UTA in collaboration with NASA Langley and intended to be used on a future two-stage-to-orbit (~Mach 16) hypersonic air-breathing vehicle for space access. Expanding on previous work, an instrument has been designed, fabricated, and tested in order to measure mean concentrations of injected helium (a passive scalar used instead of hazardous hydrogen) and to quantitatively characterize the nature of the high-frequency concentration fluctuations encountered in the compressible, turbulent, and high-speed (up to Mach 3.5) complex flows associated with the new supersonic injection schemes. This important high-frequency data is not yet attainable with other techniques such as laser-induced fluorescence, filtered Rayleigh scattering or mass spectrometry in the same complex supersonic flows. The probe operates by exploiting the difference between the thermodynamic properties of two species through independent mass-flow measurements and calibration. The probe samples isokinetically from the flowfield's area of interest, and the helium concentration may be uniquely determined by hot-film anemometry and internally measured stagnation conditions. The final design has a diameter of 0.25" and is only 2.22" long. The overall accuracy of the probe is 3% in molar fraction of helium. The frequency response of mean concentration measurements is estimated at 103 Hz, while high-frequency hot-film measurements were conducted at 60 kHz. Additionally, the work presents an analysis of the probe's internal mixing effects and the effects of the spatial

  15. Criteria for the reliability of numerical approximations to the solution of fluid flow problems

    International Nuclear Information System (INIS)

    Foias, C.

    1986-01-01

    The numerical approximation of the solutions of fluid flow models is a difficult problem in many cases of energy research. In all numerical methods implementable on digital computers, a basic question is whether the number N of elements (Galerkin modes, finite-difference cells, finite elements, etc.) is sufficient to describe the long-time behavior of the exact solutions. It was shown, using several approaches, that some of the estimates of N based on physical intuition are rigorously valid under very general conditions and follow directly from the mathematical theory of the Navier-Stokes equations. Among the mathematical approaches to these estimates, the most promising (which can be and has already been applied to many other dissipative partial differential systems) consists in giving upper estimates for the fractal dimension of the attractor associated with one (or all) solution(s) of the respective partial differential equations. 56 refs

  16. Vibronic Boson Sampling: Generalized Gaussian Boson Sampling for Molecular Vibronic Spectra at Finite Temperature.

    Science.gov (United States)

    Huh, Joonsuk; Yung, Man-Hong

    2017-08-07

    Molecular vibronic spectroscopy, where the transitions involve non-trivial bosonic correlations due to the Duschinsky rotation, is strongly believed to be in a similar complexity class as Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effects is intimately related to the various versions of Boson Sampling that share a similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.

  17. Improvements on digital inline holographic PTV for 3D wall-bounded turbulent flow measurements

    International Nuclear Information System (INIS)

    Toloui, Mostafa; Mallery, Kevin; Hong, Jiarong

    2017-01-01

    Three-dimensional (3D) particle image velocimetry (PIV) and particle tracking velocimetry (PTV) provide the most comprehensive flow information for unraveling the physical phenomena in a wide range of fluid problems, from microfluidics to wall-bounded turbulent flows. Compared with other 3D PIV techniques, such as tomographic PIV and defocusing PIV, digital inline holographic PTV (DIH-PTV) provides a 3D flow measurement solution with high spatial resolution, a low-cost optical setup, and easy alignment and calibration. Despite these advantages, DIH-PTV suffers from major limitations including poor longitudinal resolution, human intervention (i.e. the requirement for manually determined tuning parameters during tracer field reconstruction and extraction), limited tracer concentration, small sampling volume and expensive computations, limiting its broad use for 3D flow measurements. In this study, we present our latest developments on minimizing these challenges, which enable high-fidelity DIH-PTV implementation in larger sampling volumes with significantly higher particle seeding densities suitable for wall-bounded turbulent flow measurements. The improvements include: (1) adjustable window thresholding; (2) multi-pass 3D tracking; (3) automatic wall localization; and (4) continuity-based out-of-plane velocity component computation. The accuracy of the proposed DIH-PTV method is validated with conventional 2D PIV and double-view holographic PTV measurements in smooth-wall turbulent channel flow experiments. The capability of the technique in the characterization of wall-bounded turbulence is further demonstrated through its application to flow measurements for smooth- and rough-wall turbulent channel flows. In these experiments, 3D velocity fields are measured within sampling volumes of 14.7 × 50.0 × 14.4 mm3 (covering the entire depth of the channel) with a velocity resolution of <1.1 mm/vector. Overall, the presented DIH-PTV method and

  18. Isotachophoresis system having larger-diameter channels flowing into channels with reduced diameter and with selectable counter-flow

    Energy Technology Data Exchange (ETDEWEB)

    Mariella, Jr., Raymond P.

    2018-03-06

    An isotachophoresis system for separating a sample containing particles into discrete packets including a flow channel, the flow channel having a large diameter section and a small diameter section; a negative electrode operably connected to the flow channel; a positive electrode operably connected to the flow channel; a leading carrier fluid in the flow channel; a trailing carrier fluid in the flow channel; and a control for separating the particles in the sample into discrete packets using the leading carrier fluid, the trailing carrier fluid, the large diameter section, and the small diameter section.

  19. Flow-Induced Vibration of Circular Cylindrical Structures

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Shoei-Sheng [Argonne National Lab. (ANL), Argonne, IL (United States). Components Technology Division

    1985-06-01

    Flow-induced vibration is a term to denote those phenomena associated with the response of structures placed in or conveying fluid flow. More specifically, the term covers those cases in which an interaction develops between fluid-dynamic forces and the inertia, damping or elastic forces in the structures. The study of these phenomena draws on three disciplines: (1) structural mechanics, (2) mechanical vibration, and (3) fluid dynamics. The vibration of circular cylinders subject to flow has been known to man since ancient times; the vibration of a wire at its natural frequency in response to vortex shedding was known in ancient Greece as aeolian tones. But systematic studies of the problem were not made until a century ago, when Strouhal established the relationship between vortex shedding frequency and flow velocity for a given cylinder diameter. The early research in this area has been summarized by Zdravkovich (1985) and Goldstein (1965). Flow-induced structural vibration has been experienced in numerous fields, including the aerospace industry, power generation/transmission (turbine blades, heat exchanger tubes, nuclear reactor components), civil engineering (bridges, buildings, smoke stacks), and undersea technology. The problems have usually been encountered or created accidentally through improper design. In most cases, a structural or mechanical component, designed to meet specific objectives, develops problems when the undesired effects of the flow field have not been accounted for in the design. When a flow-induced vibration problem is noted in the design stage, the engineer has different options to eliminate the detrimental vibration. Unfortunately, in many situations, the problems occur after the components are already in operation; the "fix" usually is very costly. Flow-induced vibration comprises complex and diverse phenomena; subcritical vibration of nuclear fuel assemblies, galloping of transmission lines, flutter of pipes conveying fluid, and whirling

  20. Numerical methods for limit problems in two-phase flow models

    International Nuclear Information System (INIS)

    Cordier, F.

    2011-01-01

    Numerical difficulties are encountered during the simulation of two-phase flows. Two issues are studied in this thesis: the simulation of phase transitions on the one hand, and the simulation of both compressible and incompressible flows on the other hand. An asymptotic study has shown that the loss of hyperbolicity of the bi-fluid model was responsible for the difficulties encountered by the Roe scheme during the simulation of phase transitions. Robust and accurate polynomial schemes have thus been developed. To tackle the occasional lack of positivity of the solution, a numerical treatment based on adaptive diffusion was proposed and allowed the test cases of a boiling channel with creation of vapor and a tee-junction with separation of the phases to be simulated accurately. In a second part, an all-speed scheme for compressible and incompressible flows has been proposed. This pressure-based, semi-implicit, asymptotic-preserving scheme is conservative, solves an elliptic equation for the pressure, and has been designed for general equations of state. The scheme was first developed for the full Euler equations and then extended to the Navier-Stokes equations. The good behaviour of the scheme in both compressible and incompressible regimes has been investigated. An extension of the scheme to the two-phase mixture model was implemented and demonstrated the ability of the scheme to simulate two-phase flows with phase change and a water-steam equation of state. (author)

  1. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
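
    As a concrete illustration of the kind of calculation this book covers, the sketch below computes the classical sample size for estimating a mean to within a specified margin of error, with a finite-population correction. The formulas are the standard textbook ones and are not taken from the book itself; the numeric inputs are hypothetical.

      # Standard sample-size formulas for estimating a mean (illustrative values, not from the book).
      from math import ceil
      from scipy.stats import norm

      def n_for_mean(sigma, margin, confidence=0.95):
          """n >= (z * sigma / E)^2 for estimating a mean to within +/- margin."""
          z = norm.ppf(0.5 + confidence / 2.0)
          return ceil((z * sigma / margin) ** 2)

      def finite_population_correction(n0, N):
          """Adjust an infinite-population sample size n0 for a finite population of size N."""
          return ceil(n0 / (1.0 + (n0 - 1.0) / N))

      n0 = n_for_mean(sigma=15.0, margin=2.0, confidence=0.95)
      print("required n (infinite population):", n0)                 # about 217
      print("required n (population of 1000):", finite_population_correction(n0, 1000))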

  2. Acceleration methods for multi-physics compressible flow

    Science.gov (United States)

    Peles, Oren; Turkel, Eli

    2018-04-01

    In this work we investigate the Runge-Kutta (RK)/Implicit smoother scheme as a convergence accelerator for complex multi-physics flow problems including turbulent, reactive and also two-phase flows. The flows considered are subsonic, transonic and supersonic flows in complex geometries, and can be either steady or unsteady. All of these problems are considered to be very stiff. We then introduce an acceleration method for the compressible Navier-Stokes equations. We start with the multigrid method for pure subsonic flow, including reactive flows. We then add the Rossow-Swanson-Turkel RK/Implicit smoother that enables performing all these complex flow simulations with a reasonable CFL number. We next discuss the RK/Implicit smoother for time-dependent problems and also for low Mach numbers. The preconditioner includes an intrinsic low-Mach-number treatment inside the smoother operator. We also develop a modified Roe scheme with a corresponding flux Jacobian matrix. We then give the extension of the method to real gas and reactive flow. Reactive flows are governed by a system of inhomogeneous Navier-Stokes equations with very stiff source terms. The extension of the RK/Implicit smoother requires an approximation of the source term Jacobian. The properties of the Jacobian are very important for the stability of the method. We discuss what the chemical physics theory of chemical kinetics tells us about the mathematical properties of the Jacobian matrix. We focus on the implication of Le Chatelier's principle for the sign of the diagonal entries of the Jacobian. We present the implementation of the method for turbulent flow. We use two RANS turbulence models: the one-equation Spalart-Allmaras model and the two-equation k-ω SST model. The last extension is for two-phase flows with a gas as the main phase and an Eulerian representation of a dispersed particle phase (EDP). We present some examples of such flow computations inside a ballistic evaluation

  3. Physics of zonal flows

    International Nuclear Information System (INIS)

    Itoh, K.; Fujisawa, A.; Itoh, S.-I.; Yagi, M.; Nagashima, Y.; Diamond, P.H.; Tynan, G.R.; Hahm, T.S.

    2006-01-01

    Zonal flows, which are azimuthally symmetric band-like shear flows, are ubiquitous phenomena in nature and the laboratory. It is now widely recognized that zonal flows are a key constituent in virtually all cases and regimes of drift wave turbulence, indeed, so much so that this classic problem is now frequently referred to as ''drift wave-zonal flow turbulence.'' In this review, new viewpoints and unifying concepts are presented, which facilitate understanding of zonal flow physics via theory, computation and their confrontation with the results of laboratory experiment. Special emphasis is placed on identifying avenues for further progress. (author)

  4. Physics of zonal flows

    International Nuclear Information System (INIS)

    Itoh, K.; Itoh, S.-I.; Diamond, P.H.; Hahm, T.S.; Fujisawa, A.; Tynan, G.R.; Yagi, M.; Nagashima, Y.

    2006-01-01

    Zonal flows, which are azimuthally symmetric band-like shear flows, are ubiquitous phenomena in nature and the laboratory. It is now widely recognized that zonal flows are a key constituent in virtually all cases and regimes of drift wave turbulence, indeed, so much so that this classic problem is now frequently referred to as 'drift wave-zonal flow turbulence'. In this review, new viewpoints and unifying concepts are presented, which facilitate understanding of zonal flow physics via theory, computation and their confrontation with the results of laboratory experiment. Special emphasis is placed on identifying avenues for further progress

  5. Mixed finite element simulations in two-dimensional groundwater flow problems

    International Nuclear Information System (INIS)

    Kimura, Hideo

    1989-01-01

    A computer code for groundwater flow in two-dimensional porous media based on the mixed finite element method was developed for the accurate approximation of Darcy velocities in the safety evaluation of radioactive waste disposal. The mixed finite element procedure solves for both the Darcy velocities and the pressure heads simultaneously in the Darcy equation and the continuity equation. Numerical results for a single well pumping at a constant rate in a uniform flow field showed that the mixed finite element method gives Darcy velocities with nearly 50% lower average error than the standard finite element method. (author)

  6. A quasilinear model for solute transport under unsaturated flow

    International Nuclear Information System (INIS)

    Houseworth, J.E.; Leem, J.

    2009-01-01

    We developed an analytical solution for solute transport under steady-state, two-dimensional, unsaturated flow and transport conditions for the investigation of high-level radioactive waste disposal. The two-dimensional, unsaturated flow problem is treated using the quasilinear flow method for a system with homogeneous material properties. Dispersion is modeled as isotropic and is proportional to the effective hydraulic conductivity. This leads to a quasilinear form for the transport problem in terms of a scalar potential that is analogous to the Kirchhoff potential for quasilinear flow. The solutions for both flow and transport scalar potentials take the form of Fourier series. The particular solution given here is for two sources of flow, with one source containing a dissolved solute. The solution method may easily be extended, however, for any combination of flow and solute sources under steady-state conditions. The analytical results for multidimensional solute transport problems, which previously could only be solved numerically, also offer an additional way to benchmark numerical solutions. An analytical solution for two-dimensional, steady-state solute transport under unsaturated flow conditions is presented. A specific case with two sources is solved but may be generalized to any combination of sources. The analytical results complement numerical solutions, which were previously required to solve this class of problems.

  7. A micro flow cytometry system for study of marine phytoplankton from coastal waters of Hong Kong

    KAUST Repository

    Yunyang Ling,

    2010-01-01

    Although conventional flow cytometers (CFCs) have been widely used for the study of marine biology, most CFCs are too bulky to be used for field studies in the ocean and have corrosion problems due to salty samples. A new computer-controlled micro flow cytometer (MFC) system has been successfully developed using MEMS technology. We demonstrate that this new MFC can analyze a mixture of two species of marine phytoplankton: Chlorella autotrophica and Rhodomonas. The results from our MFC are consistent with those from digital fluorescence microscopy. ©2010 IEEE.

  8. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    Science.gov (United States)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
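
    The block elimination described in this record can be illustrated with a small dense sketch: the unknowns are split into hypothetical subdomain and interface sets, the Schur complement couples the subdomains through the interface, and back-substitution recovers the subdomain unknowns. A real implementation would of course be sparse, approximate and parallel; this is only a toy version of the exact factorization.

      # Toy Schur-complement elimination on a 2x2 block system (dense, illustrative only).
      # A = [[A_II, A_IG], [A_GI, A_GG]] with I = subdomain unknowns, G = interface unknowns.
      import numpy as np

      rng = np.random.default_rng(1)
      n_sub, n_int = 8, 3
      A = rng.random((n_sub + n_int, n_sub + n_int)) + np.eye(n_sub + n_int) * (n_sub + n_int)
      b = rng.random(n_sub + n_int)

      A_II, A_IG = A[:n_sub, :n_sub], A[:n_sub, n_sub:]
      A_GI, A_GG = A[n_sub:, :n_sub], A[n_sub:, n_sub:]
      b_I, b_G = b[:n_sub], b[n_sub:]

      # Schur complement S = A_GG - A_GI * A_II^{-1} * A_IG couples the subdomains through
      # the interface; solve for the interface unknowns first, then back-substitute.
      S = A_GG - A_GI @ np.linalg.solve(A_II, A_IG)
      x_G = np.linalg.solve(S, b_G - A_GI @ np.linalg.solve(A_II, b_I))
      x_I = np.linalg.solve(A_II, b_I - A_IG @ x_G)

      x = np.concatenate([x_I, x_G])
      print("residual:", np.linalg.norm(A @ x - b))   # ~1e-14, confirming the elimination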

  9. Acoustic Sample Deposition MALDI-MS (ASD-MALDI-MS): A Novel Process Flow for Quality Control Screening of Compound Libraries.

    Science.gov (United States)

    Chin, Jefferson; Wood, Elizabeth; Peters, Grace S; Drexler, Dieter M

    2016-02-01

    In the early stages of drug discovery, high-throughput screening (HTS) of compound libraries against pharmaceutical targets is a common method to identify potential lead molecules. For these HTS campaigns to be efficient and successful, continuous quality control of the compound collection is necessary and crucial. However, the large number of compound samples and the limited sample amount pose unique challenges. Presented here is a proof-of-concept study for a novel process flow for the quality control screening of small-molecule compound libraries that consumes only minimal amounts of samples and affords compound-specific molecular data. This process employs an acoustic sample deposition (ASD) technique for the offline sample preparation by depositing nanoliter volumes in an array format onto microscope glass slides followed by matrix-assisted laser desorption/ionization mass spectrometric (MALDI-MS) analysis. An initial study of a 384-compound array employing the ASD-MALDI-MS workflow resulted in a 75% first-pass positive identification rate with an analysis time of <1 s per sample. © 2015 Society for Laboratory Automation and Screening.
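
    The pass/fail logic of such a QC screen can be sketched as a simple comparison of each well's observed peak list against the expected [M+H]+ mass within a tolerance. The well names, masses and tolerance below are hypothetical and the matching rule is only assumed to resemble the one used in the paper.

      # Hypothetical QC check: does each well's MALDI spectrum contain the expected [M+H]+ peak?
      PROTON = 1.00728  # Da

      def qc_pass(observed_mz, expected_monoisotopic_mass, tol_ppm=50.0):
          """Return True if any observed m/z matches the expected [M+H]+ within tol_ppm."""
          target = expected_monoisotopic_mass + PROTON
          return any(abs(mz - target) / target * 1e6 <= tol_ppm for mz in observed_mz)

      library = {"well_A1": 314.1362, "well_A2": 451.2008}                     # expected neutral masses
      spectra = {"well_A1": [315.1431, 337.1252], "well_A2": [289.9, 410.5]}   # observed m/z values

      for well, mass in library.items():
          status = "PASS" if qc_pass(spectra[well], mass) else "FAIL"
          print(well, status)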

  10. ANISOTROPIC THERMAL CONDUCTION AND THE COOLING FLOW PROBLEM IN GALAXY CLUSTERS

    International Nuclear Information System (INIS)

    Parrish, Ian J.; Sharma, Prateek; Quataert, Eliot

    2009-01-01

    We examine the long-standing cooling flow problem in galaxy clusters with three-dimensional magnetohydrodynamics simulations of isolated clusters including radiative cooling and anisotropic thermal conduction along magnetic field lines. The central regions of the intracluster medium (ICM) can have cooling timescales of ∼200 Myr or shorter; in order to prevent a cooling catastrophe the ICM must be heated by some mechanism such as active galactic nucleus feedback or thermal conduction from the thermal reservoir at large radii. The cores of galaxy clusters are linearly unstable to the heat-flux-driven buoyancy instability (HBI), which significantly changes the thermodynamics of the cluster core. The HBI is a convective, buoyancy-driven instability that rearranges the magnetic field to be preferentially perpendicular to the temperature gradient. For a wide range of parameters, our simulations demonstrate that in the presence of the HBI, the effective radial thermal conductivity is reduced to ≲10% of the full Spitzer conductivity. With this suppression of conductive heating, the cooling catastrophe occurs on a timescale comparable to the central cooling time of the cluster. Thermal conduction alone is thus unlikely to stabilize clusters with low central entropies and short central cooling timescales. High central entropy clusters have sufficiently long cooling times that conduction can help stave off the cooling catastrophe for cosmologically interesting timescales.

  11. Are atmospheric surface layer flows ergodic?

    Science.gov (United States)

    Higgins, Chad W.; Katul, Gabriel G.; Froidevaux, Martin; Simeonov, Valentin; Parlange, Marc B.

    2013-06-01

    The transposition of atmospheric turbulence statistics from the time domain, as conventionally sampled in field experiments, is explained by the so-called ergodic hypothesis. In micrometeorology, this hypothesis assumes that the time average of a measured flow variable represents an ensemble of independent realizations from similar meteorological states and boundary conditions. That is, the averaging duration must be sufficiently long to include a large number of independent realizations of the sampled flow variable so as to represent the ensemble. While the validity of the ergodic hypothesis for turbulence has been confirmed in laboratory experiments and in numerical simulations for idealized conditions, evidence for its validity in the atmospheric surface layer (ASL), especially for nonideal conditions, continues to defy experimental efforts. There is some urgency to make progress on this problem given the proliferation of tall-tower scalar concentration networks that are aimed at constraining climate models yet are impacted by nonideal conditions at the land surface. Recent advancements in water vapor concentration lidar measurements that simultaneously sample spatial and temporal series in the ASL are used to investigate the validity of the ergodic hypothesis for the first time. It is shown that ergodicity is valid in a strict sense above uniform surfaces away from abrupt surface transitions. Surprisingly, ergodicity may be used to infer the ensemble concentration statistics of a composite grass-lake system using only water vapor concentration measurements collected above the sharp transition delineating the lake from the grass surface.
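
    The ergodic hypothesis discussed here can be made concrete with a toy stationary process: compare the time average of one long realization with the ensemble average over many independent realizations. The Ornstein-Uhlenbeck process below is a generic stand-in for a turbulent scalar, not a model used in the paper, and the parameters are arbitrary.

      # Toy ergodicity check: time average of one long realization vs. ensemble average.
      import numpy as np

      rng = np.random.default_rng(42)
      theta, sigma, dt = 1.0, 0.5, 0.01   # OU relaxation rate, noise amplitude, time step

      def ou_path(n_steps, x0=0.0):
          """Simulate an Ornstein-Uhlenbeck process with the Euler-Maruyama scheme."""
          x = np.empty(n_steps)
          x[0] = x0
          for i in range(1, n_steps):
              x[i] = x[i-1] - theta * x[i-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
          return x

      long_run = ou_path(100_000)                                       # one long time series
      ensemble = np.array([ou_path(2_000)[-1] for _ in range(1_000)])   # many short realizations

      print("time-average mean, var:", long_run.mean(), long_run.var())
      print("ensemble mean, var    :", ensemble.mean(), ensemble.var())
      # For this stationary process both converge to 0 and sigma**2 / (2*theta) = 0.125.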

  12. Conjugate problems in convective heat transfer

    CERN Document Server

    Dorfman, Abram S

    2009-01-01

    The conjugate heat transfer (CHT) problem takes into account the thermal interaction between a body and fluid flowing over or through it, a key consideration in both mechanical and aerospace engineering. Presenting more than 100 solutions of non-isothermal and CHT problems, this title considers the approximate solutions of CHT problems.

  13. Determination of uranium in clinical and environmental samples by FIAS-ICPMS

    International Nuclear Information System (INIS)

    Karpas, Z.; Lorber, A.; Halicz, L.; Gavrieli, I.

    1998-01-01

    Uranium may enter the human body through ingestion or inhalation. Ingestion of uranium compounds through the diet, mainly drinking water, is a common occurrence, as these compounds are present in the biosphere. Inhalation of uranium-containing particles is mainly an occupational safety problem, but may also take place in areas where uranium compounds are abundant. The uranium concentration in urine samples may serve as an indication of the total uranium body content. A method based on flow injection and inductively coupled plasma mass spectrometry (FIAS-ICPMS) was found to be most suitable for determination of uranium in clinical samples (urine and serum), environmental samples (seawater, wells and carbonate rocks) and in liquids consumed by humans (drinking water and commercial beverages). Some examples of the application of the FIAS-ICPMS method are reviewed and presented here

  14. Parent-Reported Psychological and Sleep Problems in a Preschool-Aged Community Sample: Prevalence of Sleep Problems in Children with and without Emotional/Behavioural Problems

    OpenAIRE

    Salater, Julie; Røhr, Marthe

    2010-01-01

    Objective: To examine (a) the prevalence of sleep problems among 4-year-olds in the general population, (b) the prevalence of sleep problems among children with emotional and/or behavioural problems, and (c) whether specific sleep problems are associated with particular emotional/behavioural problems. Method: Using the Preschool Age Psychiatric Assessment (PAPA), data about sleep and emotional/behavioural problems were obtained from 727 parents of 4-year-olds, recruited for a large...

  15. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner-radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is described too. 7 pictures

  16. User's guide to HEATRAN: a computer program for three-dimensional transient fluid-flow and heat-transfer analysis

    International Nuclear Information System (INIS)

    Wong, C.N.C.; Cheng, S.K.; Todreas, N.E.

    1982-01-01

    This report provides the HEATRAN user with programming and input information. HEATRAN is a computer program written to analyze transient, three-dimensional, single-phase, incompressible fluid-flow and heat-transfer problems. In this report, the programming information is given first. This information includes details concerning the code and its structure. The description of the required input variables is presented next. Following the input description, the sample problems are described and HEATRAN's results are presented.

  17. Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time

    Science.gov (United States)

    Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.

    2018-03-01

    A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of machining, assembly and differentiation stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into an assembled product. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time the parts spend on the shop floor from their arrival times until their common due date. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solution shows that the algorithm can solve the problem effectively.
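
    The objective used in this record can be stated concretely: with batches scheduled backward from a common due date, the total actual flow time sums, over batches, the time each batch's parts spend on the shop floor from the batch's arrival (start) until the due date. The sketch below evaluates that definition for a hypothetical single-machine sequence; it illustrates the objective only, not the authors' three-stage model or heuristic.

      # Illustrative total-actual-flow-time computation for batches on one machine
      # (backward scheduling from a common due date; not the authors' three-stage model).
      def total_actual_flow_time(batch_sizes, unit_time, setup, due_date):
          """Batches are processed in the given order and finish exactly at the due date.
          Each batch arrives when its processing starts; its parts leave at the due date."""
          finish = due_date
          total = 0.0
          # Walk the sequence backward so the last batch completes at the due date.
          for size in reversed(batch_sizes):
              start = finish - (setup + size * unit_time)   # arrival time of this batch
              total += size * (due_date - start)            # parts wait from arrival to due date
              finish = start                                # previous batch ends where this one starts
          return total

      print(total_actual_flow_time([4, 3, 3], unit_time=2.0, setup=1.0, due_date=40.0))  # 155.0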

  18. Content analysis in information flows

    Energy Technology Data Exchange (ETDEWEB)

    Grusho, Alexander A. [Institute of Informatics Problems of Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Vavilova str., 44/2, Moscow (Russian Federation); Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow (Russian Federation); Grusho, Nick A.; Timonina, Elena E. [Institute of Informatics Problems of Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Vavilova str., 44/2, Moscow (Russian Federation)

    2016-06-08

    The paper deals with the architecture of a content recognition system. To analyze the problem, a stochastic model of content recognition in information flows was built. We proved that under certain conditions it is possible to solve part of the problem correctly with probability 1 by viewing a finite section of the information flow. This means that a good architecture consists of two steps. The first step determines certain subsets of contents correctly, while the second step may demand much more time for a true decision.

  19. Linear Programming and Network Flows

    CERN Document Server

    Bazaraa, Mokhtar S; Sherali, Hanif D

    2011-01-01

    The authoritative guide to modeling and solving complex problems with linear programming-extensively revised, expanded, and updated The only book to treat both linear programming techniques and network flows under one cover, Linear Programming and Network Flows, Fourth Edition has been completely updated with the latest developments on the topic. This new edition continues to successfully emphasize modeling concepts, the design and analysis of algorithms, and implementation strategies for problems in a variety of fields, including industrial engineering, management science, operations research

  20. Automated flow cytometric analysis across large numbers of samples and cell types.

    Science.gov (United States)

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
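
    The clustering step described in this record, a Gaussian Mixture Model fit with the number of components chosen by the Bayesian Information Criterion, can be sketched with scikit-learn on a synthetic two-marker dataset. This is a generic illustration of GMM-plus-BIC clustering, not the FlowGM pipeline itself, and the data are simulated.

      # Generic GMM clustering with BIC-based model selection (illustration, not FlowGM itself).
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(7)
      # Synthetic two-marker "cytometry" data: three cell populations of different sizes.
      data = np.vstack([
          rng.normal([2.0, 6.0], 0.4, size=(3000, 2)),
          rng.normal([6.0, 2.0], 0.5, size=(1500, 2)),
          rng.normal([5.0, 5.0], 0.3, size=(500, 2)),
      ])

      # Choose the number of clusters by minimizing the Bayesian Information Criterion.
      fits = {k: GaussianMixture(n_components=k, random_state=0).fit(data) for k in range(1, 8)}
      best_k = min(fits, key=lambda k: fits[k].bic(data))
      labels = fits[best_k].predict(data)

      print("selected number of clusters:", best_k)
      print("events per cluster:", np.bincount(labels))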

  1. ON TECTONIC PROBLEMS OF THE OKINAWA TROUGH

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The Okinawa Trough is a very active tectonic zone at the margin of the Northwest Pacific and is typical of back-arc rifting at a young stage of tectonic evolution. Many scientists from Japan, China, Germany, France, the U.S.A. and Russia have done a lot of geologic and geophysical investigations there. It is well known that the Okinawa Trough is an active back-arc rift with extremely high heat flow, very strong hydrothermal circulation, strong volcanic and magmatic activity, frequent earthquakes, rapid subsidence and rifting, and a well-developed fault system and central graben. But up to now, there are still some important tectonic problems about the Okinawa Trough that require clarification, such as the type of its crust, its forming time, its tectonic evolution, the distribution of its central grabens, and the relationship between its high heat flow and tectonic activity. Based on the data obtained from seismic surveys, geomagnetic and gravity measurements, submarine sampling and heat flow measurements in the last 15 years, the author discusses the following tectonic problems about the Okinawa Trough: (1) whether or not the Okinawa Trough has developed oceanic crust; (2) whether the South Okinawa Trough is tectonically more active than the North Okinawa Trough, which has shallower water and fewer investigation data; (3) the formation time of the Okinawa Trough and its tectonic evolution. The Okinawa Trough has a very thin continental crust. Up to now, there is no evidence of oceanic crust in the Okinawa Trough. The North, Middle and South Okinawa Trough are all very strongly active areas. The Okinawa Trough began to form at about 6 Ma B.P. Since 2 Ma, the Okinawa Trough has been very active.

  2. solution of confined seepage problems below hydraulic structures

    African Journals Online (AJOL)

    user

    1985-09-01

    Sep 1, 1985 ... boundaries are used for solving the seepage problem beneath practical profiles of ... 1. INTRODUCTION. The study of flow through porous media has a wide range of .... free surface flow [3, 4, 5] and unconfined flow situations ...

  3. Microwave heating of aqueous samples on a micro-optical-electro-mechanical system

    Science.gov (United States)

    Beer, Neil Reginald

    2015-03-03

    Apparatus for heating a sample includes a microchip; a microchannel flow channel in the microchip, the microchannel flow channel containing the sample; a microwave source that directs microwaves onto the sample for heating the sample; a wall section of the microchannel flow channel that receives the microwaves and enables the microwaves to pass through the wall section of the microchannel flow channel, the wall section of the microchannel flow channel being made of a material that is not appreciably heated by the microwaves; a carrier fluid within the microchannel flow channel for moving the sample in the microchannel flow channel, the carrier fluid being made of a material that is not appreciably heated by the microwaves; wherein the microwaves pass through the wall section of the microchannel flow channel and heat the sample.

  4. Assessment of fluid distribution and flow properties in two phase fluid flow using X-ray CT technology

    Science.gov (United States)

    Jiang, Lanlan; Wu, Bohao; Li, Xingbo; Wang, Sijia; Wang, Dayong; Zhou, Xinhuan; Zhang, Yi

    2018-04-01

    Studying the microscale distribution of CO2 and brine during two-phase flow is crucial for understanding the trapping mechanisms of CO2 storage. In this study, CO2-brine flow experiments in porous media were conducted using X-ray computed tomography. The porous media were packed with glass beads. The pore structure (porosity/tortuosity) and flow properties at different flow rates and flow fractions were investigated. The results showed that the porosity of the packed beads differed at different positions as a result of heterogeneity. The CO2 saturation is higher at low injection flow rates and high CO2 fractions. CO2 distribution at the pore scale was also visualized. Nomenclature: ∅, porosity of the porous media; CT_brine_sat, grey value of the sample saturated with brine; CT_dry, grey value of the sample saturated with air; CT_brine, grey value of pure brine; CT_air, grey value of pure air; CT_flow, grey value of the sample with two fluids occupying the pore space; CT_CO2_sat, grey value of the sample saturated with CO2; f_CO2 (S_CO2), CO2 fraction; q_CO2, volume flow rate of CO2; q_brine, volume flow rate of brine; L, thickness of the porous media (mm); L_e, length of a bundle of capillaries of equal length (mm); τ, tortuosity, calculated from L_e/L.
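
    Using the nomenclature above, porosity and CO2 saturation are commonly obtained by linear mixing of the calibration grey values. The relations in the sketch below are the standard CT expressions and are only assumed to match those used in the paper; the grey values are hypothetical.

      # Standard CT grey-value relations for porosity and CO2 saturation (assumed, not quoted
      # from the paper). All inputs are grey values (CT numbers) as defined in the nomenclature.
      def porosity(ct_brine_sat, ct_dry, ct_brine, ct_air):
          """phi = (CT_brine_sat - CT_dry) / (CT_brine - CT_air)."""
          return (ct_brine_sat - ct_dry) / (ct_brine - ct_air)

      def co2_saturation(ct_flow, ct_brine_sat, ct_co2_sat):
          """S_CO2 = (CT_brine_sat - CT_flow) / (CT_brine_sat - CT_CO2_sat)."""
          return (ct_brine_sat - ct_flow) / (ct_brine_sat - ct_co2_sat)

      # Hypothetical grey values for one scan location:
      phi = porosity(ct_brine_sat=1550.0, ct_dry=1200.0, ct_brine=1000.0, ct_air=0.0)
      s_co2 = co2_saturation(ct_flow=1490.0, ct_brine_sat=1550.0, ct_co2_sat=1320.0)
      print(f"porosity = {phi:.2f}, CO2 saturation = {s_co2:.2f}")   # 0.35 and 0.26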

  5. Integer batch scheduling problems for a single-machine with simultaneous effect of learning and forgetting to minimize total actual flow time

    Directory of Open Access Journals (Sweden)

    Rinto Yusriski

    2015-09-01

    This research discusses integer batch scheduling problems for a single machine with position-dependent batch processing times due to the simultaneous effect of learning and forgetting. The decision variables are the number of batches, the batch sizes, and the sequence of the resulting batches. The objective is to minimize the total actual flow time, defined as the total interval time between the arrival times of the parts in all respective batches and their common due date. Two algorithms are proposed to solve the problems. The first is developed using the Integer Composition method, and it produces an optimal solution. Since the problems can be solved by the first algorithm only in a worst-case time complexity of O(n·2^(n-1)), this research proposes a second algorithm, a heuristic based on the Lagrange Relaxation method. Numerical experiments show that the heuristic algorithm gives outstanding results.
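
    For context, the Integer Composition step mentioned above enumerates every ordered split of n parts into positive batch sizes; there are 2^(n-1) such compositions, which is where the quoted worst-case complexity comes from. A minimal generator is sketched below; the batch-sizing and sequencing logic of the paper itself is not reproduced.

      # Enumerate integer compositions of n: all ordered splits into positive batch sizes.
      # There are 2**(n-1) of them, which explains the O(n * 2**(n-1)) worst case quoted above.
      def compositions(n):
          if n == 0:
              yield ()
              return
          for first in range(1, n + 1):
              for rest in compositions(n - first):
                  yield (first,) + rest

      n = 4
      all_splits = list(compositions(n))
      print(len(all_splits), "compositions of", n)   # 8 == 2**(4-1)
      for c in all_splits:
          print(c)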

  6. Estimation of distribution algorithm with path relinking for the blocking flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2018-05-01

    This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.
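
    For reference, the NEH constructive heuristic used here to seed the initial population sorts jobs by total processing time and inserts each job into the position that minimizes the partial makespan. The sketch below implements NEH for the classical permutation flow shop without the blocking constraint (blocking changes the makespan recursion), so it only illustrates the idea; the processing-time matrix is hypothetical.

      # NEH heuristic for the classical permutation flow shop (no blocking constraint),
      # illustrating the seeding step mentioned above; p[job][machine] are processing times.
      def makespan(sequence, p):
          m = len(p[0])
          completion = [0.0] * m           # completion time of the latest job on each machine
          for job in sequence:
              for k in range(m):
                  prev = completion[k - 1] if k > 0 else 0.0
                  completion[k] = max(completion[k], prev) + p[job][k]
          return completion[-1]

      def neh(p):
          jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # by decreasing total time
          seq = [jobs[0]]
          for job in jobs[1:]:
              # insert the job at the position giving the smallest partial makespan
              seq = min((seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)),
                        key=lambda s: makespan(s, p))
          return seq

      p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]]   # 4 jobs x 3 machines (hypothetical)
      order = neh(p)
      print("NEH sequence:", order, "makespan:", makespan(order, p))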

  7. Understanding consumption-related sucralose emissions - A conceptual approach combining substance-flow analysis with sampling analysis

    Energy Technology Data Exchange (ETDEWEB)

    Neset, Tina-Simone Schmid, E-mail: tina.schmid.neset@liu.se [Department of Water and Environmental Studies, Linkoeping University, SE-58183 Linkoeping (Sweden); Singer, Heinz; Longree, Philipp; Bader, Hans-Peter; Scheidegger, Ruth; Wittmer, Anita; Andersson, Jafet Clas Martin [Eawag, Swiss Federal Institute of Aquatic Science and Technology, Ueberlandstrasse 133, CH-8600 Duebendorf (Switzerland)

    2010-07-15

    This paper explores the potential of combining substance-flow modelling with water and wastewater sampling to trace consumption-related substances emitted through the urban wastewater. The method is exemplified on sucralose. Sucralose is a chemical sweetener that is 600 times sweeter than sucrose and has been on the European market since 2004. As a food additive, sucralose has recently increased in usage in a number of foods, such as soft drinks, dairy products, candy and several dietary products. In a field campaign, sucralose concentrations were measured in the inflow and outflow of the local wastewater treatment plant in Linkoeping, Sweden, as well as upstream and downstream of the receiving stream and in Lake Roxen. This allows the loads emitted from the city to be estimated. A method consisting of solid-phase extraction followed by liquid chromatography and high resolution mass spectrometry was used to quantify the sucralose in the collected surface and wastewater samples. To identify and quantify the sucralose sources, a consumption analysis of households including small business enterprises was conducted as well as an estimation of the emissions from the local food industry. The application of a simple model including uncertainty and sensitivity analysis indicates that at present not one large source but rather several small sources contribute to the load coming from households, small business enterprises and industry. This is in contrast to the consumption pattern seen two years earlier, which was dominated by one product. The inflow to the wastewater treatment plant decreased significantly from other measurements made two years earlier. The study shows that the combination of substance-flow modelling with the analysis of the loads to the receiving waters helps us to understand consumption-related emissions.

  8. Understanding consumption-related sucralose emissions - A conceptual approach combining substance-flow analysis with sampling analysis

    International Nuclear Information System (INIS)

    Neset, Tina-Simone Schmid; Singer, Heinz; Longree, Philipp; Bader, Hans-Peter; Scheidegger, Ruth; Wittmer, Anita; Andersson, Jafet Clas Martin

    2010-01-01

    This paper explores the potential of combining substance-flow modelling with water and wastewater sampling to trace consumption-related substances emitted through the urban wastewater. The method is exemplified on sucralose. Sucralose is a chemical sweetener that is 600 times sweeter than sucrose and has been on the European market since 2004. As a food additive, sucralose has recently increased in usage in a number of foods, such as soft drinks, dairy products, candy and several dietary products. In a field campaign, sucralose concentrations were measured in the inflow and outflow of the local wastewater treatment plant in Linkoeping, Sweden, as well as upstream and downstream of the receiving stream and in Lake Roxen. This allows the loads emitted from the city to be estimated. A method consisting of solid-phase extraction followed by liquid chromatography and high resolution mass spectrometry was used to quantify the sucralose in the collected surface and wastewater samples. To identify and quantify the sucralose sources, a consumption analysis of households including small business enterprises was conducted as well as an estimation of the emissions from the local food industry. The application of a simple model including uncertainty and sensitivity analysis indicates that at present not one large source but rather several small sources contribute to the load coming from households, small business enterprises and industry. This is in contrast to the consumption pattern seen two years earlier, which was dominated by one product. The inflow to the wastewater treatment plant decreased significantly from other measurements made two years earlier. The study shows that the combination of substance-flow modelling with the analysis of the loads to the receiving waters helps us to understand consumption-related emissions.

  9. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    Energy Technology Data Exchange (ETDEWEB)

    Agaltsov, A. D., E-mail: agalets@gmail.com [Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow (Russian Federation); Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr [CNRS (UMR 7641), Centre de Mathématiques Appliquées, Ecole Polytechnique, 91128 Palaiseau (France); IEPT RAS, 117997 Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation)

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  10. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    International Nuclear Information System (INIS)

    Agaltsov, A. D.; Novikov, R. G.

    2014-01-01

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given

  11. Development of a simple extraction cell with bi-directional continuous flow coupled on-line to ICP-MS for assessment of elemental associations in solid samples

    DEFF Research Database (Denmark)

    Buanuam, Janya; Tiptanasup, Kasipa; Shiowatana, Juwadee

    2006-01-01

    A continuous-flow system comprising a novel, custom-built extraction module and hyphenated with inductively coupled plasma-mass spectrometric (ICP-MS) detection is proposed for assessing metal mobilities and geochemical associations in soil compartments, based on the three-step BCR (now... the Measurements and Testing Programme of the European Commission) sequential extraction scheme. Employing a peristaltic pump as liquid driver, alternate directional flows of the extractants are used to overcome compression of the solid particles within the extraction unit to ensure a steady partitioning flow rate... and thus to maintain constant operationally defined extraction conditions. The proposed flow set-up is proven to allow for trouble-free handling of soil samples up to 1 g and flow rates ≤ 10 mL min–1. The miniaturized extraction system was coupled to ICP-MS through a flow injection interface in order...

  12. An extended continuous estimation of distribution algorithm for solving the permutation flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2017-11-01

    This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
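
    The largest order value (LOV) rule mentioned in this record maps a continuous vector sampled from the probabilistic model to a job permutation by ranking its components, largest value first. A minimal sketch follows; the sampled vector is hypothetical and the rest of the ECEDA is not reproduced.

      # Largest order value (LOV) rule: rank the components of a continuous vector to obtain
      # a job permutation (largest value -> first job). Illustrative, not the full ECEDA.
      import numpy as np

      def lov_permutation(x):
          return list(np.argsort(-np.asarray(x)))    # job indices sorted by decreasing value

      x = np.array([0.42, 1.73, -0.10, 0.95, 0.31])  # hypothetical sampled vector
      print(lov_permutation(x))                      # [1, 3, 0, 4, 2]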

  13. Evaluation of flow hood measurements for residential register flows; TOPICAL

    International Nuclear Information System (INIS)

    Walker, I.S.; Wray, C.P.; Dickerhoff, D.J.; Sherman, M.H.

    2001-01-01

    Flow measurement at residential registers using flow hoods is becoming more common. These measurements are used to determine if the HVAC system is providing adequate comfort and appropriate flow over heat exchangers, and in estimates of system energy losses. These HVAC system performance metrics are determined by using register measurements to find out if individual rooms are getting the correct airflow, and in estimates of total air handler flow and duct air leakage. The work discussed in this paper shows that commercially available flow hoods are poor at measuring flows in residential systems. There is also evidence in this and other studies that flow hoods can have significant errors even when used on the non-residential systems they were originally developed for. The measurement uncertainties arise from poor calibrations and the sensitivity of existing flow hoods to non-uniformity of the flows entering the device. The errors are usually large, on the order of 20% of measured flow, which is unacceptably high for most applications. Active flow hoods that have flow measurement devices insensitive to the entering airflow pattern were found to be clearly superior to commercially available flow hoods. In addition, it is clear that current calibration procedures for flow hoods may not take into account field application problems, and a new flow hood measurement standard should be developed to address this issue.

  14. Performance analysis of flow lines with non-linear flow of material

    CERN Document Server

    Helber, Stefan

    1999-01-01

    Flow line design is one of the major tasks in production management. The decision to install a set of machines and buffers is often highly irreversible. It determines both cost and revenue to a large extent. In order to assess the economic impact of any possible flow line design, production rates and inventory levels have to be estimated. These performance measures depend on the allocation of buffers whenever the flow of material is occasionally disrupted, for example due to machine failures or quality problems. The book describes analytical methods that can be used to evaluate flow lines much faster than with simulation techniques. Based on these fast analytical techniques, it is possible to determine a flow line design that maximizes the net present value of the flow line investment. The flow of material through the line may be non-linear, for example due to assembly operations or quality inspections.

  15. Effect of selective logging on genetic diversity and gene flow in Cariniana legalis sampled from a cacao agroforestry system.

    Science.gov (United States)

    Leal, J B; Santos, R P; Gaiotto, F A

    2014-01-28

    The fragments of the Atlantic Forest of southern Bahia have a long history of intense logging and selective cutting. Some tree species, such as jequitibá rosa (Cariniana legalis), have experienced a reduction in their populations with respect to both area and density. To evaluate the possible effects of selective logging on genetic diversity, gene flow, and spatial genetic structure, 51 C. legalis individuals were sampled, representing the total remaining population in the cacao agroforestry system. A total of 120 alleles were observed at the 11 microsatellite loci analyzed. The average observed heterozygosity (0.486) was lower than the expected heterozygosity (0.721), indicating a loss of genetic diversity in this population. A high fixation index (FIS = 0.325) was found, which is possibly due to a reduction in population size, resulting in increased mating among relatives. The maximum (1055 m) and minimum (0.095 m) distances traveled by pollen or seeds were inferred based on paternity tests. Parents within the sampled area were identified for 36.84% of the sampled seedlings; the progenitors of the remaining seedlings (63.16%) were most likely outside the sampled area. Positive and significant spatial genetic structure was identified in this population for distance classes of 10 to 30 m, with an average coancestry coefficient between pairs of individuals of 0.12. These results suggest that the agroforestry system of cacao cultivation is contributing to maintaining levels of diversity and gene flow in the studied population, thus minimizing the effects of selective logging.

  16. Online preconcentration and determination of trace levels cadmium in water samples using flow injection systems coupled with flame AAS

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Songlin; Liang, Huading; Yan, Hua; Yan, Zhengzhong; Chen, Suqing; Zhu, Xiandi; Cheng, Miaoxian [School of Pharmaceutical and Chemical Engineering, Taizhou University (China)]

    2010-02-15

    A rapid and sensitive method for the determination of trace levels of cadmium in water samples by flame atomic absorption spectrometry was developed. It is based on the online sorption of Cd(II) ions on a microcolumn packed with HCl-treated bamboo charcoal. In the pH range 5.0-7.5, Cd(II) ions were effectively retained on the microcolumn, which exhibited fast kinetics, permitting the use of high sample flow rates of up to at least 12.8 mL/min without loss of retention efficiency. The retained Cd(II) ions were quantitatively eluted with HCl (2.0 mol/L) for online determination. With a preconcentration time of 80 s at a sample loading flow rate of 8.6 mL/min, a sensitivity enhancement factor of 63 was obtained, calculated from the ratio of the slopes of the linear portions of the calibration curves with and without preconcentration. The calibration graph obtained with the preconcentration system was linear for cadmium over the range 1-40 ng/mL, with a correlation coefficient of 0.9997. The precision (RSD) for 11 replicate measurements was 3.2% for the determination of 5 ng/mL Cd(II) and 1.8% for 20 ng/mL Cd(II), and the detection limit (3s) was 0.36 ng/mL. The accuracy was assessed through the determination of a certified reference material and through recovery experiments. (Abstract Copyright [2010], Wiley Periodicals, Inc.)
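
    For readers unfamiliar with these figures of merit, the enhancement factor and the 3s detection limit are conventionally computed from calibration slopes and blank statistics. A minimal sketch follows; only the final values quoted in the abstract are from the paper, while the intermediate slopes and blank standard deviation are illustrative assumptions chosen for consistency:

    ```python
    # Illustrative calculation of common FAAS figures of merit. The intermediate
    # numbers below are hypothetical; only the resulting values (~63 and ~0.36 ng/mL)
    # correspond to those reported in the abstract.
    slope_preconc = 0.0126   # assumed calibration slope with preconcentration (a.u. per ng/mL)
    slope_direct  = 0.0002   # assumed slope without preconcentration
    sd_blank      = 0.0015   # assumed standard deviation of blank signals (a.u.)

    enhancement_factor = slope_preconc / slope_direct   # paper reports 63
    detection_limit    = 3 * sd_blank / slope_preconc   # "3s" criterion; paper reports 0.36 ng/mL

    print(f"enhancement factor ~ {enhancement_factor:.0f}")
    print(f"detection limit ~ {detection_limit:.2f} ng/mL")
    ```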

  17. The effect of ultrasound on arterial blood flow: 1. Steady fully developed flow

    International Nuclear Information System (INIS)

    Bestman, A.R.

    1990-12-01

    The paper models the effects of ultrasound heating of the tissues and the resulting perturbation of blood flow in the arteries and veins. It is assumed that the blood vessel is rigid and that the undisturbed flow is fully developed. Acoustic perturbation of this Poiseuille flow is considered for the general three-dimensional flow with heat transfer in an infinitely long pipe. Closed-form analytical solutions to the problem are obtained. It is found that the effects of the ultrasound heating are concentrated at the walls of the blood vessels. (author). 4 refs
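
    The undisturbed base state referred to here is the classical Hagen-Poiseuille profile; written out for reference (the paper's own notation may differ):

    ```latex
    % Steady, fully developed (Hagen-Poiseuille) base flow in a rigid tube of radius R,
    % driven by an axial pressure gradient dp/dz in a fluid of viscosity mu:
    \[
      u(r) \;=\; \frac{1}{4\mu}\left(-\frac{dp}{dz}\right)\left(R^{2}-r^{2}\right),
      \qquad 0 \le r \le R ,
    \]
    % the parabolic profile about which the acoustic perturbation is taken.
    ```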

  18. Upscaling of Forchheimer flows

    KAUST Repository

    Aulisa, Eugenio

    2014-08-01

    In this work we propose an upscaling method for nonlinear Forchheimer flow in heterogeneous porous media. The generalized Forchheimer law is considered for incompressible and slightly compressible single-phase flows. We use recently developed analytical results (Aulisa et al., 2009) [1] and formulate the resulting system in terms of a degenerate nonlinear flow equation for the pressure, with the nonlinearity depending on the pressure gradient. The coarse-scale parameters for the steady-state problem are determined so that the volumetric averages of the flow velocity in the domain on the fine scale and on the coarse scale are close. A flow-based coarsening approach is used, in which the equivalent permeability tensor is first evaluated following streamline methods for linear cases and then modified to take the nonlinear effects into account. Compared to previous works (Garibotti and Peszynska, 2009) [2], (Durlofsky and Karimi-Fard) [3], this approach can be combined with rigorous mathematical upscaling theory for monotone operators (Efendiev et al., 2004) [4], using our recent theoretical results (Aulisa et al., 2009) [1]. The developed upscaling algorithm for nonlinear steady-state problems is effectively used for a variety of heterogeneities in the computational domain. Direct numerical computations of average velocity and productivity index justify the use of the coarse-scale parameters obtained for the special steady-state case in the fully transient problem. For the nonlinear case, analytical upscaling formulas in a stratified domain are obtained. Numerical results were compared to these analytical formulas and proved to be highly accurate. © 2014.
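
    For context, a common two-term form of the Forchheimer law underlying this work is shown below (the paper's generalized Forchheimer law may include additional terms):

    ```latex
    % Darcy-Forchheimer momentum relation for single-phase flow in porous media:
    % the pressure gradient balances a linear (Darcy) term and a quadratic inertial term.
    \[
      -\nabla p \;=\; \frac{\mu}{k}\,\mathbf{u} \;+\; \beta\,\rho\,|\mathbf{u}|\,\mathbf{u},
    \]
    % with permeability k, viscosity mu, fluid density rho, and Forchheimer coefficient beta.
    % Inverting this relation gives a nonlinear Darcy-type law u = -K(|grad p|) grad p,
    % which leads to the degenerate nonlinear pressure equation referred to in the abstract.
    ```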

  19. Computer program for compressible flow network analysis

    Science.gov (United States)

    Wilton, M. E.; Murtaugh, J. P.

    1973-01-01

    The program solves the problem of an arbitrarily connected one-dimensional compressible flow network with pumping in the channels and momentum balancing at flow junctions. It includes pressure drop calculations for impingement flow and for flow through pin-fin arrangements, as currently found in many air-cooled turbine bucket and vane cooling configurations.
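
    As an illustration of the kind of nodal balancing such a network solver performs, the toy sketch below solves a three-channel junction by bisection, using an assumed quadratic channel pressure-drop law rather than the program's actual compressible-flow correlations:

    ```python
    # Toy nodal mass-balance solve for a tiny flow network: two supply nodes at known
    # pressure feed one interior junction that discharges to ambient. Each channel obeys
    # an assumed resistance law dp = R * Q**2 (a stand-in for the real pressure-drop
    # correlations). We find the junction pressure at which inflow balances outflow.
    import math

    def q(dp, R):
        """Signed flow through a channel for pressure drop dp and resistance R."""
        return math.copysign(math.sqrt(abs(dp) / R), dp)

    def imbalance(p_j):
        # channels: node A (300 kPa) -> junction, node B (250 kPa) -> junction,
        # junction -> exit (100 kPa); resistances are illustrative.
        return q(300e3 - p_j, 2.0e9) + q(250e3 - p_j, 4.0e9) - q(p_j - 100e3, 1.0e9)

    lo, hi = 100e3, 300e3               # bracket the junction pressure
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0:          # net inflow: junction pressure must rise
            lo = mid
        else:
            hi = mid

    print(f"junction pressure ~ {0.5 * (lo + hi) / 1e3:.1f} kPa")
    ```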

  20. Multiphase flow problems on thermofluid safety for fusion reactors

    International Nuclear Information System (INIS)

    Takase, Kazuyuki

    2003-01-01

    As part of the thermofluid safety studies for the International Thermonuclear Experimental Reactor (ITER), the thermal-hydraulic characteristics of Tokamak fusion reactors under transient events were investigated experimentally and analyzed numerically. An ingress-of-coolant event (ICE) and a loss-of-vacuum event (LOVA) were considered as severe transient events. An integrated ICE test facility was constructed to demonstrate that the ITER safety design approach and parameters are adequate. The integrated ICE experiments clarified the water-vapor two-phase flow behavior and the performance of the ITER pressure suppression system during the ICE. The TRAC code was modified to model the two-phase flow behavior under the ICE, and the modified code was verified against the ICE experimental results. On the other hand, activated dust mobilization and air ingress characteristics in the ITER vacuum vessel during the LOVA were analyzed using a newly developed analysis code. Several physical models for the motion of dust were considered, and the rate at which dust is released from the vacuum vessel through breaches to the outside was characterized quantitatively. The predicted average pressures in the vacuum vessel during the LOVA were in good agreement with the experimental results. Moreover, direct-contact condensation between water and vapor inside the ITER suppression tank was observed visually and simulated by direct two-phase flow analysis. Furthermore, chemical reaction characteristics between vapor and ITER plasma-facing component materials were predicted numerically in order to obtain a qualitative estimate of the generation of flammable gases such as hydrogen and methane. The experimental and numerical results of these studies were reflected in the ITER thermofluid safety design. (author)