WorldWideScience

Sample records for flow sensitivity analysis

  1. Sensitivity analysis of time-dependent laminar flows

    International Nuclear Information System (INIS)

    Hristova, H.; Etienne, S.; Pelletier, D.; Borggaard, J.

    2004-01-01

This paper presents a general sensitivity equation method (SEM) for time-dependent incompressible laminar flows. The SEM accounts for complex parameter dependence and is suitable for a wide range of problems. The formulation is verified on a problem with a closed-form solution obtained by the method of manufactured solutions. Systematic grid convergence studies confirm the theoretical rates of convergence in both space and time. The methodology is then applied to pulsatile flow around a square cylinder. Computations show that the flow starts with symmetrical vortex shedding followed by a transition to the classical von Kármán street (alternate vortex shedding). Simulations show that the transition phase manifests itself earlier in the sensitivity fields than in the flow field itself. Sensitivities are then demonstrated for fast evaluation of nearby flows and uncertainty analysis. (author)
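The abstract does not reproduce the governing equations; as a schematic illustration of what a continuous sensitivity equation looks like (an assumption for illustration, not the paper's exact formulation), consider an incompressible flow whose viscosity ν depends on a scalar parameter a. Differentiating the Navier-Stokes equations with respect to a and writing s = ∂u/∂a, σ = ∂p/∂a gives a linear problem with the same operators as the flow equations:

\[
\frac{\partial \mathbf{s}}{\partial t} + (\mathbf{s}\cdot\nabla)\mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{s}
= -\frac{1}{\rho}\nabla\sigma + \nu\nabla^{2}\mathbf{s} + \frac{\partial\nu}{\partial a}\nabla^{2}\mathbf{u},
\qquad \nabla\cdot\mathbf{s} = 0 .
\]

Because this system can be marched in time alongside the flow, the computed sensitivity supports first-order extrapolation to nearby flows, u(a + Δa) ≈ u(a) + Δa·s, which is the "fast evaluation of nearby flows" mentioned above.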

  2. Extended forward sensitivity analysis of one-dimensional isothermal flow

    International Nuclear Information System (INIS)

    Johnson, M.; Zhao, H.

    2013-01-01

Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities for 1-D fluid flow equations typical of those found in system-level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify the numerical error arising from the time discretization is a unique and important feature of this method. By knowing the sensitivity to the time step relative to that of the other physical parameters, the simulation can be run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification, at much lower computational cost. One well-defined benchmark problem with manufactured solutions is used to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
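As a minimal sketch of the forward (continuous) sensitivity idea, on a toy scalar ODE rather than the 1-D isothermal flow equations of the paper, the sensitivity s = ∂u/∂k obeys a linear ODE obtained by differentiating the state equation and is integrated alongside the state:

```python
# Minimal sketch of forward (continuous) sensitivity analysis on a toy ODE,
# not the 1-D isothermal flow equations of the paper: the state u obeys
# du/dt = f(u, k) = -k*u, and the sensitivity s = du/dk obeys the linearized
# equation ds/dt = (df/du)*s + df/dk, integrated alongside the state.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k):
    u, s = y
    du = -k * u       # state equation
    ds = -k * s - u   # sensitivity equation: (df/du)*s + df/dk
    return [du, ds]

k = 0.5
sol = solve_ivp(rhs, (0.0, 4.0), [1.0, 0.0], args=(k,), rtol=1e-8, atol=1e-10)

# Compare against the exact values u = exp(-k t) and du/dk = -t exp(-k t).
t = sol.t[-1]
print(sol.y[0, -1], np.exp(-k * t))
print(sol.y[1, -1], -t * np.exp(-k * t))
```

The same augmentation idea extends to the time step itself being treated as one more parameter, which is how a time-step sensitivity can be obtained from a single run instead of a convergence study.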

  3. Deterministic sensitivity analysis of two-phase flow systems: forward and adjoint methods. Final report

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1984-07-01

This report presents a self-contained mathematical formalism for deterministic sensitivity analysis of two-phase flow systems, a detailed application to sensitivity analysis of the homogeneous equilibrium model of two-phase flow, and a representative application to sensitivity analysis of a model (simulating pump-trip-type accidents in BWRs) where a transition between single-phase and two-phase flow occurs. The rigor and generality of this sensitivity analysis formalism stem from the use of Gateaux (G-) differentials. This report highlights the major aspects of deterministic (forward and adjoint) sensitivity analysis, including derivation of the forward sensitivity equations, derivation of sensitivity expressions in terms of adjoint functions, explicit construction of the adjoint system satisfied by these adjoint functions, determination of the characteristics of this adjoint system, and demonstration that these characteristics are the same as those of the original quasilinear two-phase flow equations. This proves that whenever the original two-phase flow problem is solvable, the adjoint system is also solvable and, in principle, the same numerical methods can be used to solve both the original and adjoint equations.

  4. Sensitivity Analysis of Unsteady Flow Fields and Impact of Measurement Strategy

    Directory of Open Access Journals (Sweden)

    Takashi Misaka

    2014-01-01

The difficulty of data assimilation arises from the large difference in size between the state vector to be determined, that is, the number of spatiotemporal mesh points of the discretized numerical model, and the measurement vector, that is, the amount of measurement data. Flow variables on a large number of mesh points can hardly be determined from spatiotemporally limited measurements, which poses an underdetermined problem. In this study we conduct a sensitivity analysis of two- and three-dimensional vortical flow fields within a framework of data assimilation. The impact of measurement strategy, which is evaluated by the sensitivity of the 4D-Var cost function with respect to measurements, is investigated to effectively determine a flow field from limited measurements. The assimilation experiment shows that the error, defined as the difference between the reference and assimilated flow fields, is reduced by using the sensitivity information to locate the limited number of measurement points. To conduct data assimilation over a long time period, the 4D-Var data assimilation and the sensitivity analysis are repeated with a short assimilation window.
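The abstract does not spell out the cost function; a standard strong-constraint 4D-Var form, assumed here for illustration, is

\[
J(\mathbf{x}_0) = \tfrac{1}{2}\,(\mathbf{x}_0-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}_b)
+ \tfrac{1}{2}\sum_{k=0}^{K}\bigl(H_k(\mathbf{x}_k)-\mathbf{y}_k\bigr)^{\mathsf T}\mathbf{R}_k^{-1}\bigl(H_k(\mathbf{x}_k)-\mathbf{y}_k\bigr),
\qquad \mathbf{x}_k = M_{0\rightarrow k}(\mathbf{x}_0),
\]

where x_b is the background state, B and R_k are the background and observation error covariances, H_k is the observation operator and M the forecast model. The measurement-strategy impact studied in the paper is evaluated through the sensitivity of such a cost function to the measurements y_k.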

  5. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    Science.gov (United States)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
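For context, one commonly quoted linearized form of the LSS problem (a sketch, not necessarily the exact formulation used in the paper) seeks the shadowing perturbation v and time-dilation η that stay bounded along the trajectory:

\[
\min_{v,\eta}\; \frac{1}{2T}\int_{0}^{T}\bigl(\lVert v\rVert^{2} + \alpha^{2}\eta^{2}\bigr)\,dt
\quad\text{s.t.}\quad
\frac{dv}{dt} = \frac{\partial f}{\partial u}\,v + \frac{\partial f}{\partial s} + \eta\, f(u,s),
\]

after which the sensitivity of a long-time averaged quantity follows from v and η. Because the shadowing trajectory stays close to the reference one, the exponential growth of perturbations that breaks conventional tangent and adjoint methods in chaotic flows is avoided, at the price of solving the space-time least-squares problem above.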

  6. Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators

    Science.gov (United States)

    Sykes, J. F.; Wilson, J. L.; Andrews, R. W.

    1985-03-01

    Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Alternatively, local velocity related performance measures are more sensitive to hydraulic conductivities.
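A minimal discrete analogue of the forward-plus-adjoint procedure (a sketch on a 1-D linear system, far simpler than the Galerkin finite-element model used in the study) shows why one adjoint solve yields the derivative of a head-based performance measure with respect to a conductivity-like parameter:

```python
# Minimal discrete-adjoint sketch for a steady linear "flow" problem, loosely
# analogous to (but much simpler than) the Galerkin finite-element model in the
# paper.  The discrete system is A(K) h = b with A = K*L (L a 1-D Laplacian),
# the performance measure is J = c^T h, and the adjoint gives dJ/dK cheaply.
import numpy as np

n = 50
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian stencil
b = np.ones(n)                                          # recharge-like source
c = np.zeros(n); c[n // 2] = 1.0                        # J = head at mid-domain
K = 2.0                                                 # hydraulic conductivity

A = K * L
h = np.linalg.solve(A, b)          # primary (forward) solve
lam = np.linalg.solve(A.T, c)      # adjoint solve: A^T lam = c
dJdK = -lam @ (L @ h)              # dJ/dK = -lam^T (dA/dK) h  (b independent of K)

# Finite-difference check
eps = 1e-6
h_p = np.linalg.solve((K + eps) * L, b)
print(dJdK, (c @ h_p - c @ h) / eps)
```

The same two solves give dJ/dp for every parameter p entering A or b, which is what makes the adjoint approach attractive when the number of parameters is large.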

  7. Transient simulation and sensitivity analysis for transport of radionuclides in a saturated-unsaturated groundwater flow system

    International Nuclear Information System (INIS)

    Chen, H.H.

    1980-01-01

Radionuclide transport by groundwater flow is an important pathway in the assessment of the environmental impact of radioactive waste disposal to the biosphere. A numerical model was developed to simulate radionuclide transport by groundwater flow and predict the radionuclide discharge rate to the biosphere. A sensitivity analysis methodology was developed to address the sensitivity of the specified response of interest to the input parameters of the radionuclide transport equation.

  8. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  9. Sensitivity Analysis of Transonic Flow over J-78 Wings

    Directory of Open Access Journals (Sweden)

    Alexander Kuzmin

    2015-01-01

3D transonic flow over swept and unswept wings with a J-78 airfoil at spanwise sections is studied numerically at negative and vanishing angles of attack. Solutions of the unsteady Reynolds-averaged Navier-Stokes equations are obtained with a finite-volume solver on unstructured meshes. The numerical simulation shows that adverse Mach numbers, at which the lift coefficient is highly sensitive to small perturbations, are larger than those obtained earlier for 2D flow. At these larger Mach numbers, self-excited oscillations of shock waves set in on the wings. The swept wing exhibits a higher sensitivity to variations of the Mach number than the unswept one.

  10. Atomistic Galois insertions for flow sensitive integrity

    DEFF Research Database (Denmark)

    Nielson, Flemming; Nielson, Hanne Riis

    2017-01-01

    Several program verification techniques assist in showing that software adheres to the required security policies. Such policies may be sensitive to the flow of execution and the verification may be supported by combinations of type systems and Hoare logics. However, this requires user assistance...... and to obtain full automation we shall explore the over-approximating nature of static analysis. We demonstrate that the use of atomistic Galois insertions constitutes a stable framework in which to obtain sound and fully automatic enforcement of flow sensitive integrity. The framework is illustrated...

  11. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    Science.gov (United States)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
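The complex-variable (complex-step) technique used for the DYMORE structural sensitivities can be illustrated with a one-line example; the function below is an arbitrary stand-in, not a structural-dynamics model:

```python
# Minimal sketch of the complex-step derivative approach: for a real-analytic
# f, df/dx = Im(f(x + i*h)) / h + O(h^2), with no subtractive cancellation, so
# the step h can be taken extremely small and the derivative is accurate to
# machine precision.
import numpy as np

def f(x):
    return np.exp(x) * np.sin(3.0 * x) / (1.0 + x**2)

x0, h = 0.7, 1e-30
deriv_cs = np.imag(f(x0 + 1j * h)) / h            # complex-step derivative
deriv_fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6   # central difference for comparison
print(deriv_cs, deriv_fd)
```

This is why complex-variable results serve as a trusted reference for verifying the adjoint-based sensitivities of the coupled system.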

  12. Sensitivity Analysis of Unsaturated Flow and Contaminant Transport with Correlated Parameters

    Science.gov (United States)

    Relative contributions from uncertainties in input parameters to the predictive uncertainties in unsaturated flow and contaminant transport are investigated in this study. The objectives are to: (1) examine the effects of input parameter correlations on the sensitivity of unsaturated flow and conta...

  13. Stability and sensitivity analysis of hypersonic flow past a blunt cone

    Science.gov (United States)

    Nichols, Joseph W.; Cook, David; Brock, Joseph M.; Candler, Graham V.

    2017-11-01

    We investigate the effects of nosetip bluntness and low-level distributed roughness on instabilities leading to transition on a 7 degree half-angle blunt cone at Mach 10. To study the sensitivity of boundary layer instabilities to bluntness and roughness, we numerically extract Jacobian matrices directly from the unstructured hypersonic flow solver US3D. These matrices govern the dynamics of small perturbations about otherwise laminar base flows. We consider the frequency response of the resulting linearized dynamical system between different input and output locations along the cone, including close to the nosetip. Using adjoints, our method faithfully captures effects of complex geometry such as strong curvature and roughness that lead to flow acceleration and localized heating in this region. These effects violate the assumption of a slowly-varying base flow that underpins traditional linear stability analyses. We compare our results, which do not rely upon this assumption, to experimental measurements of a Mach 10 blunt cone taken at the AEDC Hypervelocity Ballistic Range G facility. In particular, we assess whether effects of complex geometry can explain discrepancies previously noted between traditional stability analysis and observations. This work is supported by the Office of Naval Research through Grant Number N00014-17-1-2496.

  14. Financial Development and Investment-Cash Flow Sensitivity

    Directory of Open Access Journals (Sweden)

    Jungwon Suh

    2007-06-01

Using firm-level data from thirty-five countries around the world, this paper empirically examines whether investment-cash flow sensitivity reflects financial constraints. Recent US studies have raised questions about the prediction that investment-cash flow sensitivity is a measure of financial constraints. Looking at thirty-five countries with varying degrees of financial development, this study tests whether investment-cash flow sensitivity is in fact related to financial constraints. In most countries, the evidence that firms likely facing financial constraints display high investment-cash flow sensitivity is weak. Moreover, the evidence that firms in the absence of developed financial markets display high investment-cash flow sensitivity is also weak. Overall, the results from this international investigation do not support the prediction that investment-cash flow sensitivity reflects financial constraints.

  15. A Flow-Sensitive Analysis of Privacy Properties

    DEFF Research Database (Denmark)

    Nielson, Hanne Riis; Nielson, Flemming

    2007-01-01

    that information I send to some service never is leaked to another service? - unless I give my permission? We shall develop a static program analysis for the pi- calculus and show how it can be used to give privacy guarantees like the ones requested above. The analysis records the explicit information flow...

CASH-FLOW SENSITIVITY TO PAYMENTS FOR MATERIAL RESOURCES

    Directory of Open Access Journals (Sweden)

    Lavinia Elena BRÎNDESCU OLARIU

    2014-12-01

The financing decision is taken based on the expectations concerning the future cash-flows generated in the operating activity, which should provide coverage for the debt service and allow for an increase of the shareholders’ wealth. Still, the future cash-flows are affected by risk, which makes the sensitivity analysis a very important part of the decision process. The current research sets out to evaluate the sensitivity of the payment capacity to variations of the payments for raw materials and consumables. The study employs 391 forecasted yearly cash-flow statements collected from 50 companies, together with detailed information concerning the hypotheses of the forecasts. The results of the study allow for the establishment of benchmarks for the payment capacity’s sensitivity, the determination of the mechanisms through which the variation of payments for raw materials and consumables impacts the payment capacity, as well as the identification of the possible causes of such a variation.

  17. FLOCK cluster analysis of mast cell event clustering by high-sensitivity flow cytometry predicts systemic mastocytosis.

    Science.gov (United States)

    Dorfman, David M; LaPlante, Charlotte D; Pozdnyakova, Olga; Li, Betty

    2015-11-01

    In our high-sensitivity flow cytometric approach for systemic mastocytosis (SM), we identified mast cell event clustering as a new diagnostic criterion for the disease. To objectively characterize mast cell gated event distributions, we performed cluster analysis using FLOCK, a computational approach to identify cell subsets in multidimensional flow cytometry data in an unbiased, automated fashion. FLOCK identified discrete mast cell populations in most cases of SM (56/75 [75%]) but only a minority of non-SM cases (17/124 [14%]). FLOCK-identified mast cell populations accounted for 2.46% of total cells on average in SM cases and 0.09% of total cells on average in non-SM cases (P < .0001) and were predictive of SM, with a sensitivity of 75%, a specificity of 86%, a positive predictive value of 76%, and a negative predictive value of 85%. FLOCK analysis provides useful diagnostic information for evaluating patients with suspected SM, and may be useful for the analysis of other hematopoietic neoplasms. Copyright© by the American Society for Clinical Pathology.
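As a quick check, the quoted diagnostic statistics follow from the reported case counts (assuming the straightforward 2x2 contingency table implied by the abstract):

```python
# Sanity check of the diagnostic performance figures quoted in the abstract,
# computed from the reported case counts (56/75 SM cases and 17/124 non-SM
# cases with FLOCK-identified mast cell populations).
tp, fn = 56, 75 - 56      # SM cases with / without a FLOCK-identified population
fp, tn = 17, 124 - 17     # non-SM cases with / without one

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sensitivity={sensitivity:.0%} specificity={specificity:.0%} "
      f"PPV={ppv:.0%} NPV={npv:.0%}")
# -> roughly 75%, 86%, 77%, 85%, consistent with the reported values.
```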

  18. Visualization and evaluation of flow during water filtration: Parameterization and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Bílek Petr

    2016-01-01

This paper deals with the visualization and evaluation of flow during filtration of water seeded with artificial microscopic particles. Planar laser-induced fluorescence (PLIF) is a widespread method for visualization and non-invasive characterization of flow; however, the method uses fluorescent dyes or, in special cases, fluorescent particles. In this work the flow is seeded with non-fluorescent monodisperse polystyrene particles with a diameter smaller than one micrometer. The monodisperse sub-micron particles are very suitable for testing textile filtration materials, but non-fluorescent particles are not usable with the PLIF method. A water filtration setup with optical access to the place where the tested filter is mounted was built and used for the experiments. The concentration of particles in front of and behind the tested filter is measured in a laser light sheet, and the local filtration efficiency is expressed from it. The article describes further progress in the measurement: sensitivity analysis, parameterization and performance evaluation of the method were carried out in several simulations and experiments.

  19. Computing the sensitivity of drag and lift in flow past a circular cylinder: Time-stepping versus self-consistent analysis

    Science.gov (United States)

    Meliga, Philippe

    2017-07-01

    We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to

  20. A sensitivity analysis of the mass balance equation terms in subcooled flow boiling

    International Nuclear Information System (INIS)

    Braz Filho, Francisco A.; Caldeira, Alexandre D.; Borges, Eduardo M.

    2013-01-01

In a heated vertical channel, subcooled flow boiling occurs when the fluid temperature near the channel wall reaches the saturation point (actually a small overheating) while the bulk fluid temperature is below this point. In this case, vapor bubbles are generated along the channel, resulting in a significant increase in the heat flux between the wall and the fluid. This study is particularly important for the thermal-hydraulic analysis of Pressurized Water Reactors (PWRs). The computational fluid dynamics software FLUENT uses the Eulerian multiphase model to analyze subcooled flow boiling. In a previous paper, the comparison of the FLUENT results with experimental data for the void fraction showed good agreement, both at the onset of boiling and in nucleate boiling at the end of the channel. In the region between these two points the agreement with the experimental data was not as good. Thus, a sensitivity analysis of the mass balance equation terms, steam production and condensation, was performed. Factors applied to these terms can improve the agreement of the FLUENT results with the experimental data. Void fraction calculations show satisfactory results in relation to the experimental data at pressures of 15, 30 and 45 bar. (author)

  1. Density-based global sensitivity analysis of sheet-flow travel time: Kinematic wave-based formulations

    Science.gov (United States)

    Hosseini, Seiyed Mossa; Ataie-Ashtiani, Behzad; Simmons, Craig T.

    2018-04-01

    Despite advancements in developing physics-based formulations to estimate the sheet-flow travel time (tSHF), the quantification of the relative impacts of influential parameters on tSHF has not previously been considered. In this study, a brief review of the physics-based formulations to estimate tSHF including kinematic wave (K-W) theory in combination with Manning's roughness (K-M) and with Darcy-Weisbach friction formula (K-D) over single and multiple planes is provided. Then, the relative significance of input parameters to the developed approaches is quantified by a density-based global sensitivity analysis (GSA). The performance of K-M considering zero-upstream and uniform flow depth (so-called K-M1 and K-M2), and K-D formulae to estimate the tSHF over single plane surface were assessed using several sets of experimental data collected from the previous studies. The compatibility of the developed models to estimate tSHF over multiple planes considering temporal rainfall distributions of Natural Resources Conservation Service, NRCS (I, Ia, II, and III) are scrutinized by several real-world examples. The results obtained demonstrated that the main controlling parameters of tSHF through K-D and K-M formulae are the length of surface plane (mean sensitivity index T̂i = 0.72) and flow resistance (mean T̂i = 0.52), respectively. Conversely, the flow temperature and initial abstraction ratio of rainfall have the lowest influence on tSHF (mean T̂i is 0.11 and 0.12, respectively). The significant role of the flow regime on the estimation of tSHF over a single and a cascade of planes are also demonstrated. Results reveal that the K-D formulation provides more precise tSHF over the single plane surface with an average percentage of error, APE equal to 9.23% (the APE for K-M1 and K-M2 formulae were 13.8%, and 36.33%, respectively). The superiority of Manning-jointed formulae in estimation of tSHF is due to the incorporation of effects from different flow regimes as
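The exact formulations are not reproduced in the abstract; for orientation, the kinematic-wave travel time for sheet flow on a single plane with a Manning-type resistance law (a standard textbook form, assumed here) is

\[
q = \alpha h^{m}, \qquad \alpha = \frac{\sqrt{S}}{n}, \qquad m = \tfrac{5}{3},
\qquad
t_{\mathrm{SHF}} = \left(\frac{L}{\alpha\, i_e^{\,m-1}}\right)^{1/m}
= \left(\frac{n\,L}{\sqrt{S}\; i_e^{\,2/3}}\right)^{3/5},
\]

with plane length L, slope S, Manning roughness n and constant rainfall excess i_e in consistent SI units. The K-D variant replaces the Manning relation with a Darcy-Weisbach friction factor, which is how the dependence on the flow regime enters; the dominant roles of L and of the flow resistance found by the GSA are visible directly in this expression.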

  2. Robustness analysis of complex networks with power decentralization strategy via flow-sensitive centrality against cascading failures

    Science.gov (United States)

    Guo, Wenzhang; Wang, Hao; Wu, Zhengping

    2018-03-01

    Most existing cascading failure mitigation strategy of power grids based on complex network ignores the impact of electrical characteristics on dynamic performance. In this paper, the robustness of the power grid under a power decentralization strategy is analysed through cascading failure simulation based on AC flow theory. The flow-sensitive (FS) centrality is introduced by integrating topological features and electrical properties to help determine the siting of the generation nodes. The simulation results of the IEEE-bus systems show that the flow-sensitive centrality method is a more stable and accurate approach and can enhance the robustness of the network remarkably. Through the study of the optimal flow-sensitive centrality selection for different networks, we find that the robustness of the network with obvious small-world effect depends more on contribution of the generation nodes detected by community structure, otherwise, contribution of the generation nodes with important influence on power flow is more critical. In addition, community structure plays a significant role in balancing the power flow distribution and further slowing the propagation of failures. These results are useful in power grid planning and cascading failure prevention.

  3. Levelized cost of energy and sensitivity analysis for the hydrogen-bromine flow battery

    Science.gov (United States)

    Singh, Nirala; McFarland, Eric W.

    2015-08-01

The technoeconomics of the hydrogen-bromine flow battery are investigated. Using existing performance data, the operating conditions were optimized to minimize the levelized cost of electricity using individual component costs for the flow battery stack and other system units. Several different configurations were evaluated, including use of a bromine complexing agent to reduce membrane requirements. Sensitivity analysis of cost is used to identify the system elements most strongly influencing the economics. The stack lifetime and round-trip efficiency of the cell are identified as major factors in the levelized cost of electricity, along with capital components related to hydrogen storage, the bipolar plate, and the membrane. Assuming that an electrocatalyst and membrane with a lifetime of 2000 cycles can be identified, the lowest-cost market entry system capital is $220 kWh-1 for a 4 h discharge system, and for a charging energy cost of $0.04 kWh-1 the levelized cost of the electricity delivered is $0.40 kWh-1. With systems manufactured at large scales these costs are expected to be lower.
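A rough sketch of how such a levelized cost of delivered electricity can be assembled (generic annualization with assumed cycle counts and discount rate, not the paper's cost model) is:

```python
# Illustrative levelized-cost-of-delivered-electricity calculation for a flow
# battery, sketching the kind of trade-offs discussed in the abstract (capital
# cost, round-trip efficiency, cycle life, charging energy price).  The numbers
# and the simple annualization below are assumptions, not the paper's model.
def lcoe_storage(capital_per_kwh, cycles_per_year, cycle_life_years,
                 round_trip_eff, charge_price_per_kwh, discount_rate=0.08):
    # Capital recovery factor annualizes the up-front capital cost.
    r, n = discount_rate, cycle_life_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)
    annual_capital = capital_per_kwh * crf    # $ per kWh of capacity per year
    kwh_delivered = cycles_per_year           # per kWh of capacity, full cycles
    capital_term = annual_capital / kwh_delivered
    charging_term = charge_price_per_kwh / round_trip_eff
    return capital_term + charging_term

print(lcoe_storage(capital_per_kwh=220.0, cycles_per_year=250,
                   cycle_life_years=8, round_trip_eff=0.75,
                   charge_price_per_kwh=0.04))
```

The split between the capital-recovery term and the charging term makes clear why stack lifetime, round-trip efficiency and the capital components dominate the cost sensitivity.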

  4. Sensitivity Analysis to Control the Far-Wake Unsteadiness Behind Turbines

    Directory of Open Access Journals (Sweden)

    Esteban Ferrer

    2017-10-01

We explore the stability of wakes arising from 2D flow actuators based on linear momentum actuator disc theory. We use stability and sensitivity analysis (using adjoints) to show that the wake stability is controlled by the Reynolds number and the thrust force (or flow resistance) applied through the turbine. First, we report that decreasing the thrust force has a stabilising effect comparable to a decrease in Reynolds number (based on the turbine diameter). Second, a discrete sensitivity analysis identifies two regions suitable for placement of flow control forcing, one close to the turbines and one far downstream. Third, we show that adding a localised control force in the regions identified by the sensitivity analysis stabilises the wake. In particular, locating the control forcing close to the turbines results in enhanced stabilisation, such that the wake remains steady for significantly higher Reynolds numbers or turbine thrusts. The analysis of the controlled flow fields confirms that modifying the velocity gradient close to the turbine is more efficient for stabilising the wake than controlling the wake far downstream. The analysis is performed for the first flow bifurcation (at low Reynolds numbers), which serves as a foundation for the stabilisation technique, but the control strategy is tested at higher Reynolds numbers in the final section of the paper, showing enhanced stability for a turbulent flow case.

  5. Flow analysis with WaSiM-ETH – model parameter sensitivity at different scales

    Directory of Open Access Journals (Sweden)

    J. Cullmann

    2006-01-01

WaSiM-ETH (Gurtz et al., 2001), a widely used water balance simulation model, is tested for its suitability for flow analysis in the context of rainfall-runoff modelling and flood forecasting. In this paper, special focus is on the resolution of the process domain in space as well as in time. We try to couple model runs with different calculation time steps in order to reduce the effort arising from calculating the whole flow hydrograph at the hourly time step. We aim at modelling at the daily time step for water balance purposes, switching to the hourly time step whenever high-resolution information is necessary (flood forecasting). WaSiM-ETH is used at different grid resolutions to assess whether the model can be transferred across spatial resolutions. We further use two different approaches for the overland flow time calculation within the sub-basins of the test watershed to gain insight into the process dynamics portrayed by the model. Our findings indicate that the model is very sensitive to time and space resolution and cannot be transferred across scales without recalibration.

  6. Flows of dioxins and furans in coastal food webs: inverse modeling, sensitivity analysis, and applications of linear system theory.

    Science.gov (United States)

    Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer

    2006-01-01

    Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways to parameterize the rate constants in the model, global sensitivity analysis of the models using Extended Fourier Amplitude Sensitivity Test (Extended FAST) method, as well as results from general linear system theory, in order to obtain a more thorough insight to the system's behavior and to the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms highly depends on the adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.

  7. Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.

    Science.gov (United States)

    Gul, R; Bernhard, S

    2015-11-01

In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods (Sobol, FAST and a sparse grid stochastic collocation technique based on the Smolyak algorithm) were applied to a lumped parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network location and temporal dependent sensitivities were also discussed to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit lower sensitivities on pressure and flow at all locations of the carotid bifurcation. Results of network location and temporal variabilities revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. Copyright © 2015 Elsevier Inc. All rights reserved.
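For readers unfamiliar with variance-based GSA, a minimal Monte Carlo estimate of a first-order Sobol index (generic test function, not the carotid-bifurcation model) looks like this:

```python
# Minimal Monte Carlo sketch of a variance-based (Sobol-type) first-order
# sensitivity index, the same class of GSA method the study applies alongside
# FAST and stochastic collocation.  The test function below (Ishigami-like) is
# an arbitrary stand-in for the lumped-parameter model.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

d, N = 3, 200_000
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # "pick-freeze": swap column i only
    S1 = np.mean(fB * (model(ABi) - fA)) / var   # Saltelli-type estimator
    print(f"S_{i+1} ~= {S1:.3f}")
```

Parameters with a first-order index near zero are the natural candidates for the "factor fixing" mentioned above.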

  8. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    Science.gov (United States)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  9. Phase-sensitive flow cytometer

    Energy Technology Data Exchange (ETDEWEB)

    Steinkamp, J.A.

    1992-12-31

This report describes a phase-sensitive flow cytometer (FCM) which provides the additional FCM capability of using the fluorescence lifetime of one or more fluorochromes bound to single cells to provide additional information about the cells. The resulting fluorescence emission can be resolved into individual fluorescence signals if two fluorochromes are present, or can be converted directly to a decay lifetime for a single fluorochrome. The excitation light for the fluorochromes is modulated to produce an amplitude-modulated fluorescence pulse as the fluorochrome is excited in the FCM. The modulation signal also forms a reference signal that is phase-shifted by a selected amount for subsequent mixing with the output modulated fluorescence intensity signal in phase-sensitive detection circuitry. The output from the phase-sensitive circuitry is then an individual resolved fluorochrome signal or a single fluorochrome decay lifetime, depending on the applied phase shifts.
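The lifetime information recovered by such phase-sensitive detection follows the standard frequency-domain relations (general fluorometry results, not specific to this instrument): for excitation modulated at angular frequency ω, a fluorophore with single-exponential lifetime τ emits with phase lag φ and demodulation m given by

\[
\tan\varphi = \omega\tau, \qquad m = \frac{1}{\sqrt{1+\omega^{2}\tau^{2}}},
\qquad\Longrightarrow\qquad \tau = \frac{\tan\varphi}{\omega},
\]

so mixing the emission with a phase-shifted copy of the modulation signal lets the instrument read out τ directly or, by choosing the reference phase appropriately, suppress the contribution of one fluorochrome so that two overlapping emissions can be resolved.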

  10. The Neopuff's PEEP valve is flow sensitive.

    LENUS (Irish Health Repository)

    Hawkes, Colin Patrick

    2011-03-01

The current recommendation in setting up the Neopuff is to use a gas flow of 5-15 L/min. We investigated if the sensitivity of the positive end expiratory pressure (PEEP) valve varies at different flow rates within this range.

  11. Rainfall Variability and Landuse Conversion Impacts to Sensitivity of Citarum River Flow

    Directory of Open Access Journals (Sweden)

    Dyah Marganingrum

    2013-07-01

The objective of this study is to determine the sensitivity of Citarum river flow to climate change and land conversion, providing the flow information required for water resources sustainability. Saguling reservoir is one of the strategic reservoirs; 75% of its water comes from the inflow of the Upper Citarum measured at Nanjung station. Climate variability was identified as rainfall variability. Sensitivity was calculated as the elasticity of discharge using a three-variate statistical model. The land-use conversion was calculated using GIS for 1994 and 2004. The results showed that the elasticity at Nanjung station and Saguling station decreased from 1.59 and 1.02 to 0.68 and 0.62, respectively. The decrease occurred from the period before the dam was built (1950-1980) to the period after the reservoir started operating (1986-2008). This value indicates that: (1) Citarum river flow is more sensitive to the rainfall variability recorded at Nanjung station than at Saguling station; (2) the rainfall character is more difficult to predict. The land-use analysis shows that forest area decreased to approximately 27% and built-up area increased to approximately 26%. This implied a reduction of minimum rainfall to approximately 8% and of minimum flow to approximately 46%, caused by land conversion, and indicates that vegetation serves to maintain the base flow for sustainable water resource infrastructure.

  12. Applying Turbulence Models to Hydroturbine Flows: A Sensitivity Analysis Using the GAMM Francis Turbine

    Science.gov (United States)

    Lewis, Bryan; Cimbala, John; Wouden, Alex

    2011-11-01

Turbulence models are generally developed to study common academic geometries, such as flat plates and channels. Creating quality computational grids for such geometries is trivial, and allows stringent requirements to be met for boundary layer grid refinement. However, engineering applications, such as flow through hydroturbines, require the analysis of complex, highly curved geometries. To produce body-fitted grids for such geometries, the mesh quality requirements must be relaxed. Relaxing these requirements, along with the complexity of rotating flows, forces turbulence models to be employed beyond their developed scope. This study explores the solution sensitivity to boundary layer grid quality for various turbulence models and boundary conditions currently implemented in OpenFOAM. The following models are presented: k-omega, k-omega SST, k-epsilon, realizable k-epsilon, and RNG k-epsilon. Standard wall functions, adaptive wall functions, and sub-grid integration are compared using various grid refinements. The chosen geometry is the GAMM Francis Turbine because experimental data and comparison computational results are available for this turbine. This research was supported by a grant from the DoE and a National Defense Science and Engineering Graduate Fellowship.

  13. A Sensitive Photometric Procedure for Cobalt Determination in Water Employing a Compact Multicommuted Flow Analysis System.

    Science.gov (United States)

    da Silva Magalhães, Ticiane; Reis, Boaventura F

    2017-09-01

    In this work, a multicommuted flow analysis procedure is proposed for the spectrophotometric determination of cobalt in fresh water, employing an instrument setup of downsized dimension and improved cost-effectiveness. The method is based on the catalytic effect of Co(II) on the Tiron oxidation by hydrogen peroxide in alkaline medium, forming a complex that absorbs radiation at 425 nm. The photometric detection was accomplished using a homemade light-emitting-diode (LED)-based photometer designed to use a flow cell with an optical path-length of 100 mm to improve sensitivity. After selecting adequate values for the flow system variables, adherence to the Beer-Lambert-Bouguer law was observed for standard solution concentrations in the range of 0.13-1.5 µg L -1 Co(II). Other useful features including a relative standard deviation of 2.0% (n = 11) for a sample with 0.49 µg L -1 Co(II), a detection limit of 0.06 µg L -1 Co(II) (n = 20), an analytical frequency of 42 sample determinations per hour, and waste generation of 1.5 mL per determination were achieved.
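The gain from the long flow cell follows directly from the Beer-Lambert-Bouguer law cited above:

\[
A = \varepsilon\,\ell\,c ,
\]

so, other things being equal, increasing the optical path length ℓ from a conventional 10 mm cuvette to the 100 mm flow cell raises the absorbance for a given Co(II) concentration by a factor of ten, which is the main source of the improved sensitivity (the 10 mm comparison is illustrative, not taken from the paper).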

  14. Adjoint sensitivity theory for steady-state ground-water flow

    International Nuclear Information System (INIS)

    1983-11-01

    In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady-state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah and the Wolcamp carbonate/sandstone aquifer of the Palo Duro Basin in the Texas Panhandle. Two performance measures are evaluated, local heads and velocity in the vicinity of potential high-level nuclear waste repositories. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Local velocity-related performance measures are more sensitive to hydraulic conductivities. The uncertainty in the performance measure is a function of the parameter sensitivity, parameter variance and the correlation between parameters. Given a parameter covariance matrix, the uncertainty of the performance measure can be calculated. Although no results are presented here, the implications of uncertainty calculations for the two studies are discussed. 18 references, 25 figures

  15. Nominal Range Sensitivity Analysis of peak radionuclide concentrations in randomly heterogeneous aquifers

    International Nuclear Information System (INIS)

    Cadini, F.; De Sanctis, J.; Cherubini, A.; Zio, E.; Riva, M.; Guadagnini, A.

    2012-01-01

    Highlights: ► Uncertainty quantification problem associated with the radionuclide migration. ► Groundwater transport processes simulated within a randomly heterogeneous aquifer. ► Development of an automatic sensitivity analysis for flow and transport parameters. ► Proposal of a Nominal Range Sensitivity Analysis approach. ► Analysis applied to the performance assessment of a nuclear waste repository. - Abstract: We consider the problem of quantification of uncertainty associated with radionuclide transport processes within a randomly heterogeneous aquifer system in the context of performance assessment of a near-surface radioactive waste repository. Radionuclide migration is simulated at the repository scale through a Monte Carlo scheme. The saturated groundwater flow and transport equations are then solved at the aquifer scale for the assessment of the expected radionuclide peak concentration at a location of interest. A procedure is presented to perform the sensitivity analysis of this target environmental variable to key parameters that characterize flow and transport processes in the subsurface. The proposed procedure is exemplified through an application to a realistic case study.

  16. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
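A minimal sketch of the derivative-based propagation idea behind DUA, using the laminar pipe-flow (Poiseuille) formula as a stand-in for the borehole-flow model and assumed input statistics:

```python
# Minimal sketch of derivative-based uncertainty propagation: propagate input
# variances through first-order sensitivities, Var(y) ~ sum_i (dy/dx_i)^2 Var(x_i),
# and compare with a brute-force Monte Carlo estimate.  The Poiseuille formula
# is used here as a simple stand-in for the borehole-flow model of the paper.
import numpy as np

def flow(dp, r, mu, L):
    return np.pi * dp * r**4 / (8.0 * mu * L)

mean = dict(dp=1.0e4, r=0.01, mu=1.0e-3, L=10.0)   # nominal values (SI units, assumed)
std = dict(dp=5.0e2, r=2.0e-4, mu=5.0e-5, L=0.1)   # assumed input standard deviations

# Analytical first-order sensitivities of Q with respect to each input.
Q0 = flow(**mean)
dQ = dict(dp=Q0 / mean["dp"], r=4 * Q0 / mean["r"],
          mu=-Q0 / mean["mu"], L=-Q0 / mean["L"])
var_lin = sum((dQ[k] * std[k]) ** 2 for k in mean)

# Monte Carlo reference (many model runs instead of one derivative evaluation).
rng = np.random.default_rng(1)
samples = {k: rng.normal(mean[k], std[k], 100_000) for k in mean}
var_mc = np.var(flow(**samples))

print(np.sqrt(var_lin), np.sqrt(var_mc))
```

The contrast between one derivative-based evaluation and many sampled runs is the point made in the abstract about two model executions versus fifty.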

  17. The Neopuff's PEEP valve is flow sensitive.

    LENUS (Irish Health Repository)

    Hawkes, Colin Patrick

    2012-01-31

AIM: The current recommendation in setting up the Neopuff is to use a gas flow of 5-15 L/min. We investigated if the sensitivity of the positive end expiratory pressure (PEEP) valve varies at different flow rates within this range. METHODS: Five Neopuffs were set up to provide a PEEP of 5 cm H(2) O. The number of clockwise revolutions to complete occlusion of the PEEP valve and the mean and range of pressures at each quarter clockwise revolution were recorded at gas flow rates between 5 and 15 L/min. RESULTS: At 5, 10 and 15 L/min, 0.5, 1.7 and 3.4 full clockwise rotations were required to completely occlude the PEEP valve, and pressures rose from 5 to 11.4, 18.4 and 21.5 cm H(2) O, respectively. At a flow rate of 5 L/min, half a rotation of the PEEP dial resulted in a rise in PEEP from 5 to 11.4 cm H(2) O. At 10 L/min, half a rotation resulted in a rise from 5 to 7.7 cm H(2) O, and at 15 L/min PEEP rose from 5 to 6.8 cm H(2) O. CONCLUSION: Users of the Neopuff should be aware that the PEEP valve is more sensitive at lower flow rates and that half a rotation of the dial at 5 L/min gas flow can more than double the PEEP.

  18. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    Science.gov (United States)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.

  19. Geostatistical and adjoint sensitivity techniques applied to a conceptual model of ground-water flow in the Paradox Basin, Utah

    International Nuclear Information System (INIS)

    Metcalfe, D.E.; Campbell, J.E.; RamaRao, B.S.; Harper, W.V.; Battelle Project Management Div., Columbus, OH)

    1985-01-01

    Sensitivity and uncertainty analysis are important components of performance assessment activities for potential high-level radioactive waste repositories. The application of geostatistical and adjoint sensitivity techniques to aid in the calibration of an existing conceptual model of ground-water flow is demonstrated for the Leadville Limestone in Paradox Basin, Utah. The geostatistical method called kriging is used to statistically analyze the measured potentiometric data for the Leadville. This analysis consists of identifying anomalous data and data trends and characterizing the correlation structure between data points. Adjoint sensitivity analysis is then performed to aid in the calibration of a conceptual model of ground-water flow to the Leadville measured potentiometric data. Sensitivity derivatives of the fit between the modeled Leadville potentiometric surface and the measured potentiometric data to model parameters and boundary conditions are calculated by the adjoint method. These sensitivity derivatives are used to determine which model parameter and boundary condition values should be modified to most efficiently improve the fit of modeled to measured potentiometric conditions

  20. Electromagnetic holographic sensitivity field of two-phase flow in horizontal wells

    Science.gov (United States)

    Zhang, Kuo; Wu, Xi-Ling; Yan, Jing-Fu; Cai, Jia-Tie

    2017-03-01

Electromagnetic holographic data are characterized by two modes, suggesting that image reconstruction requires a dual-mode sensitivity field as well. We analyze the electromagnetic holographic field based on tomography theory and the inverse Radon transform to derive the expression of the electromagnetic holographic sensitivity field (EMHSF). Then, we apply the EMHSF calculated by finite-element methods to flow simulations and holographic imaging. The results suggest that the EMHSF, based on the partial derivative with respect to radius of the complex electric potential φ, is closely linked to the inverse Radon transform and encompasses the sensitivities of both the amplitude and phase data. The flow images obtained by inversion using the EMHSF agree better with the actual flow patterns. The EMHSF overcomes the limitations of traditional single-mode sensitivity fields.

  1. Optoelectronic iron detectors for pharmaceutical flow analysis.

    Science.gov (United States)

    Rybkowska, Natalia; Koncki, Robert; Strzelak, Kamil

    2017-10-25

Compact flow-through optoelectronic detectors fabricated by pairing light-emitting diodes have been applied to the development of economical flow analysis systems dedicated to the determination of iron ions. Three analytical methods with different chromogens selectively recognizing iron ions have been compared. The ferrozine- and ferene S-based methods offer higher sensitivity and slightly lower detection limits than the method with 1,10-phenanthroline, but narrower ranges of linear response. Each system allows detection of iron in the micromolar concentration range with comparable sample throughput (20 injections per hour). The developed flow analysis systems have been successfully applied to the determination of iron in diet supplements. The utility of the developed analytical systems for iron release studies from drug formulations has also been demonstrated. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Computational Study of pH-sensitive Hydrogel-based Microfluidic Flow Controllers

    Science.gov (United States)

    Kurnia, Jundika C.; Birgersson, Erik; Mujumdar, Arun S.

    2011-01-01

This computational study investigates the sensing and actuating behavior of a pH-sensitive hydrogel-based microfluidic flow controller. This hydrogel-based flow controller has an inherent advantage in its unique stimuli-sensitive properties, removing the need for an external power supply. The predicted swelling behavior of the hydrogel is validated with steady-state and transient experiments. We then demonstrate how the model is implemented to study the sensing and actuating behavior of hydrogels for different microfluidic flow channel/hydrogel configurations: e.g., for flow in a T-junction with single and multiple hydrogels. In short, the results suggest that the response of the hydrogel-based flow controller is slow. Therefore, two strategies to improve the response rate of the hydrogels are proposed and demonstrated. Finally, we highlight that the model can be extended to include other stimuli-responsive hydrogels such as thermo-, electric-, and glucose-sensitive hydrogels. PMID:24956303

  3. Three-dimensional stability, receptivity and sensitivity of non-Newtonian flows inside open cavities

    International Nuclear Information System (INIS)

    Citro, Vincenzo; Giannetti, Flavio; Pralits, Jan O

    2015-01-01

We investigate the stability properties of flows over an open square cavity for fluids with shear-dependent viscosity. The analysis is carried out in the context of linear theory using a normal-mode decomposition. The incompressible Cauchy equations, with a Carreau viscosity model, are discretized with a finite-element method. The characteristics of the direct and adjoint eigenmodes are analyzed and discussed in order to understand the receptivity features of the flow. Furthermore, we identify the regions of the flow that are most sensitive to spatially localized feedback by building a spatial map obtained from the product of the direct and adjoint eigenfunctions. The analysis shows that the first global linear instability of the steady flow is a steady or unsteady three-dimensional bifurcation depending on the value of the power-law index n. The instability mechanism is always located inside the cavity and the linear stability results suggest a strong connection with the classical lid-driven cavity problem. (paper)
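The Carreau model referred to above gives the shear-dependent viscosity as

\[
\eta(\dot\gamma) = \eta_\infty + \left(\eta_0 - \eta_\infty\right)\left[1 + (\lambda\dot\gamma)^{2}\right]^{\frac{n-1}{2}},
\]

where η_0 and η_∞ are the zero- and infinite-shear-rate viscosities, λ is a relaxation time and n is the power-law index that controls the nature of the bifurcation in the study: n < 1 gives shear-thinning behaviour, while n = 1 recovers a Newtonian fluid.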

  4. Hierarchical Nanogold Labels to Improve the Sensitivity of Lateral Flow Immunoassay

    Science.gov (United States)

    Serebrennikova, Kseniya; Samsonova, Jeanne; Osipov, Alexander

    2018-06-01

    Lateral flow immunoassay (LFIA) is a widely used express method and offers advantages such as a short analysis time and simplicity of testing and result evaluation. However, an LFIA based on gold nanospheres lacks the desired sensitivity, thereby limiting its wider application. In this study, spherical nanogold labels along with new types of nanogold labels such as gold nanopopcorns and nanostars were prepared, characterized, and applied for LFIA of the model protein antigen procalcitonin. It was found that the label with a structure close to spherical provided a more uniform distribution of specific antibodies on its surface, indicative of its suitability for this type of analysis. LFIA using gold nanopopcorns as a label allowed procalcitonin detection over a linear range of 0.5-10 ng mL-1 with a limit of detection of 0.1 ng mL-1, which was fivefold higher than the sensitivity of the assay with gold nanospheres. Another approach to improving the sensitivity of the assay was the silver enhancement method, which was used for comparison as an amplification of LFIA for procalcitonin detection. The sensitivity of procalcitonin determination by this method was 10 times better than the sensitivity of the conventional LFIA with a gold nanosphere label. The proposed LFIA approach based on gold nanopopcorns improved the detection sensitivity without additional steps and avoided increased consumption of specific reagents (antibodies).

  5. Sensitivity analysis

    Science.gov (United States)

    MedlinePlus encyclopedia entry (medlineplus.gov/ency/article/003741.htm): Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...

  6. Stand-alone core sensitivity and uncertainty analysis of ALFRED from Monte Carlo simulations

    International Nuclear Information System (INIS)

    Pérez-Valseca, A.-D.; Espinosa-Paredes, G.; François, J.L.; Vázquez Rodríguez, A.; Martín-del-Campo, C.

    2017-01-01

    Highlights: • Methodology based on Monte Carlo simulation. • Sensitivity analysis of a Lead Fast Reactor (LFR). • Uncertainty and regression analysis of an LFR. • For a 10% change in the core inlet flow, the response in thermal power is a 0.58% change. • For a 2.5% change in the inlet lead temperature, the response is 1.87% in power. - Abstract: The aim of this paper is the sensitivity and uncertainty analysis of a Lead-Cooled Fast Reactor (LFR) based on Monte Carlo simulations with sample sizes up to 2000. The methodology developed in this work considers the uncertainty of sensitivities and the uncertainty of output variables due to single-input-variable variations. The Advanced Lead Fast Reactor European Demonstrator (ALFRED) is analyzed to determine the behavior of the essential parameters due to effects of the mass flow and temperature of the liquid lead. The ALFRED core mathematical model developed in this work is fully transient and takes into account the heat transfer in an annular fuel pellet design, the thermo-fluid dynamics in the core, and the neutronic processes, which are modeled with point kinetics including fuel temperature and expansion feedback effects. The sensitivity evaluated in terms of the relative standard deviation (RSD) showed that for a 10% change in the core inlet flow the response in thermal power is a 0.58% change, and for a 2.5% change in the inlet lead temperature it is 1.87%. The regression analysis with mass flow rate as the predictor variable showed statistically valid cubic correlations for the neutron flux, and a linear relationship between the neutron flux and the lead temperature. No statistically valid correlation was observed for the reactivity as a function of either the mass flow rate or the lead temperature. These correlations are useful for the study, analysis, and design of any LFR.
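    The dimensionless sensitivities implied by the percentages quoted above can be illustrated with a short, hedged Python sketch; the helper function and the way the percentages are paired are assumptions for illustration only, not the paper's Monte Carlo methodology.

    ```python
    # Normalized sensitivity from paired perturbation results: S = (dy/y) / (dx/x).
    # The percentages below are only those quoted in the abstract, not original data.
    def relative_sensitivity(dx_over_x, dy_over_y):
        return dy_over_y / dx_over_x

    s_flow = relative_sensitivity(0.10, 0.0058)    # 10% inlet flow -> 0.58% power
    s_temp = relative_sensitivity(0.025, 0.0187)   # 2.5% lead temperature -> 1.87% power
    print(f"S(power | inlet flow)        = {s_flow:.3f}")
    print(f"S(power | inlet temperature) = {s_temp:.3f}")
    ```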

  7. Improved sensitivity and limit-of-detection of lateral flow devices using spatial constrictions of the flow-path.

    Science.gov (United States)

    Katis, Ioannis N; He, Peijun J W; Eason, Robert W; Sones, Collin L

    2018-05-03

    We report on the use of a laser-direct write (LDW) technique that allows the fabrication of lateral flow devices with enhanced sensitivity and limit of detection. This manufacturing technique comprises the dispensing of a liquid photopolymer at specific regions of a nitrocellulose membrane and its subsequent photopolymerisation to create impermeable walls inside the volume of the membrane. These polymerised structures are intentionally designed to create fluidic channels which are constricted over a specific length that spans the test zone within which the sample interacts with pre-deposited reagents. Experiments were conducted to show how these constrictions alter the fluid flow rate and the test zone area within the constricted channel geometries. The slower flow rate and smaller test zone area result in the increased sensitivity and lowered limit of detection for these devices. We have quantified these via the improved performance of a C-Reactive Protein (CRP) sandwich assay on our lateral flow devices with constricted flow paths which demonstrate an improvement in its sensitivity by 62x and in its limit of detection by 30x when compared to a standard lateral flow CRP device. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.

  8. Sensitivity of Regulated Flow Regimes to Climate Change in the Western United States

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Tian [Pacific Northwest National Laboratory, Richland, Washington; Voisin, Nathalie [Pacific Northwest National Laboratory, Richland, Washington; Leng, Guoyong [Pacific Northwest National Laboratory, Richland, Washington; Huang, Maoyi [Pacific Northwest National Laboratory, Richland, Washington; Kraucunas, Ian [Pacific Northwest National Laboratory, Richland, Washington

    2018-03-01

    Water management activities or flow regulations modify water fluxes at the land surface and affect water resources in space and time. We hypothesize that flow regulations change the sensitivity of river flow to climate change with respect to unmanaged water resources. Quantifying these changes in sensitivity could help elucidate the impacts of water management at different spatiotemporal scales and inform climate adaptation decisions. In this study, we compared the emergence of significant changes in natural and regulated river flow regimes across the Western United States from simulations driven by multiple climate models and scenarios. We find that significant climate change-induced alterations in natural flow do not cascade linearly through water management activities. At the annual time scale, 50% of the Hydrologic Unit Code 4 (HUC4) sub-basins in the Western U.S. tend to have regulated flow regimes that are more sensitive to climate change than their natural flow regimes. Seasonality analyses show that the sensitivity varies markedly across seasons. We also find that the sensitivity is related to the level of water management. For the 35% of HUC4 sub-basins with the highest level of water management, the summer and winter flows tend to show a heightened sensitivity to climate change due to the complexity of joint reservoir operations. We further demonstrate that the impacts of considering water management in models are comparable to those arising from uncertainties across climate models and emission scenarios. This motivates further climate adaptation research on the nonlinear effects of climate change transmitted through water management activities.

  9. Uncertainty analysis of power monitoring transit time ultrasonic flow meters

    International Nuclear Information System (INIS)

    Orosz, A.; Miller, D. W.; Christensen, R. N.; Arndt, S.

    2006-01-01

    A general uncertainty analysis is applied to chordal, transit time ultrasonic flow meters that are used in nuclear power plant feedwater loops. This investigation focuses on relationships between the major parameters of the flow measurement. For this study, mass flow rate is divided into three components: profile factor, density, and a form of volumetric flow rate. All system parameters are used to calculate values for these three components. Uncertainty is analyzed using a perturbation method. Sensitivity coefficients for major system parameters are shown, and these coefficients are applicable to a range of ultrasonic flow meters used in similar applications. Also shown is the uncertainty to be expected for density, along with its relationship to other system uncertainties. One other conclusion is that pipe diameter sensitivity coefficients may be a function of the calibration technique used. (authors)
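    A minimal sketch of a perturbation-based uncertainty analysis of this kind is given below; the placeholder mass-flow model, nominal values, and standard uncertainties are invented for illustration and are not the meter's actual equations or data.

    ```python
    import numpy as np

    def mass_flow(params):
        pf, rho, qv = params   # profile factor, density, volumetric flow (placeholder model)
        return pf * rho * qv

    def sensitivity_coefficients(f, params, rel_step=1e-6):
        """Central-difference partial derivatives df/dx_i (the perturbation method)."""
        params = np.asarray(params, dtype=float)
        coeffs = np.zeros_like(params)
        for i, p in enumerate(params):
            h = rel_step * max(abs(p), 1.0)
            up, dn = params.copy(), params.copy()
            up[i] += h
            dn[i] -= h
            coeffs[i] = (f(up) - f(dn)) / (2.0 * h)
        return coeffs

    nominal = [0.98, 740.0, 1.2]          # assumed nominal values
    u = np.array([0.005, 1.5, 0.004])     # assumed standard uncertainties
    c = sensitivity_coefficients(mass_flow, nominal)
    print("sensitivity coefficients:", c)
    print("combined standard uncertainty:", np.sqrt(np.sum((c * u) ** 2)))
    ```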

  10. Analysis of the cavitating flow induced by an ultrasonic horn – Numerical 3D simulation for the analysis of vapour structures and the assessment of erosion-sensitive areas

    Directory of Open Access Journals (Sweden)

    Mottyll Stephan

    2014-03-01

    This paper reports the outcome of a numerical study of ultrasonic cavitation using a CFD flow algorithm based on a compressible density-based finite volume method with a low-Mach-number consistent flux function and explicit time integration [15; 18], in combination with an erosion-detecting flow analysis procedure. The model is validated against erosion data for an ultrasonic horn at different gap widths between the horn tip and a counter sample, a configuration that has been intensively investigated in previous material studies at Ruhr University Bochum [23], as well as against first in-house optical flow measurement data presented in a companion paper [13]. Flow features such as subharmonic cavitation oscillation frequencies and constricted vapour cloud structures can also be observed in the vapour regions predicted by our simulation as well as in the detected collapse event field (collapse detector [12]). With a statistical analysis of transient wall loads we can qualitatively determine the erosion-sensitive areas. Our simulation method can reproduce the influence of the gap width on the vapour structure and on the location of cavitation erosion.

  11. Sensitivity analysis of MIDAS tests using SPACE code. Effect of nodalization

    International Nuclear Information System (INIS)

    Eom, Shin; Oh, Seung-Jong; Diab, Aya

    2018-01-01

    The nodalization sensitivity analysis for the ECCS (Emergency Core Cooling System) bypass phenomena was performed using the SPACE (Safety and Performance Analysis CodE) thermal hydraulic analysis computer code. The results of the MIDAS (Multi-dimensional Investigation in Downcomer Annulus Simulation) test were used. The MIDAS test was conducted by KAERI (Korea Atomic Energy Research Institute) for the performance evaluation of the ECC (Emergency Core Cooling) bypass phenomenon in the DVI (Direct Vessel Injection) system. The main aim of this study is to examine the sensitivity of the SPACE code results to the number of thermal hydraulic channels used to model the annulus region in the MIDAS experiment. The numerical model involves three nodalization cases (4, 6, and 12 channels), and the results show that the effect of nodalization on the bypass fraction for the high steam flow rate MIDAS tests is minimal. For computational efficiency, a 4-channel representation is recommended for the SPACE code nodalization. For the low steam flow rate tests, the SPACE code over-predicts the bypass fraction irrespective of the nodalization finesse. The over-prediction at low steam flow may be attributed to the difficulty of accurately representing the flow regime in the vicinity of the broken cold leg.

  12. Sensitivity analysis of MIDAS tests using SPACE code. Effect of nodalization

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Shin; Oh, Seung-Jong; Diab, Aya [KEPCO International Nuclear Graduate School (KINGS), Ulsan (Korea, Republic of). Dept. of NPP Engineering

    2018-02-15

    The nodalization sensitivity analysis for the ECCS (Emergency Core Cooling System) bypass phenomena was performed using the SPACE (Safety and Performance Analysis CodE) thermal hydraulic analysis computer code. The results of the MIDAS (Multi-dimensional Investigation in Downcomer Annulus Simulation) test were used. The MIDAS test was conducted by KAERI (Korea Atomic Energy Research Institute) for the performance evaluation of the ECC (Emergency Core Cooling) bypass phenomenon in the DVI (Direct Vessel Injection) system. The main aim of this study is to examine the sensitivity of the SPACE code results to the number of thermal hydraulic channels used to model the annulus region in the MIDAS experiment. The numerical model involves three nodalization cases (4, 6, and 12 channels), and the results show that the effect of nodalization on the bypass fraction for the high steam flow rate MIDAS tests is minimal. For computational efficiency, a 4-channel representation is recommended for the SPACE code nodalization. For the low steam flow rate tests, the SPACE code over-predicts the bypass fraction irrespective of the nodalization finesse. The over-prediction at low steam flow may be attributed to the difficulty of accurately representing the flow regime in the vicinity of the broken cold leg.

  13. A Highly Sensitive Multicommuted Flow Analysis Procedure for Photometric Determination of Molybdenum in Plant Materials without a Solvent Extraction Step

    Directory of Open Access Journals (Sweden)

    Felisberto G. Santos

    2017-01-01

    A highly sensitive analytical procedure for photometric determination of molybdenum in plant materials was developed and validated. This procedure is based on the reaction of Mo(V) with thiocyanate ions (SCN−) in acidic medium to form a compound that can be monitored at 474 nm, and was implemented employing a multicommuted flow analysis setup. Photometric detection was performed using an LED-based photometer coupled to a flow cell with a long optical path length (200 mm) to achieve high sensitivity, allowing Mo(V) determination at a level of μg L−1 without the use of an organic solvent extraction step. After optimization of operational conditions, samples of digested plant materials were analyzed employing the proposed procedure. The accuracy was assessed by comparing the obtained results with those of a reference method, with agreement observed at the 95% confidence level. In addition, a detection limit of 9.1 μg L−1, a linear response (r = 0.9969) over the concentration range of 50-500 μg L−1, generation of only 3.75 mL of waste per determination, and a sampling rate of 51 determinations per hour were achieved.
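    The calibration and detection-limit arithmetic behind figures like these can be sketched as follows; the absorbance readings and blank standard deviation are invented, and only the 50-500 μg L−1 working range mirrors the abstract.

    ```python
    import numpy as np

    conc = np.array([50, 100, 200, 300, 400, 500], dtype=float)        # ug/L Mo(V), assumed standards
    absorbance = np.array([0.021, 0.043, 0.088, 0.131, 0.176, 0.219])  # assumed readings

    slope, intercept = np.polyfit(conc, absorbance, 1)   # Beer-Lambert-type linear calibration
    r = np.corrcoef(conc, absorbance)[0, 1]

    blank_sd = 0.0013                    # assumed standard deviation of blank readings
    lod = 3.0 * blank_sd / slope         # common 3-sigma/slope detection-limit estimate
    print(f"slope = {slope:.3e} AU per ug/L, r = {r:.4f}, LOD ~ {lod:.1f} ug/L")
    ```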

  14. Modeling and sensitivity analysis on the transport of aluminum oxide nanoparticles in saturated sand: effects of ionic strength, flow rate, and nanoparticle concentration.

    Science.gov (United States)

    Rahman, Tanzina; Millwater, Harry; Shipley, Heather J

    2014-11-15

    Aluminum oxide nanoparticles have been widely used in various consumer products and there are growing concerns regarding their exposure in the environment. This study deals with the modeling, sensitivity analysis and uncertainty quantification of one-dimensional transport of nano-sized (~82 nm) aluminum oxide particles in saturated sand. The transport of aluminum oxide nanoparticles was modeled using a two-kinetic-site model with a blocking function. The modeling was done at different ionic strengths, flow rates, and nanoparticle concentrations. The two sites representing fast and slow attachments along with a blocking term yielded good agreement with the experimental results from the column studies of aluminum oxide nanoparticles. The same model was used to simulate breakthrough curves under different conditions using experimental data and calculated 95% confidence bounds of the generated breakthroughs. The sensitivity analysis results showed that slow attachment was the most sensitive parameter for high influent concentrations (e.g. 150 mg/L Al2O3) and the maximum solid phase retention capacity (related to blocking function) was the most sensitive parameter for low concentrations (e.g. 50 mg/L Al2O3). Copyright © 2014 Elsevier B.V. All rights reserved.
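    A drastically simplified, hedged sketch of the attachment kinetics described above (two kinetic sites plus a blocking term) is shown below; it omits advection and dispersion in the column, and all rate constants, the retention capacity and the initial concentration are invented for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k_fast, k_slow = 0.5, 0.05      # assumed attachment rate coefficients (1/h)
    s_max = 20.0                    # assumed maximum retention on the blocking-limited site

    def rhs(t, y):
        c, s_fast, s_slow = y                       # aqueous conc. and retained amounts
        psi = max(1.0 - s_fast / s_max, 0.0)        # blocking function on the fast site
        r_fast = k_fast * psi * c
        r_slow = k_slow * c
        return [-(r_fast + r_slow), r_fast, r_slow]

    sol = solve_ivp(rhs, (0.0, 48.0), [150.0, 0.0, 0.0])   # e.g. 150 mg/L Al2O3 initially
    c_end, s_fast_end, s_slow_end = sol.y[:, -1]
    print(f"after 48 h: c = {c_end:.1f}, fast site = {s_fast_end:.1f}, slow site = {s_slow_end:.1f}")
    ```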

  15. Sensitivity/uncertainty analysis of a borehole scenario comparing Latin Hypercube Sampling and deterministic sensitivity approaches

    International Nuclear Information System (INIS)

    Harper, W.V.; Gupta, S.K.

    1983-10-01

    A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of the deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited-number-of-parameters capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
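    The statistical branch of this comparison can be illustrated with a minimal Latin Hypercube Sampling sketch; the parameter names, ranges and sample size below are placeholders, not the borehole-model inputs.

    ```python
    import numpy as np

    def latin_hypercube(n_samples, bounds, seed=None):
        """Return an (n_samples, n_params) LHS design scaled to the given bounds."""
        rng = np.random.default_rng(seed)
        bounds = np.asarray(bounds, dtype=float)
        n_params = bounds.shape[0]
        # one uniform draw per stratum per parameter, then shuffle the strata column-wise
        u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
        for j in range(n_params):
            u[:, j] = u[rng.permutation(n_samples), j]
        return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

    bounds = [(1e-14, 1e-12),   # illustrative permeability range [m^2]
              (0.01, 0.30),     # illustrative porosity range [-]
              (1.0, 50.0)]      # illustrative head-gradient scale
    design = latin_hypercube(100, bounds, seed=42)
    print(design.shape)         # (100, 3) parameter sets to feed the flow code
    ```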

  16. Sensitivity analysis of a coupled hydro-mechanical paleo-climate model of density-dependent groundwater flow in discretely fractured crystalline rock

    International Nuclear Information System (INIS)

    Normani, S.D.; Sykes, J.F.

    2011-01-01

    A high resolution three-dimensional sub-regional scale (104 km2) density-dependent, discretely fractured groundwater flow model with hydro-mechanical coupling and pseudo-permafrost was developed from a larger 5734 km2 regional-scale groundwater flow model of a Canadian Shield setting. The objective of the work is to determine the sensitivity of modelled groundwater system evolution to the hydro-mechanical parameters. The discrete fracture dual continuum numerical model FRAC3DVS-OPG was used for all simulations. A discrete fracture network model delineated from surface features was superimposed onto an approximate 790 000 element domain mesh with approximately 850 000 nodes. Orthogonal fracture faces (between adjacent finite element grid blocks) were used to best represent the irregular discrete fracture zone network. Interconnectivity of the permeable fracture zones is an important pathway for the possible migration and subsequent reduction in groundwater and contaminant residence times. The crystalline rock matrix between these structural discontinuities was assigned mechanical and flow properties characteristic of those reported for the Canadian Shield. The variation of total dissolved solids with depth was assigned using literature data for the Canadian Shield. Performance measures for the sensitivity analysis include equivalent freshwater heads, environmental heads, linear velocities, and depth of penetration by conservative non-decaying tracers released at the surface. A 121 000 year North American continental scale paleo-climate simulation was applied to the domain with ice-sheet histories estimated by the University of Toronto Glacial Systems Model (UofT GSM). Hydro-mechanical coupling between the rock matrix and the pore fluid, due to the ice sheet normal stress, was included in the simulations. The flow model included the influence of vertical strain and assumed that areal loads were homogeneous. Permafrost depth was applied as a permeability reduction

  17. Application of perturbation methods for sensitivity analysis for nuclear power plant steam generators

    International Nuclear Information System (INIS)

    Gurjao, Emir Candeia

    1996-02-01

    The differential and GPT (Generalized Perturbation Theory) formalisms of perturbation theory were applied in this work to a simplified U-tube steam generator model to perform sensitivity analysis. The adjoint and importance equations, with the corresponding expressions for the sensitivity coefficients, were derived for this steam generator model. The system was numerically solved in a Fortran program, called GEVADJ, in order to calculate the sensitivity coefficients. A transient loss of forced primary coolant in the nuclear power plant Angra-1 was used as an example case. The average and final values of the functionals (secondary pressure and enthalpy) were studied in relation to changes in the secondary feedwater flow, enthalpy and total volume of the secondary circuit. Absolute variations in the above functionals were calculated using the perturbative methods, considering variations in the feedwater flow and total secondary volume. Comparison with the same variations obtained via the direct model showed in general good agreement, demonstrating the potential of perturbative methods for sensitivity analysis of nuclear systems. (author)

  18. Descriptive analysis of the masticatory and salivary functions and gustatory sensitivity in healthy children.

    Science.gov (United States)

    Marquezin, Maria Carolina Salomé; Pedroni-Pereira, Aline; Araujo, Darlle Santos; Rosar, João Vicente; Barbosa, Taís S; Castelo, Paula Midori

    2016-08-01

    To better understand salivary and masticatory characteristics, this study evaluated the relationship among salivary parameters, bite force (BF), masticatory performance (MP) and gustatory sensitivity in healthy children. The secondary outcome was to evaluate possible gender differences. One hundred and sixteen eutrophic subjects aged 7-11 years old were evaluated, caries-free and with no definite need of orthodontic treatment. Salivary flow rate and pH, and total protein (TP), alpha-amylase (AMY), calcium (CA) and phosphate (PHO) concentrations were determined in stimulated (SS) and unstimulated saliva (US). BF and MP were evaluated using a digital gnathodynamometer and the fractional sieving method, respectively. Gustatory sensitivity was determined by detecting the four primary tastes (sweet, salty, sour and bitter) at three different concentrations. Data were evaluated using descriptive statistics, Mann-Whitney/t-test, Spearman correlation and multiple regression analysis, considering α = 0.05. A significant positive correlation between taste and age was observed. CA and PHO concentrations correlated negatively with salivary flow and pH; sweet taste scores correlated with AMY concentrations and bitter taste sensitivity correlated with US flow rate. A relationship among salivary and masticatory characteristics and gustatory sensitivity was observed. The regression analysis showed a weak relationship between the distribution of chewed particles among the different sieves and BF. The concentration of some analytes was influenced by salivary flow and pH. Age, saliva flow and AMY concentrations influenced gustatory sensitivity. In addition, salivary, masticatory and taste characteristics did not differ between genders, and only a weak relation between MP and BF was observed.

  19. Flow-sensitive type recovery in linear-log time

    DEFF Research Database (Denmark)

    Adams, Michael D.; Keep, Andrew W.; Midtgaard, Jan

    2011-01-01

    The flexibility of dynamically typed languages such as JavaScript, Python, Ruby, and Scheme comes at the cost of run-time type checks. Some of these checks can be eliminated via control-flow analysis. However, traditional control-flow analysis (CFA) is not ideal for this task as it ignores flow...

  20. Hypersonic Separated Flows About "Tick" Configurations With Sensitivity to Model Design

    Science.gov (United States)

    Moss, J. N.; O'Byrne, S.; Gai, S. L.

    2014-01-01

    This paper presents computational results obtained by applying the direct simulation Monte Carlo (DSMC) method to hypersonic nonequilibrium flow about "tick-shaped" model configurations. These test models produce a complex flow where the nonequilibrium and rarefied aspects of the flow are initially enhanced as the flow passes over an expansion surface, and the flow then encounters a compression surface that can induce flow separation. The resulting flow is such that meaningful numerical simulations must have the capability to account for a significant range of rarefaction effects; hence the application of the DSMC method in the current study, as the flow spans several regimes, including transitional, slip, and continuum. The current focus is to examine the sensitivity of both the model surface response (heating, friction and pressure) and the flowfield structure to assumptions regarding surface boundary conditions and, more extensively, the impact of model design as influenced by the leading edge configuration as well as the geometrical features of the expansion and compression surfaces. Numerical results indicate a strong sensitivity to both the extent of the leading edge sharpness and the magnitude of the leading edge bevel angle. Also, the length of the expansion surface for a fixed compression surface has a significant impact on the extent of separated flow.

  1. Flow Visualization at Cryogenic Conditions Using a Modified Pressure Sensitive Paint Approach

    Science.gov (United States)

    Watkins, A. Neal; Goad, William K.; Obara, Clifford J.; Sprinkle, Danny R.; Campbell, Richard L.; Carter, Melissa B.; Pendergraft, Odis C., Jr.; Bell, James H.; Ingram, JoAnne L.; Oglesby, Donald M.

    2005-01-01

    A modification of the Pressure Sensitive Paint (PSP) method was used to visualize streamlines on a Blended Wing Body (BWB) model at full-scale flight Reynolds numbers. In order to achieve these conditions, the tests were carried out in the National Transonic Facility operating under cryogenic conditions in a nitrogen environment. Oxygen is required for conventional PSP measurements, and several tests have been successfully completed in nitrogen environments by injecting small amounts (typically < 3000 ppm) of oxygen into the flow. A similar technique was employed here, except that air was purged through pressure tap orifices already existing on the model surface, resulting in changes in the PSP wherever oxygen was present. The results agree quite well with predictions obtained through computational fluid dynamics (CFD) analysis, showing this to be a viable technique for visualizing flows without resorting to more invasive procedures such as oil flow or minitufts.

  2. Increased flow sensitivity from gradient recalled echoes and short TRs

    International Nuclear Information System (INIS)

    Hearshen, D.O.; Froelich, J.W.; Wehrli, F.W.; Haggar, A.M.; Shimakawa, A.

    1986-01-01

    Time-of-flight effects from flow have been characterized in spin-echo images. "Paradoxical" enhancement and flow void are observed. Similar enhancement is seen on GRASS images. With no flow void and gradients existing throughout the volume, spins experiencing radio-frequency pulses will give rise to signals even for fast flow, providing a greater velocity sensitivity. GRASS images were obtained from a volunteer with a blood pressure cuff placed over the right thigh. With the cuff inflated, flow in the popliteal vein results in signal saturation. Increasing TR increases intensity in the popliteal vein relative to other vessels. This suggests a clinical role for the technique in the assessment of slow flow.

  3. Uncertainty Quantification and Global Sensitivity Analysis of Subsurface Flow Parameters to Gravimetric Variations During Pumping Tests in Unconfined Aquifers

    Science.gov (United States)

    Maina, Fadji Zaouna; Guadagnini, Alberto

    2018-01-01

    We study the contribution of typically uncertain subsurface flow parameters to gravity changes that can be recorded during pumping tests in unconfined aquifers. We do so in the framework of a Global Sensitivity Analysis and quantify the effects of uncertainty of such parameters on the first four statistical moments of the probability distribution of gravimetric variations induced by the operation of the well. System parameters are grouped into two main categories, respectively, governing groundwater flow in the unsaturated and saturated portions of the domain. We ground our work on the three-dimensional analytical model proposed by Mishra and Neuman (2011), which fully takes into account the richness of the physical process taking place across the unsaturated and saturated zones and storage effects in a finite radius pumping well. The relative influence of model parameter uncertainties on drawdown, moisture content, and gravity changes are quantified through (a) the Sobol' indices, derived from a classical decomposition of variance and (b) recently developed indices quantifying the relative contribution of each uncertain model parameter to the (ensemble) mean, skewness, and kurtosis of the model output. Our results document (i) the importance of the effects of the parameters governing the unsaturated flow dynamics on the mean and variance of local drawdown and gravity changes; (ii) the marked sensitivity (as expressed in terms of the statistical moments analyzed) of gravity changes to the employed water retention curve model parameter, specific yield, and storage, and (iii) the influential role of hydraulic conductivity of the unsaturated and saturated zones to the skewness and kurtosis of gravimetric variation distributions. The observed temporal dynamics of the strength of the relative contribution of system parameters to gravimetric variations suggest that gravity data have a clear potential to provide useful information for estimating the key hydraulic
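    The variance-based (Sobol') part of the analysis can be illustrated with a brute-force, double-loop estimate of first-order indices; the stand-in model below is not the Mishra-Neuman solution used in the paper, and the sample sizes are arbitrary.

    ```python
    import numpy as np

    def model(x):
        # placeholder response mixing three inputs nonlinearly (inputs scaled to [0, 1))
        return np.sin(x[..., 0]) + 2.0 * x[..., 1] ** 2 + 0.5 * x[..., 0] * x[..., 2]

    def first_order_sobol(f, n_outer=500, n_inner=500, d=3, seed=0):
        """S_i = Var(E[Y | X_i]) / Var(Y), estimated by double-loop Monte Carlo."""
        rng = np.random.default_rng(seed)
        var_total = np.var(f(rng.random((n_outer * n_inner, d))))
        indices = []
        for i in range(d):
            cond_means = np.empty(n_outer)
            for k, v in enumerate(rng.random(n_outer)):   # fixed values of input i
                sample = rng.random((n_inner, d))
                sample[:, i] = v                          # freeze input i
                cond_means[k] = f(sample).mean()
            indices.append(np.var(cond_means) / var_total)
        return np.array(indices)

    print(first_order_sobol(model))   # rough first-order indices for the three inputs
    ```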

  4. Improved Diffuse Fluorescence Flow Cytometer Prototype for High Sensitivity Detection of Rare Circulating Cells In Vivo

    Science.gov (United States)

    Pestana, Noah Benjamin

    Accurate quantification of circulating cell populations is important in many areas of pre-clinical and clinical biomedical research, for example, in the study of cancer metastasis or the immune response following tissue and organ transplants. Normally this is done "ex vivo" by drawing and purifying a small volume of blood and then analyzing it with flow cytometry, hemocytometry or microfluidic devices, but the sensitivity of these techniques is poor and the process of handling samples has been shown to affect cell viability and behavior. More recently, "in vivo flow cytometry" (IVFC) techniques have been developed in which fluorescently-labeled cells flowing in a small blood vessel in the ear or retina are analyzed, but the sensitivity is generally poor due to the small sampling volume. To address this, our group recently developed a method known as "Diffuse Fluorescence Flow Cytometry" (DFFC) that allows detection and counting of rare circulating cells with diffuse photons, offering extremely high single-cell counting sensitivity. In this thesis, an improved DFFC prototype was designed and validated. The chief improvements were three-fold: i) improved optical collection efficiency, ii) improved detection electronics, and iii) development of a method to mitigate motion artifacts during in vivo measurements. In combination, these improvements yielded an overall instrument detection sensitivity better than 1 cell/mL in vivo, which is the most sensitive IVFC system reported to date. Second, the development and validation of a low-cost microfluidic device reader for analysis of ocular fluids is described. We demonstrate that this device has equivalent or better sensitivity and accuracy compared to a fluorescence microscope, but at an order-of-magnitude lower cost and with simplified operation. Future improvements to both instruments are also discussed.

  5. Sensitivity to draught in turbulent air flows

    Energy Technology Data Exchange (ETDEWEB)

    Todde, V

    1998-09-01

    Even though a ventilation system is designed to supply air flows at constant low velocity and controlled temperature, the resulting air movement in rooms is strongly characterised by random fluctuations. When an air flow is supplied from an inlet, a shear layer forms between the incoming air and the still air in the room, and large-scale vortices develop by coalescence of the vorticity shed at the inlet of the air supply. After a characteristic downstream distance, large-scale vortices lose their identity because of the development of cascading eddies and transition to turbulence. The interaction of these vortical structures gives rise to a complicated three-dimensional air movement affected by fluctuations whose frequencies can vary from fractions of a Hz to several kHz. The perception of, and sensitivity to, the cooling effect enhanced by these air movements depend on a number of interacting factors: the physical properties of the air flow, the part and extension of the skin surface exposed to the air flow, the exposure duration, the global thermal condition, and the gender and posture of the person. Earlier studies were concerned with the percentage of dissatisfied subjects as a function of air velocity and temperature. Recently, experimental observations have shown that the fluctuations, the turbulence intensity and the direction of the air velocity also have an important impact on draught discomfort. Two experimental investigations were developed to observe the human reaction to horizontal air movements on bared skin surfaces, the hands and the neck. Attention was concentrated on the effects of the relative turbulence intensity of the air velocity and the exposure duration on perception of and sensitivity to the air movement. The air jet flows adopted for the draught experiment on the neck were also the object of an experimental study. This experiment was designed to observe the centre-line velocity of an isothermal circular air jet as a function of the velocity properties at the outlet.

  6. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Disturbed conditions

    International Nuclear Information System (INIS)

    HELTON, JON CRAIG; BEAN, J.E.; ECONOMY, K.; GARNER, J.W.; MACKINNON, ROBERT J.; MILLER, JOEL D.; SCHREIBER, J.D.; VAUGHN, PALMER

    2000-01-01

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability
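    The rank-transformation idea mentioned above can be sketched with Spearman rank correlations between sampled inputs and an output; the made-up monotone "repository pressure" function and the input ranges are purely illustrative, not the PA models or data.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(7)
    n = 300
    borehole_perm = 10 ** rng.uniform(-14, -11, n)   # assumed log-uniform range [m^2]
    gas_gen_rate = rng.uniform(0.1, 2.0, n)          # arbitrary units
    brine_inflow = rng.uniform(0.0, 1.0, n)          # arbitrary units

    # invented monotone response standing in for repository pressure [Pa]
    pressure = 8.0e6 * gas_gen_rate / (1.0 + 1.0e13 * borehole_perm) + 5.0e5 * brine_inflow

    for name, x in [("borehole permeability", borehole_perm),
                    ("gas generation rate", gas_gen_rate),
                    ("brine inflow", brine_inflow)]:
        rho, _ = spearmanr(x, pressure)
        print(f"rank correlation with pressure, {name}: {rho:+.2f}")
    ```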

  7. GRASP [GRound-Water Adjunct Sensitivity Program]: A computer code to perform post-SWENT [simulator for water, energy, and nuclide transport] adjoint sensitivity analysis of steady-state ground-water flow: Technical report

    International Nuclear Information System (INIS)

    Wilson, J.L.; RamaRao, B.S.; McNeish, J.A.

    1986-11-01

    GRASP (GRound-Water Adjunct Sensitivity Program) computes measures of the behavior of a ground-water system and the system's performance for waste isolation, and estimates the sensitivities of these measures to system parameters. The computed measures are referred to as "performance measures" and include weighted squared deviations of computed and observed pressures or heads, local Darcy velocity components and magnitudes, boundary fluxes, and travel distance and time along travel paths. The sensitivities are computed by the adjoint method and are exact derivatives of the performance measures with respect to the parameters for the modeled system, taken about the assumed parameter values. GRASP presumes steady-state, saturated groundwater flow, and post-processes the results of a multidimensional (1-D, 2-D, 3-D) finite-difference flow code. This document describes the mathematical basis for the model, the algorithms and solution techniques used, and the computer code design. The implementation of GRASP is verified with simple one- and two-dimensional flow problems, for which analytical expressions of performance measures and sensitivities are derived. The linkage between GRASP and multidimensional finite-difference flow codes is described. This document also contains a detailed user's manual. The use of GRASP to evaluate nuclear waste disposal issues has been emphasized throughout the report. The performance measures and their sensitivities can be employed to assist in directing data collection programs, expedite model calibration, and objectively determine the sensitivity of projected system performance to parameters
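    The adjoint idea behind GRASP-style sensitivities can be sketched for a steady-state linear system A(k) h = b with a scalar performance measure J = c^T h; the tiny 1-D "aquifer", the dependence of A on the conductivity k, and all numbers below are invented for illustration, not GRASP itself.

    ```python
    import numpy as np

    def assemble(k, n=3):
        """Simple 1-D finite-difference conductance matrix scaled by conductivity k."""
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 2.0 * k
            if i > 0:
                A[i, i - 1] = -k
            if i < n - 1:
                A[i, i + 1] = -k
        return A

    k = 1.5
    b = np.array([1.0, 0.0, 0.5])       # fixed sources/boundary inflows (assumed)
    c = np.array([0.0, 1.0, 0.0])       # J = head in the middle cell

    A = assemble(k)
    h = np.linalg.solve(A, b)           # forward solve
    lam = np.linalg.solve(A.T, c)       # adjoint solve
    dA_dk = assemble(1.0)               # A is linear in k, so dA/dk = A(k=1)
    dJ_dk_adjoint = -lam @ (dA_dk @ h)  # dJ/dk = -lambda^T (dA/dk) h (b independent of k)

    # finite-difference check of the adjoint derivative
    eps = 1e-6
    J = lambda kk: c @ np.linalg.solve(assemble(kk), b)
    print(dJ_dk_adjoint, (J(k + eps) - J(k - eps)) / (2 * eps))
    ```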

  8. Sensitive flow-injection spectrophotometric analysis of bromopride

    Science.gov (United States)

    Lima, Liliane Spazzapam; Weinert, Patrícia Los; Pezza, Leonardo; Pezza, Helena Redigolo

    2014-12-01

    A flow injection spectrophotometric procedure employing merging zones is proposed for direct bromopride determination in pharmaceutical formulations and biological fluids. The proposed method is based on the reaction between bromopride and p-dimethylaminocinnamaldehyde (p-DAC) in acid medium, in the presence of sodium dodecyl sulfate (SDS), resulting in formation of a violet product (λmax = 565 nm). Experimental design methodologies were used to optimize the experimental conditions. The Beer-Lambert law was obeyed in a bromopride concentration range of 3.63 × 10-7 to 2.90 × 10-5 mol L-1, with a correlation coefficient (r) of 0.9999. The limits of detection and quantification were 1.07 × 10-7 and 3.57 × 10-7 mol L-1, respectively. The proposed method was successfully applied to the determination of bromopride in pharmaceuticals and human urine, and recoveries of the drug from these media were in the ranges 99.6-101.2% and 98.6-102.1%, respectively. This new flow injection procedure does not require any sample pretreatment steps.

  9. 4D-MR flow analysis in patients after repair for tetralogy of Fallot

    International Nuclear Information System (INIS)

    Geiger, J.; Markl, M.; Jung, B.; Langer, M.; Grohmann, J.; Stiller, B.; Arnold, R.

    2011-01-01

    Comprehensive analysis of haemodynamics by 3D flow visualisation and retrospective flow quantification in patients after repair of tetralogy of Fallot (TOF). Time-resolved flow-sensitive 4D MRI (spatial resolution ≈ 2.5 mm, temporal resolution = 38.4 ms) was acquired in ten patients after repair of TOF and in four healthy controls. Data analysis included the evaluation of haemodynamics in the aorta, the pulmonary trunk (TP) and left (lPA) and right (rPA) pulmonary arteries by 3D blood flow visualisation using particle traces, and quantitative measurements of flow velocity. 3D visualisation of whole heart haemodynamics provided a comprehensive overview on flow pattern changes in TOF patients, mainly alterations in flow velocity, retrograde flow and pathological vortices. There was consistently higher blood flow in the rPA of the patients (rPA/lPA flow ratio: 2.6 ± 2.5 vs. 1.1 ± 0.1 in controls). Systolic peak velocity in the TP was higher in patients (1.9 m/s ± 0.7 m/s) than controls (0.9 m/s ± 0.1 m/s). 4D flow-sensitive MRI permits the comprehensive evaluation of blood flow characteristics in patients after repair of TOF. Altered flow patterns for different surgical techniques in the small patient cohort may indicate its value for patient monitoring and potentially identifying optimal surgical strategies. (orig.)

  10. 4D-MR flow analysis in patients after repair for tetralogy of Fallot

    Energy Technology Data Exchange (ETDEWEB)

    Geiger, J.; Markl, M.; Jung, B.; Langer, M. [University Hospital Freiburg, Department of Radiology, Medical Physics, Freiburg (Germany); Grohmann, J.; Stiller, B.; Arnold, R. [University Hospital Freiburg, Department of Congenital Heart Disease and Pediatric Cardiology, Freiburg (Germany)

    2011-08-15

    Comprehensive analysis of haemodynamics by 3D flow visualisation and retrospective flow quantification in patients after repair of tetralogy of Fallot (TOF). Time-resolved flow-sensitive 4D MRI (spatial resolution ≈ 2.5 mm, temporal resolution = 38.4 ms) was acquired in ten patients after repair of TOF and in four healthy controls. Data analysis included the evaluation of haemodynamics in the aorta, the pulmonary trunk (TP) and left (lPA) and right (rPA) pulmonary arteries by 3D blood flow visualisation using particle traces, and quantitative measurements of flow velocity. 3D visualisation of whole heart haemodynamics provided a comprehensive overview on flow pattern changes in TOF patients, mainly alterations in flow velocity, retrograde flow and pathological vortices. There was consistently higher blood flow in the rPA of the patients (rPA/lPA flow ratio: 2.6 ± 2.5 vs. 1.1 ± 0.1 in controls). Systolic peak velocity in the TP was higher in patients (1.9 m/s ± 0.7 m/s) than controls (0.9 m/s ± 0.1 m/s). 4D flow-sensitive MRI permits the comprehensive evaluation of blood flow characteristics in patients after repair of TOF. Altered flow patterns for different surgical techniques in the small patient cohort may indicate its value for patient monitoring and potentially identifying optimal surgical strategies. (orig.)

  11. Flow chemistry vs. flow analysis.

    Science.gov (United States)

    Trojanowicz, Marek

    2016-01-01

    The flow mode of conducting chemical syntheses facilitates chemical processes through the use of on-line analytical monitoring of the occurring reactions, the application of solid-supported reagents to minimize downstream processing, and computerized control systems to perform multi-step sequences. These are exactly the same attributes as those of flow analysis, which has had a solid place in modern analytical chemistry for the last several decades. The following review paper, based on 131 references to original papers as well as pre-selected reviews, presents basic aspects, selected instrumental achievements and developmental directions of the rapidly growing field of continuous flow chemical synthesis. Interestingly, many of them might potentially be employed in the development of new methods in flow analysis too. In this paper, examples of the application of flow analytical measurements for on-line monitoring of flow syntheses are indicated and perspectives for a wider application of real-time analytical measurements are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Sensitivity analysis using two-dimensional models of the Whiteshell geosphere

    Energy Technology Data Exchange (ETDEWEB)

    Scheier, N. W.; Chan, T.; Stanchell, F. W.

    1992-12-01

    As part of the assessment of the environmental impact of disposing of immobilized nuclear fuel waste in a vault deep within plutonic rock, detailed modelling of groundwater flow, heat transport and contaminant transport through the geosphere is being performed using the MOTIF finite-element computer code. The first geosphere model is being developed using data from the Whiteshell Research Area, with a hypothetical disposal vault at a depth of 500 m. This report briefly describes the conceptual model and then describes in detail the two-dimensional simulations used to help initially define an adequate three-dimensional representation, select a suitable form for the simplified model to be used in the overall systems assessment with the SYVAC computer code, and perform some sensitivity analysis. The sensitivity analysis considers variations in the rock layer properties, variations in fracture zone configurations, the impact of grouting a vault/fracture zone intersection, and variations in boundary conditions. This study shows that the configuration of major fracture zones can have a major influence on groundwater flow patterns. The flows in the major fracture zones can have high velocities and large volumes. The proximity of the radionuclide source to a major fracture zone may strongly influence the time it takes for a radionuclide to be transported to the surface. (auth)

  13. Data uncertainties in material flow analysis: Municipal solid waste management system in Maputo City, Mozambique.

    Science.gov (United States)

    Dos Muchangos, Leticia Sarmento; Tokai, Akihiro; Hanashima, Atsuko

    2017-01-01

    Material flow analysis can effectively trace and quantify the flows and stocks of materials such as solid wastes in urban environments. However, the integrity of material flow analysis results is compromised by data uncertainties, an occurrence that is particularly acute in low- and middle-income study contexts. This article investigates the uncertainties in the input data and their effects in a material flow analysis study of municipal solid waste management in Maputo City, the capital of Mozambique. The analysis is based on data collected in 2007 and 2014. Initially, the uncertainties and their ranges were identified using the data classification model of Hedbrant and Sörme, followed by the application of sensitivity analysis. The average lower and upper bounds were 29% and 71%, respectively, in 2007, increasing to 41% and 96%, respectively, in 2014. This indicates higher data quality in 2007 than in 2014. The results also show not only that data are partially missing for established flows such as waste generation to final disposal, but also that they are limited and inconsistent for emerging flows and processes such as waste generation to material recovery (hence the wider variation in the 2014 parameters). The sensitivity analysis further clarified the most influential parameter, the degree of influence of each parameter on the waste flows, and the interrelations among the parameters. The findings highlight the need for an integrated municipal solid waste management approach to avoid transferring or worsening the negative impacts among the parameters and flows.
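    A one-at-a-time sensitivity check on a simple waste mass balance, in the spirit of the analysis above, can be sketched as follows; the flows and uncertainty ranges are invented placeholders, not the Maputo City figures.

    ```python
    # generation = disposal + recovery + unaccounted (all values assumed, in t/day)
    flows = {"generation": 1000.0, "disposal": 700.0, "recovery": 50.0}
    uncertainty = {"generation": 0.40, "disposal": 0.30, "recovery": 0.90}  # assumed relative bounds

    def unaccounted(f):
        return f["generation"] - f["disposal"] - f["recovery"]

    base = unaccounted(flows)
    for name, rel in uncertainty.items():
        for sign in (-1, +1):
            perturbed = dict(flows)
            perturbed[name] *= 1.0 + sign * rel
            change = unaccounted(perturbed) - base
            print(f"{name} {sign * rel:+.0%}: unaccounted flow changes by {change:+.1f} t/day")
    ```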

  14. Systematic Evaluation of Uncertainty in Material Flow Analysis

    DEFF Research Database (Denmark)

    Laner, David; Rechberger, Helmut; Astrup, Thomas Fruergaard

    2014-01-01

    Material flow analysis (MFA) is a tool to investigate material flows and stocks in defined systems as a basis for resource management or environmental pollution control. Because of the diverse nature of sources and the varying quality and availability of data, MFA results are inherently uncertain. [...] Uncertainty analyses have received increasing attention in recent MFA studies, but systematic approaches for selection of appropriate uncertainty tools are missing. This article reviews existing literature related to handling of uncertainty in MFA studies and evaluates current practice of uncertainty analysis [...] and exploratory MFA (identification of critical parameters and system behavior). Whereas mathematically simpler concepts focusing on data uncertainty characterization are appropriate for descriptive MFAs, statistical approaches enabling more-rigorous evaluation of uncertainty and model sensitivity are needed [...]

  15. WHAT IF (Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Iulian N. BUJOREANU

    2011-01-01

    Sensitivity analysis represents such a well known and deeply analyzed subject that anyone entering the field may feel unable to add anything new. Still, there are many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons for which it has to be implemented in most analyses in decision making processes. Risk analysis is of utmost importance in dealing with resource allocation and is presented at the beginning of the paper as the initial reason to implement sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on the use and utility of sensitivity analysis. Finally, a round-up conclusion brings us to the question of the possibility of generating the future and analyzing it before it unfolds so that, when it happens, it brings less uncertainty.

  16. Investment cash flow sensitivity and financing constraints : New evidence from Indian business group firms

    NARCIS (Netherlands)

    George, R.; Kabir, Mohammed Rezaul; Qian, J.

    2011-01-01

    A controversy exists on the use of the investment-cash flow sensitivity as a measure of the financing constraints of firms. We re-examine this controversy by analyzing firms affiliated with Indian business groups. We find a strong investment-cash flow sensitivity for both group-affiliated and independent

  17. Signal flow analysis

    CERN Document Server

    Abrahams, J R; Hiller, N

    1965-01-01

    Signal Flow Analysis provides information pertinent to the fundamental aspects of signal flow analysis. This book discusses the basic theory of signal flow graphs and shows their relation to the usual algebraic equations. Organized into seven chapters, this book begins with an overview of the properties of a flow graph. The text then demonstrates how flow graphs can be applied to a wide range of electrical circuits that do not involve amplification. Other chapters deal with the parameters as well as circuit applications of transistors. This book also discusses the variety of circuits using ther

  18. Novel approach based on one-tube nested PCR and a lateral flow strip for highly sensitive diagnosis of tuberculous meningitis.

    Science.gov (United States)

    Sun, Yajuan; Chen, Jiajun; Li, Jia; Xu, Yawei; Jin, Hui; Xu, Na; Yin, Rui; Hu, Guohua

    2017-01-01

    Rapid and sensitive detection of Mycobacterium tuberculosis (M. Tb) in cerebrospinal fluid is crucial in the diagnosis of tuberculous meningitis (TBM), but conventional diagnostic technologies have limited sensitivity and specificity or are time-consuming. In this work, a novel, highly sensitive molecular diagnostic method, the one-tube nested PCR-lateral flow strip test (OTNPCR-LFST), was developed for detecting M. tuberculosis. This one-tube nested PCR maintains the sensitivity of conventional two-step nested PCR and reduces both the chance of cross-contamination and the time required for analysis. The PCR product was detected by a lateral flow strip assay, which provides a basis for migration of the test to a point-of-care (POC) microfluidic format. The developed assay had improved sensitivity compared with traditional PCR, with a limit of detection as low as 1 fg of DNA isolated from M. tuberculosis. The assay was also specific for M. tuberculosis, and no cross-reactions were found with other non-target bacteria. The application of this technique to clinical samples was successfully evaluated, and OTNPCR-LFST showed 89% overall sensitivity and 100% specificity for TBM patients. This one-tube nested PCR-lateral flow strip assay is useful for detecting M. tuberculosis in TBM due to its rapidity, high sensitivity and simple manipulation.

  19. Flow analysis of HANARO flow simulated test facility

    International Nuclear Information System (INIS)

    Park, Yong-Chul; Cho, Yeong-Garp; Wu, Jong-Sub; Jun, Byung-Jin

    2002-01-01

    The HANARO, a multi-purpose research reactor of the 30 MWth open-tank-in-pool type, has been under normal operation since its initial criticality in February 1995. Many experiments should be safely performed to promote the utilization of the HANARO. A flow simulated test facility is being developed for the endurance testing of reactivity control units over extended lifetimes and for the verification of the structural integrity of experimental facilities prior to loading in the HANARO. This test facility is composed of three major parts: a half-core structure assembly, a flow circulation system and a support system. The half-core structure assembly is composed of a plenum, a grid plate, core channels with flow tubes, a chimney and a dummy pool. The flow channels are to be filled with flow orifices to simulate the core channels. This test facility must reproduce flow characteristics similar to those of the HANARO. This paper, therefore, describes an analysis of the flow behavior of the test facility. The computational flow analysis has been performed to verify the flow structure and similarity of this test facility, assuming that the flow rates and pressure differences of the core channel are constant. The shapes of the flow orifices were determined by trial and error based on the design requirements of the core channel. A computational analysis program with the standard k-ε turbulence model was applied to the three-dimensional analysis. The results of the flow simulation showed flow characteristics similar to those of the HANARO and satisfied the design requirements of this test facility. The shape of the flow orifices used in this numerical simulation can be adapted to manufacturing requirements. The flow rate and the pressure difference through the core channel obtained from this simulation can be used as design requirements for the flow system. The analysis results will be verified against the results of the flow test after construction of the flow system. (author)

  20. Gaseous slip flow analysis of a micromachined flow sensor for ultra small flow applications

    Science.gov (United States)

    Jang, Jaesung; Wereley, Steven T.

    2007-02-01

    The velocity slip of a fluid at a wall is one of the most typical phenomena in microscale gas flows. This paper presents a flow analysis considering the velocity slip in a capacitive micro gas flow sensor based on pressure difference measurements along a microchannel. The tangential momentum accommodation coefficient (TMAC) measurements of a particular channel wall in planar microchannels will be presented while the previous micro gas flow studies have been based on the same TMACs on both walls. The sensors consist of a pair of capacitive pressure sensors, inlet/outlet and a microchannel. The main microchannel is 128.0 µm wide, 4.64 µm deep and 5680 µm long, and operated under nearly atmospheric conditions where the outlet Knudsen number is 0.0137. The sensor was fabricated using silicon wet etching, ultrasonic drilling, deep reactive ion etching (DRIE) and anodic bonding. The capacitance change of the sensor and the mass flow rate of nitrogen were measured as the inlet-to-outlet pressure ratio was varied from 1.00 to 1.24. The measured maximum mass flow rate was 3.86 × 10-10 kg s-1 (0.019 sccm) at the highest pressure ratio tested. As the pressure difference increased, both the capacitance of the differential pressure sensor and the flow rate through the main microchannel increased. The laminar friction constant f·Re, an important consideration in sensor design, varied from the incompressible no-slip case, and the mass sensitivity and resolution of this sensor were discussed. Using the current slip flow formulae, a microchannel with much smaller mass flow rates can be designed at the same pressure ratios.
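    A hedged back-of-the-envelope check of the slip-flow regime quoted above can be made from a hard-sphere mean free path; the temperature and the effective molecular diameter of nitrogen are assumed values, so the result only approximately reproduces the outlet Knudsen number of 0.0137.

    ```python
    import math

    k_B = 1.380649e-23      # J/K
    T = 296.0               # K (assumed lab temperature)
    p = 101325.0            # Pa (near-atmospheric outlet)
    d = 3.7e-10             # m, effective hard-sphere diameter of N2 (assumed)
    h = 4.64e-6             # m, channel depth from the abstract

    mean_free_path = k_B * T / (math.sqrt(2.0) * math.pi * d**2 * p)
    Kn = mean_free_path / h
    print(f"lambda ~ {mean_free_path*1e9:.0f} nm, Kn ~ {Kn:.4f} (slip regime: 0.001 < Kn < 0.1)")
    ```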

  1. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
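    The perturbation approach described above can be sketched as follows; the moving-window-average inputs are built from synthetic forcing series, and `trained_model` is a placeholder function standing in for a fitted MWA-ANN, so all weights and window lengths are assumptions.

    ```python
    import numpy as np

    def moving_window_average(series, window):
        return np.convolve(series, np.ones(window) / window, mode="same")

    def trained_model(features):
        # placeholder for a trained ANN: a fixed nonlinear mapping of the features
        return np.tanh(features @ np.array([0.8, -0.3, 0.5]))

    rng = np.random.default_rng(1)
    rainfall = rng.gamma(2.0, 2.0, size=1000)         # synthetic daily rainfall
    pumping = 5.0 + rng.normal(0.0, 0.5, size=1000)   # synthetic groundwater use

    features = np.column_stack([moving_window_average(rainfall, 30),
                                moving_window_average(rainfall, 365),
                                moving_window_average(pumping, 90)])
    baseline = trained_model(features)

    delta = 0.1 * rainfall.std()                      # perturbation size
    perturbed = features.copy()
    perturbed[:, 0] += delta                          # perturb the 30-day rainfall MWA
    sensitivity = (trained_model(perturbed) - baseline) / delta
    print("mean response sensitivity to the 30-day rainfall MWA:", sensitivity.mean())
    ```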

  2. Interfacing of differential-capacitive biomimetic hair flow-sensors for optimal sensitivity

    International Nuclear Information System (INIS)

    Dagamseh, A M K; Bruinink, C M; Wiegerink, R J; Lammerink, T S J; Droogendijk, H; Krijnen, G J M (Transducers Science and Technology Group, MESA+ Research Institute, University of Twente, PO Box 217, 7500 AE Enschede, Netherlands)

    2013-01-01

    Biologically inspired sensor designs are investigated as a possible path to surpass the performance of more traditionally engineered designs. Inspired by crickets, artificial hair sensors have shown the ability to detect minute flow signals. This paper addresses developments in the design, fabrication, interfacing and characterization of biomimetic hair flow-sensors towards sensitive high-density arrays. Improvement of the electrode design of the hair sensors has resulted in a reduction of the smallest hair movements that can be measured. In comparison to the arrayed hair-sensor design, the detection limit was arguably improved at least twelve-fold, down to 1 mm s⁻¹ airflow amplitude at 250 Hz as measured in a bandwidth of 3 kHz. The directivity pattern closely resembles a figure-of-eight. These sensitive hair-sensors open possibilities for high-resolution spatio-temporal flow pattern observations. (paper)

  3. Sensitive analysis of low-flow parameters using the hourly hydrological model for two mountainous basins in Japan

    Science.gov (United States)

    Fujimura, Kazumasa; Iseri, Yoshihiko; Kanae, Shinjiro; Murakami, Masahiro

    2014-05-01

    Accurate estimation of low flow can contribute to better water resources management and also lead to more reliable evaluation of climate change impacts on water resources. In an early study, Horton (1937) suggested that the nonlinearity of low flow related to basin storage can be expressed as Q = K S^N, where Q is the discharge, S is the storage, K is a constant and N is the exponent. A more recent study by Ding (2011) presented the general storage-discharge equation Q = K^N S^N. Since the constant K is defined as the fractional recession constant and symbolized as Au by Ando et al. (1983), in this study we rewrite this equation as Qg = Au^N Sg^N, where Qg is the groundwater runoff and Sg is the groundwater storage. Although Ding applied this equation to a short-term runoff event of less than 14 hours using the unit hydrograph method, it has not yet been applied to long-term runoff records of more than 10 years that include low flow. This study performs a sensitivity analysis of the two parameters, the constant Au and the exponent N, using an hourly hydrological model for two mountainous basins in Japan. The hourly hydrological model used in this study was presented by Fujimura et al. (2012) and comprises the Diskin-Nazimov infiltration model, groundwater recharge and groundwater runoff calculations, and a direct runoff component. The study basins are the Sameura Dam basin (SAME basin, 472 km2), located in western Japan and characterized by variable rainfall, and the Shirakawa Dam basin (SIRA basin, 205 km2), located in a region of heavy snowfall in eastern Japan, representing different climatic and geological conditions. The period of available hourly data is 20 years (1 January 1991 to 31 December 2010) for the SAME basin and 10 years (1 October 2003 to 30 September 2013) for the SIRA basin. For the sensitivity analysis, we prepared 19900 sets of the two parameters Au and N; the Au value ranges from 0.0001 to 0.0100 in steps of 0
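    To make the nonlinear storage-discharge relation concrete, the sketch below integrates a simple groundwater recession with Qg = (Au·Sg)^N and sweeps a coarse grid of the two low-flow parameters. The storage value, parameter grid and time step are illustrative, not the study's calibration set-up.

```python
import numpy as np

def recession(S0, Au, N, steps=24 * 365, dt=1.0):
    """Hourly recession of the nonlinear groundwater reservoir Qg = (Au*Sg)**N.
    S0 is the initial storage [mm]; discharge is returned in mm per hour."""
    S, Q = S0, []
    for _ in range(steps):
        q = (Au * S) ** N
        S = max(S - q * dt, 0.0)
        Q.append(q)
    return np.array(Q)

# coarse illustrative sweep over the two low-flow parameters Au and N
for Au in (0.0005, 0.0050):
    for N in (1.0, 2.0, 3.0):
        q = recession(S0=200.0, Au=Au, N=N)
        print(f"Au={Au}, N={N}: low flow after one year = {q[-1]:.3e} mm/h")
```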

  4. Quasi-laminar stability and sensitivity analyses for turbulent flows: Prediction of low-frequency unsteadiness and passive control

    Science.gov (United States)

    Mettot, Clément; Sipp, Denis; Bézard, Hervé

    2014-04-01

    This article presents a quasi-laminar stability approach to identify the dominant low frequencies in high-Reynolds-number flows and to design passive control means to shift these frequencies. The approach is based on a global linear stability analysis of mean-flows, which correspond to the time-average of the unsteady flows. Contrary to the previous work by Meliga et al. ["Sensitivity of 2-D turbulent flow past a D-shaped cylinder using global stability," Phys. Fluids 24, 061701 (2012)], we use the linearized Navier-Stokes equations based solely on the molecular viscosity (leaving aside any turbulence model and any eddy viscosity) to extract the least stable direct and adjoint global modes of the flow. Then, we compute the frequency sensitivity maps of these modes, so as to predict beforehand where a small control cylinder optimally shifts the frequency of the flow. In the case of the D-shaped cylinder studied by Parezanović and Cadot [J. Fluid Mech. 693, 115 (2012)], we show that the present approach captures the frequency of the flow well and accurately recovers the frequency control maps obtained experimentally. The results are close to those already obtained by Meliga et al., who used a more complex approach in which turbulence models played a central role. The present approach is simpler and may be applied to a broader range of flows since it is tractable as soon as mean-flows — which can be obtained either numerically from simulations (Direct Numerical Simulation (DNS), Large Eddy Simulation (LES), unsteady Reynolds-Averaged-Navier-Stokes (RANS), steady RANS) or from experimental measurements (Particle Image Velocimetry - PIV) — are available. We also discuss how the influence of the control cylinder on the mean-flow may be more accurately predicted by determining an eddy-viscosity from numerical simulations or experimental measurements. From a technical point of view, we finally show how an existing compressible numerical simulation code may be used in

  5. Sensitivity functions for uncertainty analysis: Sensitivity and uncertainty analysis of reactor performance parameters

    International Nuclear Information System (INIS)

    Greenspan, E.

    1982-01-01

    This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to ''as-built'' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory

  6. Identification of contact and respiratory sensitizers using flow cytometry

    International Nuclear Information System (INIS)

    Goutet, Michele; Pepin, Elsa; Langonne, Isabelle; Huguet, Nelly; Ban, Masarin

    2005-01-01

    Identification of the chemicals responsible for respiratory and contact allergies in the industrial area is an important occupational safety issue. This study was conducted in mice to determine whether flow cytometry is an appropriate method to analyze and differentiate the specific immune responses to the respiratory sensitizer trimellitic anhydride (TMA) and to the contact sensitizer dinitrochlorobenzene (DNCB) used at concentrations with comparable immunogenic potential. Mice were exposed twice on the flanks (days 0, 5) to 10% TMA or 1% DNCB and challenged three times on the ears (days 10, 11, 12) with 2.5% TMA or 0.25% DNCB. Flow cytometry analyses were conducted on draining lymph node cells harvested on days 13 and 18. Comparing TMA and DNCB immune responses on day 13, we found obvious differences that persisted for most of them on day 18. An increased proportion of IgE+ cells correlated to total serum IgE level and an enhancement of MHC II molecule expression were observed in the lymph node B lymphocytes from TMA-treated mice. The percentage of IL-4-producing CD4+ lymphocytes and the IL-4 receptor expression were clearly higher following TMA exposure. In contrast, higher proportions of IL-2-producing cells were detected in CD4+ and CD8+ cells from DNCB-treated mice. Both chemicals induced a significant increase in the percentage of IFN-γ-producing cells among CD8+ lymphocytes but to a greater proportion following TMA treatment. In conclusion, this study encourages the use of flow cytometry to discriminate between contact and respiratory sensitizers by identifying divergent expression of immune response parameters

  7. The source of investment cash flow sensitivity in manufacturing firms: Is it asymmetric information or agency costs?

    Directory of Open Access Journals (Sweden)

    Daniel Makina

    2016-09-01

    In the literature, positive investment cash flow sensitivity is attributed to either asymmetric information induced financing constraints or the agency costs of free cash flow. Using data from a sample of 68 manufacturing firms listed on the South African JSE, this paper contributes to the literature by investigating the source of investment cash flow sensitivity. We have found that asymmetric information explains the positive investment cash flow sensitivity better than agency costs. Furthermore, asymmetric information has been observed to be more pronounced in low-dividend-paying firms and small firms. Despite South Africa’s having a developed financial system by international standards, small firms are seen to be financially constrained. We attribute the absence of investment cash flow sensitivity due to agency costs to good corporate governance of South African listed firms. Thus the paper provides further evidence in support of the proposition in the literature that the source of investment cash flow sensitivity may depend on the institutional setting of a country, such as its corporate governance.
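    The sensitivity itself is typically estimated from a regression of investment on Tobin's Q and cash flow, both scaled by capital. The sketch below shows that baseline specification with hypothetical data and column names; it is illustrative, not the paper's exact estimation (which uses JSE panel data and sample splits by dividend payout and firm size).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel; the file and column names are placeholders.
df = pd.read_csv("firm_panel.csv")   # columns assumed: inv_k, tobin_q, cf_k

# Baseline Q-model: a positive, significant coefficient on cf_k is the
# "investment cash flow sensitivity" discussed above.
model = smf.ols("inv_k ~ tobin_q + cf_k", data=df).fit(cov_type="HC1")
print(model.summary())
```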

  8. Assessment of intracardiac flow and vorticity in the right heart of patients after repair of tetralogy of Fallot by flow-sensitive 4D MRI.

    Science.gov (United States)

    Hirtler, Daniel; Garcia, Julio; Barker, Alex J; Geiger, Julia

    2016-10-01

    To comprehensively and quantitatively analyse flow and vorticity in the right heart of patients after repair of tetralogy of Fallot (rTOF) compared with healthy volunteers. Time-resolved flow-sensitive 4D MRI was acquired in 24 rTOF patients and 12 volunteers. Qualitative flow evaluation was based on consensus reading of two observers. Quantitative analysis included segmentation of the right atrium (RA) and ventricle (RV) in a four-chamber view to extract volumes and regional haemodynamic information for computation of regional mean and peak vorticity. Right heart intra-atrial, intraventricular and outflow tract flow patterns differed considerably between rTOF patients and volunteers. Peak RA and mean RV vorticity was significantly higher in patients (p = 0.02/0.05). Significant negative correlations were found between patients' maximum and mean RV and RA vorticity and ventricular volumes (p < 0.05). • Right heart flow patterns and vorticity are altered in patients after repair of tetralogy of Fallot. • Regurgitant flow in the main pulmonary artery is associated with higher right heart vorticity.
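    The vorticity used in such an analysis is the curl of the measured velocity field, averaged or maximized within the segmented RA/RV. The numpy sketch below shows the generic calculation on a regular grid; it is not the authors' processing pipeline.

```python
import numpy as np

def vorticity(vx, vy, vz, dx, dy, dz):
    """Vorticity components (curl of v) for a 3-D velocity field sampled on a
    regular grid with arrays indexed as (x, y, z)."""
    wx = np.gradient(vz, dy, axis=1) - np.gradient(vy, dz, axis=2)
    wy = np.gradient(vx, dz, axis=2) - np.gradient(vz, dx, axis=0)
    wz = np.gradient(vy, dx, axis=0) - np.gradient(vx, dy, axis=1)
    return wx, wy, wz

# regional mean and peak vorticity magnitude inside a segmentation mask, e.g. the RV:
# wmag = np.sqrt(wx**2 + wy**2 + wz**2); wmag[mask].mean(); wmag[mask].max()
```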

  9. Pressure-sensitive paint on a truncated cone in hypersonic flow at incidences

    International Nuclear Information System (INIS)

    Yang, L.; Erdem, E.; Zare-Behtash, H.; Kontis, K.; Saravanan, S.

    2012-01-01

    Highlights: ► Global pressure map over the truncated cone is obtained at various incidence angles in Mach 5 flow. ► Successful application of AA-PSP in hypersonic flow expands operation area of this technique. ► AA-PSP reveals complex three-dimensional pattern which is difficult for transducer to obtain. ► Quantitative data provides strong correlation with colour Schlieren and oil flow results. ► High spatial resolution pressure mappings identify small scale vortices and flow separation. - Abstract: The flow over a truncated cone is a classical and fundamental problem for aerodynamic research due to its three-dimensional and complicated characteristics. The flow is made more complex when examining high angles of incidence. Recently these types of flows have drawn more attention for the purposes of drag reduction in supersonic/hypersonic flows. In the present study the flow over a truncated cone at various incidences was experimentally investigated in a Mach 5 flow with a unit Reynolds number of 13.5 × 10⁶ m⁻¹. The cone semi-apex angle is 15° and the truncation ratio (truncated length/cone length) is 0.5. The incidence of the model varied from −12° to 12° with 3° intervals relative to the freestream direction. The external flow around the truncated cone was visualised by colour Schlieren photography, while the surface flow pattern was revealed using the oil flow method. The surface pressure distribution was measured using the anodized aluminium pressure-sensitive paint (AA-PSP) technique. Both top and side views of the pressure distribution on the model surface were acquired at various incidences. AA-PSP showed high pressure sensitivity and captured the complicated flow structures which correlated well with the colour Schlieren and oil flow visualisation results.

  10. Data-flow Analysis of Programs with Associative Arrays

    Directory of Open Access Journals (Sweden)

    David Hauzar

    2014-05-01

    Dynamic programming languages, such as PHP, JavaScript, and Python, provide built-in data structures including associative arrays and objects with similar semantics—object properties can be created at run-time and accessed via arbitrary expressions. While a high level of security and safety of applications written in these languages can be of particular importance (consider a web application storing sensitive data and providing its functionality worldwide), dynamic data structures pose significant challenges for data-flow analysis, making traditional static verification methods both unsound and imprecise. In this paper, we propose a sound and precise approach for value and points-to analysis of programs with associative-array-like data structures, upon which data-flow analyses can be built. We implemented our approach in a web-application domain—in an analyzer of PHP code.

  11. Optimized and validated flow-injection spectrophotometric analysis of topiramate, piracetam and levetiracetam in pharmaceutical formulations.

    Science.gov (United States)

    Hadad, Ghada M; Abdel-Salam, Randa A; Emara, Samy

    2011-12-01

    Application of a sensitive and rapid flow injection analysis (FIA) method for determination of topiramate, piracetam, and levetiracetam in pharmaceutical formulations has been investigated. The method is based on the reaction with ortho-phthalaldehyde and 2-mercaptoethanol in a basic buffer and measurement of absorbance at 295 nm under flow conditions. Variables affecting the determination such as sample injection volume, pH, ionic strength, reagent concentrations, flow rate of reagent and other FIA parameters were optimized to produce the most sensitive and reproducible results using a quarter-fraction factorial design for five factors at two levels. Also, the method has been optimized and fully validated in terms of linearity and range, limit of detection and quantitation, precision, selectivity and accuracy. The method was successfully applied to the analysis of pharmaceutical preparations.

  12. Sensitivity analysis for near-surface disposal in argillaceous media using NAMMU-HYDROCOIN Level 3-Test case 1

    International Nuclear Information System (INIS)

    Miller, D.R.; Paige, R.W.

    1988-07-01

    HYDROCOIN is an international project for comparing groundwater flow models and modelling strategies. Level 3 of the project concerns the application of groundwater flow models to repository performance assessment with emphasis on the treatment of sensitivity and uncertainty in models and data. Level 3, test case 1 concerns sensitivity analysis of the groundwater flow around a radioactive waste repository situated in a near surface argillaceous formation. Work on this test case has been carried out by Harwell and will be reported in full in the near future. This report presents the results obtained using the computer program NAMMU. (author)

  13. Validation of diffuse correlation spectroscopy sensitivity to nicotinamide-induced blood flow elevation in the murine hindlimb using the fluorescent microsphere technique

    Science.gov (United States)

    Proctor, Ashley R.; Ramirez, Gabriel A.; Han, Songfeng; Liu, Ziping; Bubel, Tracy M.; Choe, Regine

    2018-03-01

    Nicotinamide has been shown to affect blood flow in both tumor and normal tissues, including skeletal muscle. Intraperitoneal injection of nicotinamide was used as a simple intervention to test the sensitivity of noninvasive diffuse correlation spectroscopy (DCS) to changes in blood flow in the murine left quadriceps femoris skeletal muscle. DCS was then compared with the gold-standard fluorescent microsphere (FM) technique for validation. The nicotinamide dose-response experiment showed that relative blood flow measured by DCS increased following treatment with 500- and 1000-mg / kg nicotinamide. The DCS and FM technique comparison showed that blood flow index measured by DCS was correlated with FM counts quantified by image analysis. The results of this study show that DCS is sensitive to nicotinamide-induced blood flow elevation in the murine left quadriceps femoris. Additionally, the results of the comparison were consistent with similar studies in higher-order animal models, suggesting that mouse models can be effectively employed to investigate the utility of DCS for various blood flow measurement applications.

  14. Gene flow analysis method, the D-statistic, is robust in a wide parameter space.

    Science.gov (United States)

    Zheng, Yichen; Janke, Axel

    2018-01-08

    We evaluated the sensitivity of the D-statistic, a parsimony-like method widely used to detect gene flow between closely related species. This method has been applied to a variety of taxa with a wide range of divergence times. However, its parameter space and thus its applicability to a wide taxonomic range have not been systematically studied. Divergence time, population size, time of gene flow, distance of outgroup and number of loci were examined in a sensitivity analysis. The sensitivity study shows that the primary determinant of the D-statistic is the relative population size, i.e. the population size scaled by the number of generations since divergence. This is consistent with the fact that the main confounding factor in gene flow detection is incomplete lineage sorting, which dilutes the signal. The sensitivity of the D-statistic is also affected by the direction of gene flow, size and number of loci. In addition, we examined the ability of two f-statistics to estimate the fraction of a genome affected by gene flow; while these statistics are difficult to apply to practical questions in biology due to lack of knowledge of when the gene flow happened, they can be used to compare datasets with identical or similar demographic background. The D-statistic, as a method to detect gene flow, is robust against a wide range of genetic distances (divergence times) but it is sensitive to population size. The D-statistic should only be applied with critical reservation to taxa where population sizes are large relative to branch lengths in generations.
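    For reference, the D-statistic is conventionally computed from genome-wide counts of the two discordant site patterns for the tree (((P1, P2), P3), outgroup). The sketch below uses that standard ABBA-BABA form with made-up counts; it is not code from the paper.

```python
def d_statistic(abba, baba):
    """Patterson's D from counts of ABBA and BABA site patterns. D near 0 is
    consistent with incomplete lineage sorting alone; a significant excess of
    one pattern suggests gene flow involving P3."""
    return (abba - baba) / (abba + baba)

# significance is usually assessed with a block jackknife over windows/chromosomes
print(d_statistic(abba=10423, baba=9871))   # illustrative counts only
```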

  15. Sensitivity analysis of a light gas oil deep hydrodesulfurization process via catalytic distillation

    Energy Technology Data Exchange (ETDEWEB)

    Rosales-Quintero, A.; Vargas-Villamil, F.D. [Prog. de Matematicas Aplicadas y Computacion, Instituto Mexicano del Petroleo, Eje Central Lazaro Cardenas 152, Mexico, D.F. 07330 (Mexico); Arce-Medina, E. [Instituto Politecnico Nacional, ESIQIE, Ed. 8 Col. Lindavista, Mexico, D.F. 07738 (Mexico)

    2008-01-30

    In this work, a sensitivity analysis of a light gas oil deep hydrodesulfurization catalytic distillation column is presented. The aim is to evaluate the effects of various parameters and operating conditions on the organic sulfur compound elimination by using a realistic light gas oil fraction. The hydrocarbons are modeled using pseudocompounds, while the organic sulfur compounds are modeled using model compounds, i.e., dibenzothiophene (DBT) and 4,6-dimethyl dibenzothiophene (4,6-DMDBT). These are among the most refractory sulfur compounds present in the oil fractions. A sensitivity analysis is discussed for the reflux ratio, bottom flow rate, condenser temperature, hydrogen and gas oil feed stages, catalyst loading, the reactive, stripping, and rectifying stages, feed disturbances, and multiple feeds. The results give insight into the qualitative effect of some of the operating variables and disturbances on organic sulfur elimination. In addition, they show that special attention must be given to the bottom flow rate and LGO feed rate control. (author)

  16. Hidden flows and waste processing--an analysis of illustrative futures.

    Science.gov (United States)

    Schiller, F; Raffield, T; Angus, A; Herben, M; Young, P J; Longhurst, P J; Pollard, S J T

    2010-12-14

    An existing materials flow model is adapted (using Excel and AMBER model platforms) to account for waste and hidden material flows within a domestic environment. Supported by national waste data, the implications of legislative change, domestic resource depletion and waste technology advances are explored. The revised methodology offers additional functionality for economic parameters that influence waste generation and disposal. We explore this accounting system under hypothetical future waste and resource management scenarios, illustrating the utility of the model. A sensitivity analysis confirms that imports, domestic extraction and their associated hidden flows impact mostly on waste generation. The model offers enhanced utility for policy and decision makers with regard to economic mass balance and strategic waste flows, and may promote further discussion about waste technology choice in the context of reducing carbon budgets.

  17. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
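    A compact way to picture the first (screening) step is a one-at-a-time elementary-effects calculation: parameters whose mean absolute effect is large are retained for the gPCE-based variance decomposition. The sketch below is a simplified stand-in for Morris' trajectory design, with arbitrary sampling choices, not the authors' implementation.

```python
import numpy as np

def elementary_effects(f, bounds, n_repeats=20, delta=0.1, seed=0):
    """Simplified one-at-a-time elementary-effects screening (Morris-style).
    bounds is a list of (low, high) pairs; returns mu_star and sigma per parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    k = len(lo)
    ee = np.zeros((n_repeats, k))
    for r in range(n_repeats):
        x = lo + rng.random(k) * (hi - lo) * (1.0 - delta)   # leave room for the step
        y0 = f(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta * (hi[i] - lo[i])
            ee[r, i] = (f(xp) - y0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# parameters with small mu_star can be fixed; the remaining ones go to the gPCE step
```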

  18. Investment cash flow sensitivity under managerial optimism: new evidence from NYSE panel data firms

    OpenAIRE

    Mohamed, Ezzeddine Ben; Fairchild, Richard; Bouri, Abdelfettah

    2014-01-01

    Investment cash flow sensitivity constitutes one important block of the corporate financial literature. While it is well documented in standard corporate finance, it is still young under behavioral corporate finance. In this paper, we test the investment cash flow sensitivity among panel data of American industrial firms during 1999-2010. Using Q-model of investment (Tobin, 1969), we construct and introduce a proxy of managerial optimism following Malmendier and Tate (2005a) to show the impac...

  19. Sensitivity analysis of the noble gas transport and fate model: CASCADR9

    International Nuclear Information System (INIS)

    Lindstrom, F.T.; Cawlfield, D.E.; Barker, L.E.

    1994-03-01

    CASCADR9 is a desert alluvial soil site-specific noble gas transport and fate model. Input parameters for CASCADR9 are: man-made source term, background concentration of radionuclides, radon half-life, soil porosity, period of barometric pressure wave, amplitude of barometric pressure wave, and effective eddy diffusivity. Using average flux, total flow, and radon concentration at the 40 day mark as output parameters, a sensitivity analysis for CASCADR9 is carried out under a variety of scenarios. For each scenario, the parameters to which the output parameters are most sensitive are identified

  20. Fast and sensitive trace analysis of malachite green using a surface-enhanced Raman microfluidic sensor.

    Science.gov (United States)

    Lee, Sangyeop; Choi, Junghyun; Chen, Lingxin; Park, Byungchoon; Kyong, Jin Burm; Seong, Gi Hun; Choo, Jaebum; Lee, Yeonjung; Shin, Kyung-Hoon; Lee, Eun Kyu; Joo, Sang-Woo; Lee, Kyeong-Hee

    2007-05-08

    A rapid and highly sensitive trace analysis technique for determining malachite green (MG) in a polydimethylsiloxane (PDMS) microfluidic sensor was investigated using surface-enhanced Raman spectroscopy (SERS). A zigzag-shaped PDMS microfluidic channel was fabricated for efficient mixing between MG analytes and aggregated silver colloids. Under the optimal condition of flow velocity, MG molecules were effectively adsorbed onto silver nanoparticles while flowing along the upper and lower zigzag-shaped PDMS channel. A quantitative analysis of MG was performed based on the measured peak height at 1615 cm⁻¹ in its SERS spectrum. The limit of detection, using the SERS microfluidic sensor, was found to be below the 1-2 ppb level and this low detection limit is comparable to the result of the LC-Mass detection method. In the present study, we introduce a new conceptual detection technology, using a SERS microfluidic sensor, for the highly sensitive trace analysis of MG in water.
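    The quantitation step described above is a linear calibration of the 1615 cm⁻¹ peak height against concentration, with the detection limit commonly estimated as three times the residual noise divided by the slope. The sketch below uses invented numbers purely to illustrate that workflow.

```python
import numpy as np

# Illustrative calibration data: MG concentration (ppb) vs SERS peak height at ~1615 cm^-1
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])
peak = np.array([12.0, 55.0, 98.0, 230.0, 470.0, 950.0])   # arbitrary intensity units

slope, intercept = np.polyfit(conc, peak, 1)
residual_sd = np.std(peak - (slope * conc + intercept), ddof=2)
lod = 3.0 * residual_sd / slope          # common 3-sigma estimate of the detection limit
print(f"slope = {slope:.1f} counts/ppb, LOD ≈ {lod:.2f} ppb")
```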

  1. Self-organized natural roads for predicting traffic flow: a sensitivity study

    International Nuclear Information System (INIS)

    Jiang, Bin; Zhao, Sijian; Yin, Junjun

    2008-01-01

    In this paper, we extended road-based topological analysis to both nationwide and urban road networks, and concentrated on a sensitivity study with respect to the formation of self-organized natural roads based on the Gestalt principle of good continuity. Both annual average daily traffic (AADT) and global positioning system (GPS) data were used to correlate with a series of ranking metrics including five centrality-based metrics and two PageRank metrics. It was found that there exists a tipping point from segment-based to road-based network topology in terms of correlation between ranking metrics and their traffic. To our great surprise, (1) this correlation is significantly improved if a selfish rather than utopian strategy is adopted in forming the self-organized natural roads, and (2) point-based metrics assigned by summation into individual roads tend to have a much better correlation with traffic flow than line-based metrics. These counter-intuitive surprising findings constitute emergent properties of self-organized natural roads, which are intelligent enough for predicting traffic flow, thus shedding substantial light on the understanding of road networks and their traffic from the perspective of complex networks
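    The core of the correlation study can be reproduced generically with a graph library: compute road-level ranking metrics on the road connectivity graph and correlate them with observed traffic. The sketch below assumes such a graph and an AADT lookup are already available; the natural-road construction by good continuity is not shown.

```python
import networkx as nx
from scipy.stats import spearmanr

def rank_vs_traffic(G, aadt):
    """Correlate road-level ranking metrics with observed traffic.
    G: graph whose nodes are natural roads, with edges between intersecting roads.
    aadt: dict mapping road -> observed traffic (e.g. AADT or GPS counts)."""
    metrics = {"PageRank": nx.pagerank(G),
               "betweenness": nx.betweenness_centrality(G)}
    roads = list(aadt)
    for name, metric in metrics.items():
        rho, p = spearmanr([metric[r] for r in roads], [aadt[r] for r in roads])
        print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.3g})")
```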

  2. Field-sensitivity To Rheological Parameters

    Science.gov (United States)

    Freund, Jonathan; Ewoldt, Randy

    2017-11-01

    We ask this question: where in a flow is a quantity of interest Q quantitatively sensitive to the model parameters θ describing the rheology of the fluid? This field sensitivity is computed via the numerical solution of the adjoint flow equations, as developed to expose the target sensitivity δQ/δθ(x) via the constraint of satisfying the flow equations. Our primary example is a sphere settling in Carbopol, for which we have experimental data. For this Carreau-model configuration, we simultaneously calculate how much a local change in the fluid intrinsic time-scale λ, limit viscosities η0 and η∞, and exponent n would affect the drag D. Such field sensitivities can show where different fluid physics in the model (time scales, elastic versus viscous components, etc.) are important for the target observable and generally guide model refinement based on predictive goals. In this case, the computational cost of solving the local sensitivity problem is negligible relative to the flow. The Carreau-fluid/sphere example is illustrative; the utility of field sensitivity is in the design and analysis of less intuitive flows, for which we provide some additional examples.
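    The rheological model in question is the Carreau law η(γ̇) = η∞ + (η0 − η∞)[1 + (λγ̇)²]^((n−1)/2). As a crude point-wise stand-in for the adjoint-based field sensitivities, the sketch below simply bumps each parameter by 1% and records the change in viscosity at a representative shear rate; the parameter values are illustrative, not the fitted Carbopol data.

```python
def carreau_viscosity(gamma_dot, eta0, eta_inf, lam, n):
    """Carreau model: eta = eta_inf + (eta0 - eta_inf) * (1 + (lam*gdot)**2)**((n - 1)/2)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

theta = dict(eta0=50.0, eta_inf=0.1, lam=2.0, n=0.4)   # illustrative Carbopol-like values
gdot = 1.0
base = carreau_viscosity(gdot, **theta)
for name, value in theta.items():
    bumped = dict(theta, **{name: value * 1.01})
    d_eta = carreau_viscosity(gdot, **bumped) - base
    print(f"d(eta)/d({name}) ≈ {d_eta / (0.01 * value):.3g}")   # finite-difference sensitivity
```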

  3. MOVES regional level sensitivity analysis

    Science.gov (United States)

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  4. Shear layer flame stabilization sensitivities in a swirling flow

    Directory of Open Access Journals (Sweden)

    Christopher Foley

    2017-03-01

    A variety of different flame configurations and heat release distributions exist in high swirl, annular flows, due to the existence of inner and outer shear layers as well as a vortex breakdown bubble. Each of these different configurations, in turn, has different thermoacoustic sensitivities and influences on combustor emissions, nozzle durability, and liner heating. This paper presents findings on the sensitivities of the outer shear layer-stabilized flames to a range of parameters, including equivalence ratio, bulkhead temperature, flow velocity, and preheat temperature. There is significant hysteresis for flame attachment/detachment from the outer shear layer and this hysteresis is also described. Results are also correlated with extinction stretch rate calculations based on detailed kinetic simulations. In addition, we show that the bulkhead temperature near the flame attachment point has a significant impact on outer shear layer detachment. This indicates that understanding the heat transfer between the edge flame stabilized in the shear layer and the nozzle hardware is needed in order to predict shear layer flame stabilization limits. Moreover, it shows that simulations cannot simply assume adiabatic boundary conditions if they are to capture these transitions. We also show that the reference temperature for correlating these transitions is quite different for attachment and local blow off. Finally, these results highlight the deficiencies in current understanding of the influence of fluid mechanic parameters (e.g. velocity, swirl number) on shear layer flame attachment. For example, they show that the seemingly simple matter of scaling flame transition points with changes in flow velocities is not understood.

  5. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a Sensitivity Analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses which may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs.

  6. Evaluations of the CCFL and critical flow models in TRACE for PWR LBLOCA analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jung-Hua; Lin, Hao Tzu [National Tsing Hua Univ., HsinChu, Taiwan (China). Dept. of Engineering and System Science; Wang, Jong-Rong [Atomic Energy Council, Taoyuan County, Taiwan (China). Inst. of Nuclear Energy Research; Shih, Chunkuan [National Tsing Hua Univ., HsinChu, Taiwan (China). Inst. of Nuclear Engineering and Science

    2012-12-15

    This study aims to develop the Maanshan Pressurized Water Reactor (PWR) analysis model by using the TRACE (TRAC/RELAP Advanced Computational Engine) code. By analyzing the Large Break Loss of Coolant Accident (LBLOCA) sequence, the results are compared with the Maanshan Final Safety Analysis Report (FSAR) data. The critical flow and Counter Current Flow Limitation (CCFL) play an important role in the overall performance of the TRACE LBLOCA prediction. Therefore, a sensitivity study on the discharge coefficients of the critical flow model and on CCFL modeling in different regions is also discussed. The current conclusions show that modeling CCFL in the downcomer has a more significant impact on the peak cladding temperature than modeling CCFL in the hot legs does. No CCFL phenomena occurred in the pressurizer surge line. The best value for the multipliers of the critical flow model would be 0.5, and TRACE could consistently predict the break flow rate in the LBLOCA analysis as shown in the FSAR. (orig.)

  7. High PRF ultrafast sliding compound doppler imaging: fully qualitative and quantitative analysis of blood flow

    Science.gov (United States)

    Kang, Jinbum; Jang, Won Seuk; Yoo, Yangmo

    2018-02-01

    Ultrafast compound Doppler imaging based on plane-wave excitation (UCDI) can be used to evaluate cardiovascular diseases using high frame rates. In particular, it provides a fully quantifiable flow analysis over a large region of interest with high spatio-temporal resolution. However, the pulse-repetition frequency (PRF) in the UCDI method is limited for high-velocity flow imaging since it has a tradeoff between the number of plane-wave angles (N) and acquisition time. In this paper, we present a high-PRF ultrafast sliding compound Doppler imaging (HUSDI) method to improve quantitative flow analysis. With the HUSDI method, full scanline images (i.e. each tilted plane wave data) in a Doppler frame buffer are consecutively summed using a sliding window so that there is no reduction in frame rate or flow sensitivity. In addition, by updating a new compounding set with a certain time difference (i.e. sliding window step size or L), the HUSDI method allows various Doppler PRFs with the same acquisition data to enable a fully qualitative, retrospective flow assessment. To evaluate the performance of the proposed HUSDI method, simulation, in vitro and in vivo studies were conducted under diverse flow circumstances. In the simulation and in vitro studies, the HUSDI method showed improved hemodynamic representations without reducing either temporal resolution or sensitivity compared to the UCDI method. For the quantitative analysis, the root mean squared velocity error (RMSVE) was measured using 9 angles (-12° to 12°) with L of 1-9, and the results were found to be comparable to those of the UCDI method (L = N = 9), i.e. ⩽ 0.24 cm s⁻¹, for all L values. For the in vivo study, the flow data acquired from a full cardiac cycle of the femoral vessels of a healthy volunteer were analyzed using a PW spectrogram, and arterial and venous flows were successfully assessed with high Doppler PRF (e.g. 5 kHz at L
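    The sliding-window compounding at the heart of HUSDI can be pictured with a few lines of array code: N consecutive single-angle frames are summed, but a new compound frame is emitted every L input frames rather than every N. The sketch below is a schematic, not the authors' implementation.

```python
import numpy as np

def sliding_compound(frames, n_angles, step):
    """Sum n_angles consecutive single-angle frames with a sliding window of
    stride `step`. frames has shape (n_frames, nz, nx) of beamformed data.
    step = n_angles reproduces conventional compounding; step = 1 keeps the
    full acquisition rate while preserving the N-angle ensemble quality."""
    starts = range(0, frames.shape[0] - n_angles + 1, step)
    return np.stack([frames[s:s + n_angles].sum(axis=0) for s in starts])
```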

  9. Is Investment-Cash Flow Sensitivity Caused by the Agency Costs or Asymmetric Information? Evidence from the UK

    NARCIS (Netherlands)

    Pawlina, G.; Renneboog, L.D.R.

    2005-01-01

    We investigate the investment-cash flow sensitivity of a large sample of UK listed firms and confirm that investment is strongly cash flow-sensitive. Is this suboptimal investment policy the result of agency problems when managers with high discretion overinvest, or of asymmetric information when

  10. A reactive transport model for mercury fate in contaminated soil--sensitivity analysis.

    Science.gov (United States)

    Leterme, Bertrand; Jacques, Diederik

    2015-11-01

    We present a sensitivity analysis of a reactive transport model of mercury (Hg) fate in contaminated soil systems. The one-dimensional model, presented in Leterme et al. (2014), couples water flow in variably saturated conditions with Hg physico-chemical reactions. The sensitivity of Hg leaching and volatilisation to parameter uncertainty is examined using the elementary effect method. A test case is built using a hypothetical 1-m depth sandy soil and a 50-year time series of daily precipitation and evapotranspiration. Hg anthropogenic contamination is simulated in the topsoil by separately considering three different sources: cinnabar, non-aqueous phase liquid and aqueous mercuric chloride. The model sensitivity to a set of 13 input parameters is assessed, using three different model outputs (volatilized Hg, leached Hg, Hg still present in the contaminated soil horizon). Results show that dissolved organic matter (DOM) concentration in soil solution and the binding constant to DOM thiol groups are critical parameters, as well as parameters related to Hg sorption to humic and fulvic acids in solid organic matter. Initial Hg concentration is also identified as a sensitive parameter. The sensitivity analysis also brings out non-monotonic model behaviour for certain parameters.

  11. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.

  12. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
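    In the form usually given for this approach, the bounding factor for the two sensitivity parameters RR_EU (exposure-confounder association) and RR_UD (confounder-outcome association) is RR_EU·RR_UD/(RR_EU + RR_UD − 1), and dividing the observed risk ratio by it bounds how far unmeasured confounding could move the estimate. A small hedged sketch with illustrative values:

```python
def bounding_factor(rr_eu, rr_ud):
    """Bounding factor for unmeasured confounding given the two sensitivity
    parameters rr_eu (exposure-confounder) and rr_ud (confounder-outcome)."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

observed_rr = 2.0
bf = bounding_factor(rr_eu=3.0, rr_ud=3.0)    # illustrative sensitivity parameters
print(f"lower bound on the confounding-adjusted RR: {observed_rr / bf:.2f}")
```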

  13. Sensitivity analysis of critical experiment with direct perturbation compared to TSUNAMI-3D sensitivity analysis

    International Nuclear Information System (INIS)

    Barber, A. D.; Busch, R.

    2009-01-01

    The goal of this work is to obtain sensitivities from direct uncertainty analysis calculation and correlate those calculated values with the sensitivities produced from TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment. Small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results from a critical experiment are only known as well as the geometric and material properties. The goal of this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)

  14. Sensitivity analysis in multi-parameter probabilistic systems

    International Nuclear Information System (INIS)

    Walker, J.R.

    1987-01-01

    Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model

  15. Assessment of intracardiac flow and vorticity in the right heart of patients after repair of tetralogy of Fallot by flow-sensitive 4D MRI

    Energy Technology Data Exchange (ETDEWEB)

    Hirtler, Daniel [University Hospital Freiburg, Department of Congenital Heart Defects and Pediatric Cardiology (Heart Center, University of Freiburg), Freiburg (Germany); Garcia, Julio; Barker, Alex J. [Northwestern University Feinberg School of Medicine, Department of Radiology, Chicago, IL (United States); Geiger, Julia [University Childrens' Hospital Zurich, Department of Radiology, Zurich (Switzerland)

    2016-10-15

    To comprehensively and quantitatively analyse flow and vorticity in the right heart of patients after repair of tetralogy of Fallot (rTOF) compared with healthy volunteers. Time-resolved flow-sensitive 4D MRI was acquired in 24 rTOF patients and 12 volunteers. Qualitative flow evaluation was based on consensus reading of two observers. Quantitative analysis included segmentation of the right atrium (RA) and ventricle (RV) in a four-chamber view to extract volumes and regional haemodynamic information for computation of regional mean and peak vorticity. Right heart intra-atrial, intraventricular and outflow tract flow patterns differed considerably between rTOF patients and volunteers. Peak RA and mean RV vorticity was significantly higher in patients (p = 0.02/0.05). Significant negative correlations were found between patients' maximum and mean RV and RA vorticity and ventricular volumes (p < 0.05). The main pulmonary artery (MPA) regurgitant flow was associated with higher RA and RV vorticity, which was significant for RA maximum and RV mean vorticity (p = 0.01/0.03). The calculation of vorticity based on 4D flow data is an alternative approach to assess intracardiac flow changes in rTOF patients compared with qualitative flow visualization. Alterations in intracardiac vorticity could be relevant with regard to the development of RV dilation and impaired function. (orig.)

  16. Assessment of intracardiac flow and vorticity in the right heart of patients after repair of tetralogy of Fallot by flow-sensitive 4D MRI

    International Nuclear Information System (INIS)

    Hirtler, Daniel; Garcia, Julio; Barker, Alex J.; Geiger, Julia

    2016-01-01

    To comprehensively and quantitatively analyse flow and vorticity in the right heart of patients after repair of tetralogy of Fallot (rTOF) compared with healthy volunteers. Time-resolved flow-sensitive 4D MRI was acquired in 24 rTOF patients and 12 volunteers. Qualitative flow evaluation was based on consensus reading of two observers. Quantitative analysis included segmentation of the right atrium (RA) and ventricle (RV) in a four-chamber view to extract volumes and regional haemodynamic information for computation of regional mean and peak vorticity. Right heart intra-atrial, intraventricular and outflow tract flow patterns differed considerably between rTOF patients and volunteers. Peak RA and mean RV vorticity was significantly higher in patients (p = 0.02/0.05). Significant negative correlations were found between patients' maximum and mean RV and RA vorticity and ventricular volumes (p < 0.05). The main pulmonary artery (MPA) regurgitant flow was associated with higher RA and RV vorticity, which was significant for RA maximum and RV mean vorticity (p = 0.01/0.03). The calculation of vorticity based on 4D flow data is an alternative approach to assess intracardiac flow changes in rTOF patients compared with qualitative flow visualization. Alterations in intracardiac vorticity could be relevant with regard to the development of RV dilation and impaired function. (orig.)

  17. Global optimization and sensitivity analysis

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1990-01-01

    A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints

  18. Sensitivity study of CFD turbulent models for natural convection analysis

    International Nuclear Information System (INIS)

    Yu sun, Park

    2007-01-01

    Buoyancy-driven convective flow fields are steady circulatory flows set up between surfaces maintained at two fixed temperatures. They are ubiquitous in nature and play an important role in many engineering applications, where exploiting natural convection can remarkably reduce costs and effort. This paper focuses on a sensitivity study of turbulence analysis using CFD (Computational Fluid Dynamics) for natural convection in a closed rectangular cavity. Using the commercial CFD code FLUENT, various turbulence models were applied to the turbulent flow, and the results from each model are compared with respect to grid resolution and flow characteristics. It has been shown that: -) obtaining the general flow characteristics is possible with a relatively coarse grid; -) there is no significant difference between results once the grid is refined beyond a certain near-wall resolution, expressed in terms of y+, defined as y+ = ρ·u·y/μ, u being the wall friction velocity, y the normal distance from the center of the cell to the wall, and ρ and μ respectively the fluid density and the fluid viscosity; -) the K-ε models show flow characteristics different from those of the K-ω models or the Reynolds Stress Model (RSM); and -) the y+ parameter is crucial for selecting the appropriate turbulence model to apply in the simulation.
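    Since the abstract defines the near-wall resolution parameter explicitly, a one-line helper suffices; the sample values below are arbitrary, air-like properties.

```python
def y_plus(rho, u_tau, y, mu):
    """Dimensionless wall distance y+ = rho * u_tau * y / mu, used above to judge
    whether the near-wall grid is fine enough for a given turbulence model."""
    return rho * u_tau * y / mu

# e.g. air-like properties with a 0.1 mm first-cell-centre distance
print(y_plus(rho=1.2, u_tau=0.05, y=1.0e-4, mu=1.8e-5))   # ≈ 0.33
```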

  19. Macroscopic Model and Simulation Analysis of Air Traffic Flow in Airport Terminal Area

    Directory of Open Access Journals (Sweden)

    Honghai Zhang

    2014-01-01

    We focus on the spatiotemporal characteristics and evolution of the air traffic flow in the airport terminal area to provide a scientific basis for optimizing flight control processes and alleviating severe air traffic conditions. Methods in this work combine mathematical derivation and simulation analysis. Based on the cell transmission model, macroscopic models of arrival and departure air traffic flow in the terminal area are established. Meanwhile, the interrelationship and influential factors of the three characteristic parameters, namely traffic flux, density, and velocity, are presented. Then, according to such models, the macro emergence of traffic flow evolution is emulated with the NetLogo simulation platform, and the correlation of the basic traffic flow parameters is deduced and verified by means of sensitivity analysis. The results suggest that there are remarkable relations among the three characteristic parameters of the air traffic flow in the terminal area. Moreover, such relationships evolve distinctly with the flight procedures, control separations, and ATC strategies.

  20. Chemical kinetic functional sensitivity analysis: Elementary sensitivities

    International Nuclear Information System (INIS)

    Demiralp, M.; Rabitz, H.

    1981-01-01

    Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the ith species concentration and α_j is the jth system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given along with a discussion of the relation of this work to earlier research

  1. Probabilistic sensitivity analysis of biochemical reaction systems.

    Science.gov (United States)

    Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John

    2009-09-07

    Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.

  2. A hybrid approach for global sensitivity analysis

    International Nuclear Information System (INIS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-01-01

    Distribution based sensitivity analysis (DSA) computes the sensitivity of the input random variables with respect to the change in distribution of the output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, the computational cost associated with this method prohibits its use for complex structures involving costly finite element analysis. To address this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model which integrates the concepts of analysis of variance decomposition, extended bases and the homotopy algorithm. By integrating PCFE into DSA, it is possible to considerably alleviate the computational burden. Three examples are presented to demonstrate the performance of the proposed approach for sensitivity analysis. For all the problems, the proposed approach yields excellent results with significantly reduced computational effort. The results obtained indicate, to some extent, that the proposed approach can be utilized for sensitivity analysis of large-scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • The proposed approach integrates PCFE within distribution based sensitivity analysis. • The proposed approach is highly efficient.

  3. Maternal sensitivity: a concept analysis.

    Science.gov (United States)

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  4. Subcubic Control Flow Analysis Algorithms

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Van Horn, David

    We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long held belief that inclusion-based flow analysis could not surpass the ``cubic bottleneck, '' we apply known set compression techniques to obtain an algorithm...... that runs in time O(n^3/log n) on a unit cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...

  5. Sensitivity analysis of Immersed Boundary Method simulations of fluid flow in dense polydisperse random grain packings

    Directory of Open Access Journals (Sweden)

    Knight Chris

    2017-01-01

    Full Text Available Polydisperse granular materials are ubiquitous in nature and industry. Despite this, knowledge of the momentum coupling between the fluid and solid phases in dense saturated grain packings comes almost exclusively from empirical correlations [2–4, 8] with monosized media. The Immersed Boundary Method (IBM) is a Computational Fluid Dynamics (CFD) modelling technique capable of resolving pore scale fluid flow and fluid-particle interaction forces in polydisperse media at the grain scale. Validation of the IBM in the low Reynolds number, high concentration limit was performed by comparing simulations of flow through ordered arrays of spheres with the boundary integral results of Zick and Homsy [10]. Random grain packings were studied with linearly graded particle size distributions with a range of coefficient of uniformity values (Cu = 1.01, 1.50, and 2.00) at a range of concentrations (ϕ ∈ [0.396; 0.681]) in order to investigate the influence of polydispersity on drag and permeability. The sensitivity of the IBM results to the choice of radius retraction parameter [1] was investigated and a comparison was made between the predicted forces and the widely used Ergun correlation [3].

  6. Global sensitivity analysis of water age and temperature for informing salmonid disease management

    Science.gov (United States)

    Javaheri, Amir; Babbar-Sebens, Meghna; Alexander, Julie; Bartholomew, Jerri; Hallett, Sascha

    2018-06-01

    Many rivers in the Pacific Northwest region of North America are anthropogenically manipulated via dam operations, leading to system-wide impacts on hydrodynamic conditions and aquatic communities. Understanding how dam operations alter abiotic and biotic variables is important for designing management actions. For example, in the Klamath River, dam outflows could be manipulated to alter water age and temperature to reduce the risk of parasite infections in salmon by diluting or altering the viability of parasite spores. However, the sensitivity of water age and temperature to riverine conditions such as bathymetry can affect outcomes from dam operations. To examine this issue in detail, we conducted a global sensitivity analysis of water age and temperature to a comprehensive set of hydraulic and meteorological parameters in the Klamath River, California, where management of salmonid disease is a high priority. We applied an analysis technique that combined Latin-hypercube and one-at-a-time sampling methods and included simulation runs with the hydrodynamic numerical model of the Lower Klamath. We found that flow rate and bottom roughness were the two most important parameters influencing water age. Water temperature was more sensitive to inflow temperature, air temperature, solar radiation, wind speed, flow rate, and wet bulb temperature, in that order. Our results are relevant for managers because they provide a framework for predicting how water within 'high infection risk' sections of the river will respond to dam water (low infection risk) input. Moreover, these data will be useful for prioritizing the use of water age (dilution) versus temperature (spore viability) under certain contexts when considering flow manipulation as a method to reduce the risk of infection and disease in Klamath River salmon.
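
    A minimal sketch of the combined sampling idea, assuming an LH-OAT style scheme: Latin-hypercube base points cover the parameter space, and one-at-a-time perturbations around each base point yield averaged elementary effects. The three-parameter toy model and step size are invented for illustration and are not the Klamath hydrodynamic model.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(p):
    # Hypothetical stand-in response (e.g. mean water age); not the Klamath model.
    return 2.0 * p[0] + 0.5 * p[1] ** 2 + 0.1 * p[0] * p[2]

n_params, n_sites, step = 3, 20, 0.05

# Latin-hypercube sample of base points in the unit hypercube.
lhs = (rng.permuted(np.tile(np.arange(n_sites), (n_params, 1)), axis=1).T
       + rng.random((n_sites, n_params))) / n_sites

# One-at-a-time perturbations around each base point (LH-OAT style):
# average the absolute partial effects over all base points.
effects = np.zeros(n_params)
for base in lhs:
    y0 = model(base)
    for i in range(n_params):
        pert = base.copy()
        pert[i] += step
        effects[i] += abs(model(pert) - y0) / step
effects /= n_sites
print("mean OAT effect per parameter:", np.round(effects, 3))
```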

  7. Deterministic sensitivity analysis for the numerical simulation of contaminants transport

    International Nuclear Information System (INIS)

    Marchand, E.

    2007-12-01

    The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)

  8. A Fuel-Sensitive Reduced-Order Model (ROM) for Piston Engine Scaling Analysis

    Science.gov (United States)

    2017-09-29

    US Army Research Laboratory report ARL-TR-8172, September 2017: A Fuel-Sensitive Reduced-Order Model (ROM) for Piston Engine Scaling Analysis. Only reference-list fragments were retrieved for this record, citing work on high Reynolds number nonreacting and reacting JP-8 sprays in a constant pressure flow vessel with a detailed chemistry approach (J Energy Resour) and an SAE Technical Paper (2007 Apr) on rapid grid generation applied to in-cylinder diesel engine simulations.

  9. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    Science.gov (United States)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, MCMC sampling entails a large number of model calls and can easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling of a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using the Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, so that the MCMC sampling can be performed efficiently. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and is used to run representative simulations to generate a training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to the sensitivity analysis, insensitive parameters are screened out of the Bayesian inversion of the MODFLOW model, further saving computing effort. The posterior probability distribution of the input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
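
    The sketch below illustrates the surrogate-based calibration loop in miniature: a cheap fitted surrogate (here a simple polynomial standing in for BMARS) replaces the expensive simulator inside a Metropolis sampler. The one-parameter "model", prior range, observation, and noise level are all assumed for illustration and are unrelated to the Oman MODFLOW application.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D example: a quadratic fit stands in for the BMARS surrogate,
# and a plain Metropolis sampler stands in for the full MCMC calibration.
def expensive_model(k):                 # pretend groundwater model (not MODFLOW)
    return 3.0 * k - 0.5 * k ** 2

k_train = np.linspace(0.0, 3.0, 20)
coeffs = np.polyfit(k_train, expensive_model(k_train), deg=2)   # "train" surrogate
surrogate = np.poly1d(coeffs)

obs, sigma = 4.0, 0.3                   # observed head and noise level (assumed)

def log_post(k):
    if not 0.0 <= k <= 3.0:             # uniform prior on [0, 3]
        return -np.inf
    return -0.5 * ((surrogate(k) - obs) / sigma) ** 2

chain, k_cur = [], 1.5
lp_cur = log_post(k_cur)
for _ in range(20_000):
    k_prop = k_cur + 0.2 * rng.standard_normal()
    lp_prop = log_post(k_prop)
    if np.log(rng.random()) < lp_prop - lp_cur:   # Metropolis accept/reject
        k_cur, lp_cur = k_prop, lp_prop
    chain.append(k_cur)
print("posterior mean of k:", np.mean(chain[5000:]))
```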

  10. Integrated Cantilever-Based Flow Sensors with Tunable Sensitivity for In-Line Monitoring of Flow Fluctuations in Microfluidic Systems

    Directory of Open Access Journals (Sweden)

    Nadine Noeth

    2013-12-01

    Full Text Available For devices such as bio-/chemical sensors in microfluidic systems, flow fluctuations result in noise in the sensor output. Here, we demonstrate in-line monitoring of flow fluctuations with a cantilever-like sensor integrated in a microfluidic channel. The cantilevers are fabricated in different materials (SU-8 and SiN) and with different thicknesses. The integration of arrays of holes with different hole size and number of holes allows the modification of device sensitivity, theoretical detection limit and measurement range. For an average flow in the microliter range, the cantilever deflection is directly proportional to the flow rate fluctuations in the microfluidic channel. The SiN cantilevers show a detection limit below 1 nL/min and the thinnest SU-8 cantilevers a detection limit below 5 nL/min. Finally, the sensor is applied for in-line monitoring of flow fluctuations generated by external pumps connected to the microfluidic system.

  11. Sensitivity analysis on the model to the DO and BODc of the Almendares river

    International Nuclear Information System (INIS)

    Dominguez, J.; Borroto, J.; Hernandez, A.

    2004-01-01

    In the present work, a sensitivity analysis of the model was performed to compare and evaluate the influence of the kinetic coefficients and other parameters on the DO and BODc. The effects of the BODc and DO with which the river arrives at the studied zone, and the influence of the BOD of the discharges and of the flow rate on the DO, were modeled. The sensitivity analysis is the basis for developing a calibration optimization procedure for the Streeter-Phelps model, in order to simplify the process and increase the precision of predictions. On the other hand, it will contribute to the definition of strategies to improve river water quality

  12. Basic study on an energy conversion system using boiling two-phase flows of temperature-sensitive magnetic fluid. Theoretical analysis based on thermal nonequilibrium model and flow visualization using ultrasonic echo

    International Nuclear Information System (INIS)

    Ishimoto, Jun; Kamiyama, Shinichi; Okubo, Masaaki.

    1995-01-01

    Effects of magnetic field on the characteristics of boiling two-phase pipe flow of temperature-sensitive magnetic fluid are clarified in detail both theoretically and experimentally. Firstly, governing equations of two-phase magnetic fluid flow based on the thermal nonequilibrium two-fluid model are presented and numerically solved considering evaporation and condensation between gas- and liquid-phases. Next, behaviour of vapor bubbles is visualized with ultrasonic echo in the region of nonuniform magnetic field. This is recorded and processed with an image processor. As a result, the distributions of void fraction in the two-phase flow are obtained. Furthermore, detailed characteristics of the two-phase magnetic fluid flow are investigated using a small test loop of the new energy conversion system. From the numerical and experimental results, it is known that the precise control of the boiling two-phase flow and bubble generation is possible by using the nonuniform magnetic field effectively. These fundamental studies on the characteristics of two-phase magnetic fluid flow will contribute to the development of the new energy conversion system using a gas-liquid boiling two-phase flow of magnetic fluid. (author)

  13. Information Flow Analysis for VHDL

    DEFF Research Database (Denmark)

    Tolstrup, Terkel Kristian; Nielson, Flemming; Nielson, Hanne Riis

    2005-01-01

    We describe a fragment of the hardware description language VHDL that is suitable for implementing the Advanced Encryption Standard algorithm. We then define an Information Flow analysis as required by the international standard Common Criteria. The goal of the analysis is to identify the entire...... information flow through the VHDL program. The result of the analysis is presented as a non-transitive directed graph that connects those nodes (representing either variables or signals) where an information flow might occur. We compare our approach to that of Kemmerer and conclude that our approach yields...

  14. Fast Virtual Fractional Flow Reserve Based Upon Steady-State Computational Fluid Dynamics Analysis

    Directory of Open Access Journals (Sweden)

    Paul D. Morris, PhD

    2017-08-01

    Full Text Available Fractional flow reserve (FFR)-guided percutaneous intervention is superior to standard assessment but remains underused. The authors have developed a novel “pseudotransient” analysis protocol for computing virtual fractional flow reserve (vFFR) based upon angiographic images and steady-state computational fluid dynamics. This protocol generates vFFR results in 189 s (cf. >24 h for transient analysis) using a desktop PC, with <1% error relative to that of full-transient computational fluid dynamics analysis. Sensitivity analysis demonstrated that physiological lesion significance was influenced less by coronary or lesion anatomy (33%) and more by microvascular physiology (59%). If coronary microvascular resistance can be estimated, vFFR can be accurately computed in less time than it takes to make invasive measurements.

  15. Interference and Sensitivity Analysis.

    Science.gov (United States)

    VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth

    2014-11-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of the vaccine of one person on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.

  16. Adaptive population divergence and directional gene flow across steep elevational gradients in a climate‐sensitive mammal

    Science.gov (United States)

    Waterhouse, Matthew D.; Erb, Liesl P.; Beever, Erik; Russello, Michael A.

    2018-01-01

    The American pika is a thermally sensitive, alpine lagomorph species. Recent climate-associated population extirpations and genetic signatures of reduced population sizes range-wide indicate the viability of this species is sensitive to climate change. To test for potential adaptive responses to climate stress, we sampled pikas along two elevational gradients (each ~470 to 1640 m) and employed three outlier detection methods, BAYESCAN, LFMM, and BAYPASS, to scan for genotype-environment associations in samples genotyped at 30,763 SNP loci. We resolved 173 loci with robust evidence of natural selection, detected by two independent analyses or replicated in both transects. A BLASTN search of these outlier loci revealed several genes associated with metabolic function and oxygen transport, indicating natural selection from thermal stress and hypoxia. We also found evidence of directional gene flow primarily downslope from large high-elevation populations and reduced gene flow at outlier loci, a pattern suggesting potential impediments to the upward elevational movement of adaptive alleles in response to contemporary climate change. Finally, we documented evidence of reduced genetic diversity associated with the south-facing transect and an increase in corticosterone stress levels associated with inbreeding. This study suggests the American pika is already undergoing climate-associated natural selection at multiple genomic regions. Further analysis is needed to determine if the rate of climate adaptation in the American pika and other thermally sensitive species will be able to keep pace with rapidly changing climate conditions.

  17. The determination, by flow-injection analysis, of iron, sulphate, silver and cadmium

    International Nuclear Information System (INIS)

    Jones, E.A.

    1983-01-01

    This report describes the spectrophotometric determination by flow-injection analysis including, where necessary, liquid-liquid extraction of iron with 1,10-phenanthroline; of sulphate by its catalytic effect on the methylthymol blue-zirconium reaction; of silver with bromopyrogallol red and 1,10-phenanthroline; and of cadmium with dithizone. Optimum conditions for each system are established, and sensitivities and ranges of determination are given

  18. Specification of a test problem for HYDROCOIN [Hydrologic Code Intercomparison] Level 3 Case 2: Sensitivity analysis for deep disposal in partially saturated, fractured tuff

    International Nuclear Information System (INIS)

    Prindle, R.W.

    1987-08-01

    The international Hydrologic Code Intercomparison Project (HYDROCOIN) was formed to evaluate hydrogeologic models and computer codes and their use in performance assessment for high-level radioactive waste repositories. Three principal activities in the HYDROCOIN Project are Level 1, verification and benchmarking of hydrologic codes; Level 2, validation of hydrologic models; and Level 3, sensitivity and uncertainty analyses of the models and codes. This report presents a test case defined for the HYDROCOIN Level 3 activity to explore the feasibility of applying various sensitivity-analysis methodologies to a highly nonlinear model of isothermal, partially saturated flow through fractured tuff, and to develop modeling approaches to implement the methodologies for sensitivity analysis. These analyses involve an idealized representation of a repository sited above the water table in a layered sequence of welded and nonwelded, fractured, volcanic tuffs. The analyses suggested here include one-dimensional, steady flow; one-dimensional, nonsteady flow; and two-dimensional, steady flow. Performance measures to be used to evaluate model sensitivities are also defined; the measures are related to regulatory criteria for containment of high-level radioactive waste. 14 refs., 5 figs., 4 tabs

  19. Artificial sensory hairs based on the flow sensitive receptor hairs of crickets

    NARCIS (Netherlands)

    Dijkstra, Marcel; van Baar, J.J.J.; Wiegerink, Remco J.; Lammerink, Theodorus S.J.; de Boer, J.H.; Krijnen, Gijsbertus J.M.

    2005-01-01

    This paper presents the modelling, design, fabrication and characterization of flow sensors based on the wind-receptor hairs of crickets. Cricket sensory hairs are highly sensitive to drag-forces exerted on the hair shaft. Artificial sensory hairs have been realized in SU-8 on suspended SixNy

  20. Sensitivity analysis for thermo-hydraulics model of a Westinghouse type PWR. Verification of the simulation results

    Energy Technology Data Exchange (ETDEWEB)

    Farahani, Aref Zarnooshe [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Nuclear Engineering, Science and Research Branch; Yousefpour, Faramarz [Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of); Hoseyni, Seyed Mohsen [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Basic Sciences; Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Young Researchers and Elite Club

    2017-07-15

    Development of a steady-state model is the first step in nuclear safety analysis. The developed model should first be analyzed qualitatively; then a sensitivity analysis on the number of nodes is required for the models of the different systems to ensure the reliability of the obtained results. This contribution aims to show, through sensitivity analysis, the independence of the modeling results from the number of nodes in a qualified MELCOR model of a Westinghouse-type pressurized water reactor plant. For this purpose, and to minimize user error, the nuclear analysis software SNAP is employed. Different sensitivity cases were developed by modifying the existing model and refining the nodes for the simulated systems, including the steam generators, the reactor coolant system, and the reactor core and its connecting flow paths. Comparing the obtained results with those of the original model, no significant difference is observed, which indicates that the model results are independent of finer nodalization.

  1. Contributions to sensitivity analysis and generalized discriminant analysis

    International Nuclear Information System (INIS)

    Jacques, J.

    2005-12-01

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how its output variables react to variations of its inputs. The methods based on the study of the variance quantify the part of the variance of the model response that is due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices lose their interpretability in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods from the Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)

  2. Using Crossflow for Flow Measurements and Flow Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gurevich, A.; Chudnovsky, L.; Lopeza, A. [Advanced Measurement and Analysis Group Inc., Ontario (Canada); Park, M. H. [Sungjin Nuclear Engineering Co., Ltd., Gyeongju (Korea, Republic of)

    2016-10-15

    Ultrasonic cross-correlation flow measurement is based on measuring the transport time of turbulent structures. The cross correlation flow meter CROSSFLOW is designed and manufactured by Advanced Measurement and Analysis Group Inc. (AMAG), and is used around the world for various flow measurements. In particular, CROSSFLOW has been used for boiler feedwater flow measurements, including Measurement Uncertainty Recovery (MUR) reactor power uprate in 14 nuclear reactors in the United States and in Europe. More than 100 CROSSFLOW transducers are currently installed in CANDU reactors around the world, including Wolsung NPP in Korea, for flow verification in ShutDown System (SDS) channels. Other CROSSFLOW applications include reactor coolant gross flow measurements, reactor channel flow measurements in all channels in CANDU reactors, boiler blowdown flow measurement, and service water flow measurement. Cross correlation flow measurement is a robust ultrasonic flow measurement tool used in nuclear power plants around the world for various applications. Mathematical modeling of CROSSFLOW agrees well with laboratory test results and can be used as a tool for determining the effect of flow conditions on CROSSFLOW output and for designing and optimizing laboratory testing, in order to ensure traceability of field flow measurements to laboratory testing within the desired uncertainty.
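
    The principle behind cross-correlation flow metering can be illustrated with synthetic signals: two sensors a known distance apart record the same turbulent signature with a time shift, and the lag of the cross-correlation peak gives the transit time and hence the velocity. The sampling rate, sensor spacing, and noise level below are assumed values, not CROSSFLOW specifications.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic example: two sensors a known distance apart see the same turbulent
# signature with a time delay; the cross-correlation peak gives the transit time.
fs = 1000.0                 # sampling rate, Hz (assumed)
spacing = 0.10              # sensor spacing, m (assumed)
true_delay = 0.025          # s  -> true velocity = 4 m/s

n = 8192
turb = np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")
lag_samples = int(true_delay * fs)
upstream = turb + 0.05 * rng.standard_normal(n)
downstream = np.roll(turb, lag_samples) + 0.05 * rng.standard_normal(n)

xcorr = np.correlate(downstream - downstream.mean(),
                     upstream - upstream.mean(), mode="full")
lags = np.arange(-n + 1, n)
delay = lags[np.argmax(xcorr)] / fs
print(f"estimated transit time {delay*1e3:.1f} ms, velocity {spacing/delay:.2f} m/s")
```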

  3. Simultaneous velocity and pressure quantification using pressure-sensitive flow tracers in air

    Science.gov (United States)

    Zhang, Peng; Peterson, Sean; Porfiri, Maurizio

    2017-11-01

    Particle-based measurement techniques for assessing the velocity field of a fluid have advanced rapidly over the past two decades. Full-field pressure measurement techniques have remained elusive, however. In this work, we aim to demonstrate the possibility of direct simultaneous planar velocity and pressure measurement of a high speed aerodynamic flow by employing novel pressure-sensitive tracer particles for particle image velocimetry (PIV). Specifically, the velocity and pressure variations of an airflow through a converging-diverging channel are studied. Polystyrene microparticles embedded with a pressure-sensitive phosphorescent dye, platinum octaethylporphyrin (PtOEP), are used as seeding particles. Due to the oxygen quenching effect, the emission lifetime of PtOEP is highly sensitive to the oxygen concentration, that is, the partial pressure of oxygen, in the air. Since the partial pressure of oxygen is linearly proportional to the air pressure, we can determine the air pressure through the phosphorescence emission lifetime of the dye. The velocity field is instead obtained using traditional PIV methods. The particles have a pressure resolution on the order of 1 kPa, which may be improved by optimizing the particle size and dye concentration to suit specific flow scenarios. This work was supported by the National Science Foundation under Grant Number CBET-1332204.

  4. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    Full Text Available The paper presents a sensitivity analysis of two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences for large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows a quite good capability of the model to represent the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.

  5. Buck Creek River Flow Analysis

    Science.gov (United States)

    Dhanapala, Yasas; George, Elizabeth; Ritter, John

    2009-04-01

    Buck Creek, flowing through Springfield, Ohio, has a number of low-head dams currently in place that cause safety issues and sometimes make it impossible for recreational boaters to pass through. The safety issues include the hydraulic jumps and the back eddies created by the dams, which are known as drowning machines. In this study we model the flow of Buck Creek using topographical and flow data provided by the Geology Department of Wittenberg University. The flow is analyzed using the Hydrologic Engineering Center's River Analysis System software (HEC-RAS). As a first step, a model of the river near Snyder Park has been created with the current structure in place for validation purposes. Afterwards the low-head dam is replaced with four drop structures with V-notch overflow gates. The river bed is altered to reflect plunge pools after each drop structure. This analysis will provide insight into how the flow is going to behave after the changes are made. In addition, a sediment transport analysis is also being conducted to provide information about the stability of these structures.

  6. Improving sensitivity in micro-free flow electrophoresis using signal averaging

    Science.gov (United States)

    Turgeon, Ryan T.; Bowser, Michael T.

    2009-01-01

    Microfluidic free-flow electrophoresis (μFFE) is a separation technique that separates continuous streams of analytes as they travel through an electric field in a planar flow channel. The continuous nature of the μFFE separation suggests that approaches more commonly applied in spectroscopy and imaging may be effective in improving sensitivity. The current paper describes the S/N improvements that can be achieved by simply averaging multiple images of a μFFE separation; 20–24-fold improvements in S/N were observed by averaging the signal from 500 images recorded for over 2 min. Up to an 80-fold improvement in S/N was observed by averaging 6500 images. Detection limits as low as 14 pM were achieved for fluorescein, which is impressive considering the non-ideal optical set-up used in these experiments. The limitation to this signal averaging approach was the stability of the μFFE separation. At separation times longer than 20 min bubbles began to form at the electrodes, which disrupted the flow profile through the device, giving rise to erratic peak positions. PMID:19319908
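
    A minimal sketch of the signal-averaging argument, assuming white, frame-to-frame uncorrelated noise: averaging N images reduces the noise standard deviation by roughly sqrt(N), so 500 frames give an improvement of the order reported above. The synthetic peak profile and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic illustration: averaging N noisy "images" of a fixed separation
# profile improves S/N by roughly sqrt(N) when the noise is uncorrelated.
x = np.linspace(0, 1, 500)
peak = np.exp(-0.5 * ((x - 0.5) / 0.02) ** 2)      # analyte stream profile
noise_sigma = 0.5                                   # per-frame noise (assumed)

def snr(n_frames):
    frames = peak + noise_sigma * rng.standard_normal((n_frames, x.size))
    avg = frames.mean(axis=0)
    baseline = np.concatenate([avg[:100], avg[-100:]])   # regions away from the peak
    return avg.max() / baseline.std()

for n in (1, 25, 500):
    print(f"N = {n:4d}  S/N ≈ {snr(n):6.1f}")
```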

  7. Construction and analysis of compressible flow calculation algorithms

    International Nuclear Information System (INIS)

    Desideri, Jean-Antoine

    1993-01-01

    The aim of this study is to give a theoretical rationale of a 'paradox' related to the behavior at the stagnation point of some numerical solutions obtained by conventional methods for Eulerian non-equilibrium flows. This 'paradox' concerns the relationship between the solutions given by equilibrium and non-equilibrium models and was raised by several experts during the 'Workshop on Hypersonic Flows for Reentry Problems, Part 1. Antibes 1990'. In the first part, we show that equilibrium conditions are reached at the stagnation point and we analyse the sensitivity of these equilibrium conditions to the flow variables. In the second part, we develop an analysis of the behavior of the mathematical solution to an Eulerian non-equilibrium flow in the vicinity of the stagnation point, which gives an explanation to the described 'paradox'. Then, a numerical procedure, integrating the species convection equations projected on the stagnation point streamline in a Lagrangian time approach, gives a numerical support to the theoretical predictions. We also propose two numerical integration procedures, that allow us to recompute, starting from the equilibrium conditions at the stagnation point, the flow characteristics at the body. The validity limits of these procedures are discussed and the results obtained for a Workshop test-case are compared with the results given by several contributors. Finally, we survey briefly the influence of the local behavior of the solution on the coupling technique to a boundary layer calculation. (author) [fr

  8. Real-Time and In-Flow Sensing Using a High Sensitivity Porous Silicon Microcavity-Based Sensor.

    Science.gov (United States)

    Caroselli, Raffaele; Martín Sánchez, David; Ponce Alcántara, Salvador; Prats Quilez, Francisco; Torrijos Morán, Luis; García-Rupérez, Jaime

    2017-12-05

    Porous silicon seems to be an appropriate material platform for the development of high-sensitivity and low-cost optical sensors, as its porous nature increases the interaction with the target substances, and its fabrication process is very simple and inexpensive. In this paper, we present the experimental development of a porous silicon microcavity sensor and its use for real-time in-flow sensing application. A high-sensitivity configuration was designed and then fabricated, by electrochemically etching a silicon wafer. Refractive index sensing experiments were realized by flowing several dilutions with decreasing refractive indices, and measuring the spectral shift in real-time. The porous silicon microcavity sensor showed a very linear response over a wide refractive index range, with a sensitivity around 1000 nm/refractive index unit (RIU), which allowed us to directly detect refractive index variations in the 10⁻⁷ RIU range.

  9. Integration of lyoplate based flow cytometry and computational analysis for standardized immunological biomarker discovery.

    Directory of Open Access Journals (Sweden)

    Federica Villanova

    Full Text Available Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases.

  10. Integration of lyoplate based flow cytometry and computational analysis for standardized immunological biomarker discovery.

    Science.gov (United States)

    Villanova, Federica; Di Meglio, Paola; Inokuma, Margaret; Aghaeepour, Nima; Perucha, Esperanza; Mollon, Jennifer; Nomura, Laurel; Hernandez-Fuentes, Maria; Cope, Andrew; Prevost, A Toby; Heck, Susanne; Maino, Vernon; Lord, Graham; Brinkman, Ryan R; Nestle, Frank O

    2013-01-01

    Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases.

  11. Usefulness of DC power flow for active power flow analysis with flow controlling devices

    NARCIS (Netherlands)

    Van Hertem, D.; Verboomen, J.; Purchala, K.; Belmans, R.; Kling, W.L.

    2006-01-01

    DC power flow is a commonly used tool for contingency analysis. Recently, due to its simplicity and robustness, it has also become increasingly used for real-time dispatch and techno-economic analysis of power systems. It is a simplification of the full power flow that considers only active power.
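
    A minimal DC power flow sketch for a hypothetical three-bus network is given below: with a flat voltage profile and lossless lines, the bus angles follow from B'θ = P and the active line flows from the angle differences divided by the line reactances. The network data are invented for illustration.

```python
import numpy as np

# Minimal DC power flow sketch for a hypothetical 3-bus network (not a real grid).
# Assumptions: flat voltage profile, lossless lines, small angle differences.
lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]   # (from, to, reactance x in p.u.)
P = np.array([1.5, -0.5, -1.0])                    # bus injections in p.u., sum to 0
slack = 0                                          # bus 0 is the reference bus

n = 3
B = np.zeros((n, n))
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

keep = [k for k in range(n) if k != slack]
theta = np.zeros(n)
theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], P[keep])  # B' theta = P

for i, j, x in lines:
    print(f"flow {i}->{j}: {(theta[i] - theta[j]) / x:+.3f} p.u.")
```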

  12. Object-sensitive Type Analysis of PHP

    NARCIS (Netherlands)

    Van der Hoek, Henk Erik; Hage, J

    2015-01-01

    In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the

  13. Size-sensitive particle trajectories in three-dimensional micro-bubble acoustic streaming flows

    Science.gov (United States)

    Volk, Andreas; Rossi, Massimiliano; Hilgenfeldt, Sascha; Rallabandi, Bhargav; Kähler, Christian; Marin, Alvaro

    2015-11-01

    Oscillating microbubbles generate steady streaming flows with interesting features and promising applications for microparticle manipulation. The flow around oscillating semi-cylindrical bubbles has been typically assumed to be independent of the axial coordinate. However, it has been recently revealed that particle motion is strongly three-dimensional: Small tracer particles follow vortical trajectories with pronounced axial displacements near the bubble, weaving a toroidal stream-surface. A well-known consequence of bubble streaming flows is size-dependent particle migration, which can be exploited for sorting and trapping of microparticles in microfluidic devices. In this talk, we will show how the three-dimensional toroidal topology found for small tracer particles is modified as the particle size increases up to 1/3 of the bubble radius. Our results show size-sensitive particle positioning along the axis of the semi-cylindrical bubble. In order to analyze the three-dimensional sorting and trapping capabilities of the system, experiments with an imposed flow and polydisperse particle solutions are also shown.

  14. A study of grout flow pattern analysis

    International Nuclear Information System (INIS)

    Lee, S. Y.; Hyun, S.

    2013-01-01

    A new disposal unit, designated as Salt Disposal Unit no. 6 (SDU6), is being designed to support site accelerated closure goals and the salt nuclear waste projections identified in the new Liquid Waste System plan. The unit is a cylindrical disposal vault, 380 ft in diameter and 43 ft in height, with a capacity of about 30 million gallons. The primary objective was to develop a computational model and to evaluate the flow patterns of grout material in SDU6 as a function of the elevation of the grout discharge port and the slurry rheology. A Bingham plastic model was used to represent the grout flow behavior. A two-phase modeling approach was taken to achieve the objective. This approach assumes that the air-grout interface determines the shape of the accumulation mound. The results of this study were used to develop design guidelines for the discharge ports of the Saltstone feed materials in the SDU6 facility. The focus areas of the modeling study are to estimate the domain size of the grout materials radially spread on the facility floor under the baseline modeling conditions, to perform a sensitivity analysis with respect to the baseline design and operating conditions such as the elevation of the discharge port, the discharge pipe diameter, and the grout properties, and to determine the changes in grout density as related to grout drop height. An axisymmetric two-phase modeling method was used for computational efficiency. Based on the nominal design and operating conditions, a transient computational approach was taken to compute flow fields mainly driven by pumping inertia and gravity. Detailed solution methodology and analysis results are discussed here
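
    For reference, the Bingham plastic law mentioned above relates shear stress to shear rate only once a yield stress is exceeded; the sketch below evaluates that constitutive relation with assumed illustrative values for the yield stress and plastic viscosity, not SDU6 grout data.

```python
import numpy as np

# Minimal sketch of the Bingham-plastic law used to represent grout rheology:
# below the yield stress the material does not flow; above it, stress grows
# linearly with shear rate. Parameter values are illustrative assumptions.
tau_yield = 20.0      # Pa, assumed yield stress
mu_plastic = 0.05     # Pa*s, assumed plastic viscosity

def bingham_stress(shear_rate):
    """Shear stress for a flowing Bingham material: tau = tau_y + mu_p * gamma_dot."""
    return tau_yield + mu_plastic * np.asarray(shear_rate)

def is_flowing(applied_stress):
    """The grout only yields (flows) where the applied stress exceeds tau_y."""
    return np.asarray(applied_stress) > tau_yield

print(bingham_stress([10.0, 100.0, 1000.0]))   # Pa
print(is_flowing([5.0, 25.0]))
```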

  15. Multidirectional flow analysis by cardiovascular magnetic resonance in aneurysm development following repair of aortic coarctation

    Directory of Open Access Journals (Sweden)

    Stalder Aurelien F

    2008-06-01

    Full Text Available Abstract Aneurysm formation is a life-threatening complication after operative therapy in coarctation. The identification of patients at risk for the development of such secondary pathologies is of high interest and requires a detailed understanding of the link between vascular malformation and altered hemodynamics. The routine morphometric follow-up by magnetic resonance angiography is a well-established technique. However, the intrinsic sensitivity of magnetic resonance (MR towards motion offers the possibility to additionally investigate hemodynamic consequences of morphological changes of the aorta. We demonstrate two cases of aneurysm formation 13 and 35 years after coarctation surgery based on a Waldhausen repair with a subclavian patch and a Vosschulte repair with a Dacron patch, respectively. Comprehensive flow visualization by cardiovascular MR (CMR was performed using a flow-sensitive, 3-dimensional, and 3-directional time-resolved gradient echo sequence at 3T. Subsequent analysis included the calculation of a phase contrast MR angiography and color-coded streamline and particle trace 3D visualization. Additional quantitative evaluation provided regional physiological information on blood flow and derived vessel wall parameters such as wall shear stress and oscillatory shear index. The results highlight the individual 3D blood-flow patterns associated with the different vascular pathologies following repair of aortic coarctation. In addition to known factors predisposing for aneurysm formation after surgical repair of coarctation these findings indicate the importance of flow sensitive CMR to follow up hemodynamic changes with respect to the development of vascular disease.

  16. The diagnostic performance of CT-derived fractional flow reserve for evaluation of myocardial ischaemia confirmed by invasive fractional flow reserve: a meta-analysis.

    Science.gov (United States)

    Li, S; Tang, X; Peng, L; Luo, Y; Dong, R; Liu, J

    2015-05-01

    To review the literature on the diagnostic accuracy of CT-derived fractional flow reserve (FFRCT) for the evaluation of myocardial ischaemia in patients with suspected or known coronary artery disease, with invasive fractional flow reserve (FFR) as the reference standard. A PubMed, EMBASE, and Cochrane cross-search was performed. The pooled diagnostic accuracy of FFRCT, with FFR as the reference standard, was primarily analysed, and then compared with that of CT angiography (CTA). The thresholds to diagnose ischaemia were FFR ≤0.80 or CTA ≥50% stenosis. Data extraction, synthesis, and statistical analysis were performed by standard meta-analysis methods. Three multicentre studies (NXT Trial, DISCOVER-FLOW study and DeFACTO study) were included, examining 609 patients and 1050 vessels. The pooled sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (LR+), negative likelihood ratio (LR-), and diagnostic odds ratio (DOR) for FFRCT were 89% (85-93%), 71% (65-75%), 70% (65-75%), 90% (85-93%), 3.31 (1.79-6.14), 0.16 (0.11-0.23), and 21.21 (9.15-49.15) at the patient-level, and 83% (78-63%), 78% (75-81%), 61% (56-65%), 92% (89-90%), 4.02 (1.84-8.80), 0.22 (0.13-0.35), and 19.15 (5.73-63.93) at the vessel-level. At per-patient analysis, FFRCT has similar sensitivity but improved specificity, PPV, NPV, LR+, LR-, and DOR versus those of CTA. At per-vessel analysis, FFRCT had a slightly lower sensitivity, similar NPV, but improved specificity, PPV, LR+, LR-, and DOR compared with those of CTA. The area under the summary receiver operating characteristic curves for FFRCT was 0.8909 at patient-level and 0.8865 at vessel-level, versus 0.7402 for CTA at patient-level. FFRCT, which was associated with improved diagnostic accuracy versus CTA, is a viable alternative to FFR for detecting coronary ischaemic lesions. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
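
    As a quick consistency check on the reported summary statistics, the likelihood ratios and diagnostic odds ratio follow from sensitivity and specificity as LR+ = sens/(1 - spec), LR- = (1 - sens)/spec, and DOR = LR+/LR-. The sketch below applies these point-estimate formulas to the pooled values; small differences from the published figures are expected because the paper pools each quantity in its own meta-analysis.

```python
# Back-of-envelope check of how the reported likelihood ratios and diagnostic
# odds ratio relate to pooled sensitivity and specificity (point estimates only).
def diagnostic_summary(sens, spec):
    lr_pos = sens / (1.0 - spec)
    lr_neg = (1.0 - sens) / spec
    return lr_pos, lr_neg, lr_pos / lr_neg   # DOR = LR+ / LR-

for label, sens, spec in [("patient-level", 0.89, 0.71), ("vessel-level", 0.83, 0.78)]:
    lr_pos, lr_neg, dor = diagnostic_summary(sens, spec)
    print(f"{label}: LR+ {lr_pos:.2f}, LR- {lr_neg:.2f}, DOR {dor:.1f}")
```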

  17. Ethical sensitivity in professional practice: concept analysis.

    Science.gov (United States)

    Weaver, Kathryn; Morse, Janice; Mitcham, Carl

    2008-06-01

    This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offers an inclusive view of ethical sensitivity that addresses some of the limitations with prior conceptualizations.

  18. Multitarget global sensitivity analysis of n-butanol combustion.

    Science.gov (United States)

    Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T

    2013-05-02

    A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis.

  19. Real-Time and In-Flow Sensing Using a High Sensitivity Porous Silicon Microcavity-Based Sensor

    Directory of Open Access Journals (Sweden)

    Raffaele Caroselli

    2017-12-01

    Full Text Available Porous silicon seems to be an appropriate material platform for the development of high-sensitivity and low-cost optical sensors, as their porous nature increases the interaction with the target substances, and their fabrication process is very simple and inexpensive. In this paper, we present the experimental development of a porous silicon microcavity sensor and its use for real-time in-flow sensing application. A high-sensitivity configuration was designed and then fabricated, by electrochemically etching a silicon wafer. Refractive index sensing experiments were realized by flowing several dilutions with decreasing refractive indices, and measuring the spectral shift in real-time. The porous silicon microcavity sensor showed a very linear response over a wide refractive index range, with a sensitivity around 1000 nm/refractive index unit (RIU, which allowed us to directly detect refractive index variations in the 10−7 RIU range.

  20. Pressure sensitivity of flow oscillations in postocclusive reactive skin hyperemia.

    Science.gov (United States)

    Strucl, M; Peterec, D; Finderle, Z; Maver, J

    1994-05-01

    Skin blood flow was monitored using a laser-Doppler (LD) flowmeter in 21 healthy volunteers after an occlusion of the digital arteries. The peripheral vascular bed was exposed to occlusion ischemia of varying duration (1, 4, or 8 min) and to a change in digital arterial pressure produced by different positions of the arm above heart level to characterize the pattern of LD flow oscillations in postocclusive reactive hyperemia (PRH) and to elucidate the relevance of metabolic and myogenic mechanisms in governing its fundamental frequency. The descending part of the hyperemic flow was characterized by the appearance of conspicuous periodic oscillations with a mean fundamental frequency of 7.2 +/- 1.5 cycles/min (SD, n = 9), as assessed by a Fourier transform frequency analysis of 50-s sections of flow. The mean respiratory frequency during the periods of flow frequency analysis was 17.0 +/- 2.2 (SD, n = 9), and the PRH oscillations remained during apnea in all tested subjects. The area under the maximum flow curve increased significantly with prolongation of the occlusion (paired t test, P blood pressure in the digital arteries, which suggests the predominant role of a metabolic component in this part of the PRH response. In contrast, the fundamental frequency of PRH oscillations exhibited a significant decrease with a reduction in the estimated digital arterial pressure (linear regression, b = 0.08, P < 0.001; n = 12), but did not change with the prolongation of arterial occlusion despite a significant increase in mean LD flow (paired t test, P < 0.001; n = 9).(ABSTRACT TRUNCATED AT 250 WORDS)
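
    The frequency analysis step can be illustrated on a synthetic trace: a 50-s laser-Doppler signal containing a 7.2 cycles/min oscillation is transformed with an FFT and the fundamental is read off the spectral peak. The sampling rate, oscillation amplitude, and noise level are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustration of extracting the fundamental frequency of flow oscillations
# from a 50-s laser-Doppler trace by Fourier analysis (synthetic signal).
fs = 20.0                          # Hz, assumed sampling rate
t = np.arange(0, 50, 1 / fs)       # 50-s analysis window, as in the study
f_fund = 7.2 / 60.0                # 7.2 cycles/min expressed in Hz
flow = 1.0 + 0.3 * np.sin(2 * np.pi * f_fund * t) + 0.05 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(flow - flow.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"fundamental frequency ≈ {peak * 60:.1f} cycles/min")
```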

  1. Flow injection analysis using carbon film resistor electrodes for amperometric determination of ambroxol.

    Science.gov (United States)

    Felix, Fabiana S; Brett, Christopher M A; Angnes, Lúcio

    2008-06-30

    Flow injection analysis (FIA) using a carbon film sensor for amperometric detection was explored for ambroxol analysis in pharmaceutical formulations. The flow cell specially designed in the lab generated sharp and reproducible current peaks, with a wide linear dynamic range from 5×10⁻⁷ to 3.5×10⁻⁴ mol L⁻¹ in 0.1 mol L⁻¹ sulfuric acid electrolyte, as well as high sensitivity, 0.110 A mol⁻¹ L cm⁻² at the optimized flow rate. A detection limit of 7.6×10⁻⁸ mol L⁻¹ and a sampling frequency of 50 determinations per hour were achieved, employing injected volumes of 100 μL and a flow rate of 2.0 mL min⁻¹. The repeatability, expressed as R.S.D. for successive and alternated injections of 6.0×10⁻⁶ and 6.0×10⁻⁵ mol L⁻¹ ambroxol solutions, was 3.0 and 1.5%, respectively, without any noticeable memory effect between injections. The proposed method was applied to the analysis of ambroxol in pharmaceutical samples and the results obtained were compared with UV spectrophotometric and acid-base titrimetric methods. Good agreement between the results of the three methods and the labeled values was achieved, corroborating the good performance of the proposed electrochemical methodology for ambroxol analysis.
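
    A generic sketch of the calibration arithmetic behind such figures of merit is given below: a linear fit of peak current against concentration gives the sensitivity, and a detection limit can be estimated as three times the residual standard deviation divided by the slope. The data points are synthetic and are not the paper's ambroxol measurements.

```python
import numpy as np

# Generic linear-calibration sketch for an amperometric FIA detector: fit peak
# current vs. concentration and estimate a detection limit as 3*sigma/slope.
# The numbers below are synthetic, not the paper's ambroxol data.
conc = np.array([5e-7, 1e-6, 1e-5, 1e-4, 3.5e-4])                    # mol L^-1
current = 0.110 * conc + np.array([2e-9, -1e-9, 5e-9, -4e-9, 3e-9])  # A (synthetic)

slope, intercept = np.polyfit(conc, current, 1)
residual_sd = np.std(current - (slope * conc + intercept), ddof=2)
lod = 3.0 * residual_sd / slope

print(f"sensitivity ≈ {slope:.3f} A mol^-1 L, LOD ≈ {lod:.1e} mol L^-1")
```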

  2. Techniques for sensitivity analysis of SYVAC results

    International Nuclear Information System (INIS)

    Prust, J.O.

    1985-05-01

    Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in the regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their applicability to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends developing now a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)

  3. Sensitivity analysis of a PWR pressurizer

    International Nuclear Information System (INIS)

    Bruel, Renata Nunes

    1997-01-01

    A sensitivity analysis relative to the parameters and modelling of the physical process in a PWR pressurizer has been performed. The sensitivity analysis was developed by implementing changes in the key parameters and theoretical modellings, which generated a comprehensive matrix of influences for each change analysed. The major influences observed were the flashing phenomenon and the steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)

  4. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

    In this analysis, we study the sensitivity due to a variation of the initial condition and of the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.

  5. Vortex dynamics in a pipe T-junction: Recirculation and sensitivity

    Science.gov (United States)

    Chen, Kevin K.; Rowley, Clarence W.; Stone, Howard A.

    2015-03-01

    In the last few years, many researchers have noted that regions of recirculating flow often exhibit particularly high sensitivity to spatially localized feedback. We explore the flow through a T-shaped pipe bifurcation—a simple and ubiquitous, but generally poorly understood flow configuration—and provide a complex example of the relation between recirculation and sensitivity. When Re ≥ 320, a phenomenon resembling vortex breakdown occurs in four locations in the junction, with internal stagnation points appearing on vortex axes and causing flow reversal. The structure of the recirculation is similar to the traditional bubble-type breakdown. These recirculation regions are highly sensitive to spatially localized feedback in the linearized Navier-Stokes operator. The flow separation at the corners of the "T," however, does not exhibit this kind of sensitivity. We focus our analysis on the Reynolds number of 560, near the first Hopf bifurcation of the flow.

  6. Numerical Analysis of Turbulent Flow around Tube Bundle by Applying CFD Best Practice Guideline

    International Nuclear Information System (INIS)

    Lee, Gong Hee; Bang, Young Seok; Woo, Sweng Woong; Cheng, Ae Ju

    2013-01-01

    In this study, the numerical analysis of turbulent flow around both a staggered and an in-line tube bundle was conducted using ANSYS CFX V.13, a commercial CFD software package. The flow was assumed to be steady, incompressible, and isothermal. According to the CFD Best Practice Guideline, a sensitivity study for grid size, the accuracy of the discretization scheme for the convection term, and the turbulence model was conducted, and its results were compared with the experimental data to estimate the applicability of the CFD Best Practice Guideline. It was concluded that the CFD Best Practice Guideline did not always guarantee an improvement in the prediction performance of the commercial CFD software in the field of tube bundle flow

  7. Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.

    Science.gov (United States)

    Kiparissides, A; Hatzimanikatis, V

    2017-01-01

    The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight in the regulatory functions and how to manipulate them. Constraint based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier

  8. Sensitivity Analysis of Uncertainty Parameter based on MARS-LMR Code on SHRT-45R of EBR II

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seok-Ju; Kang, Doo-Hyuk; Seo, Jae-Seung [System Engineering and Technology Co., Daejeon (Korea, Republic of); Bae, Sung-Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeong, Hae-Yong [Sejong University, Seoul (Korea, Republic of)

    2016-10-15

    In order to assess the uncertainty quantification capability of the MARS-LMR code, the code has been improved by modifying the source code to accommodate the calculation process required for uncertainty quantification. In the present study, an Unprotected Loss of Flow (ULOF) transient is selected as a typical case of an Anticipated Transient without Scram (ATWS), which belongs to the DEC category. The MARS-LMR input generation for EBR-II SHRT-45R and the execution work are performed using the PAPIRUS program. The sensitivity analysis is carried out for the uncertainty parameters of the MARS-LMR code for EBR-II SHRT-45R. Based on the results of the sensitivity analysis, dominant parameters with large sensitivity to the FoM are picked out. The dominant parameters selected are closely related to the development process of the ULOF event.

  9. Flow analysis techniques for phosphorus: an overview.

    Science.gov (United States)

    Estela, José Manuel; Cerdà, Víctor

    2005-04-15

    A bibliographical review of the implementation and the results obtained in the use of different flow analytical techniques for the determination of phosphorus is carried out. The sources, occurrence and importance of phosphorus, together with several aspects regarding the analysis and terminology used in the determination of this element, are briefly described. A classification as well as a brief description of the basis, advantages and disadvantages of the different existing flow techniques, namely segmented flow analysis (SFA), flow injection analysis (FIA), sequential injection analysis (SIA), all injection analysis (AIA), batch injection analysis (BIA), multicommutated FIA (MCFIA), multisyringe FIA (MSFIA) and multipumped FIA (MPFIA), is also carried out. The most relevant manuscripts regarding the analysis of phosphorus by means of flow techniques are herein classified according to the instrumental detection technique used, with the aim of facilitating their study and obtaining an overall scope. Finally, the analytical characteristics of numerous flow methods reported in the literature are provided in the form of a table, and their applicability to samples with different matrices, namely water samples (marine, river, estuarine, waste, industrial, drinking, etc.), soil leachates, plant leaves, toothpaste, detergents, foodstuffs (wine, orange juice, milk), biological samples, sugars, fertilizers, hydroponic solutions, soil extracts and cyanobacterial biofilms, is tabulated.

  10. Unsaturated Zone Flow Patterns and Analysis

    International Nuclear Information System (INIS)

    Ahlers, C.

    2001-01-01

    This Analysis/Model Report (AMR) documents the development of an expected-case model for unsaturated zone (UZ) flow and transport that will be described in terms of the representativeness of models of the natural system. The expected-case model will provide an evaluation of the effectiveness of the natural barriers, assess the impact of conservatism in the Total System Performance Assessment (TSPA), and support the development of further models and analyses for public confidence building. The present models used in ''Total System Performance Assessment for the Site Recommendation'' (Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M and O) 2000 [1532461]) underestimate the natural-barrier performance because of conservative assumptions and parameters and do not adequately address uncertainty and alternative models. The development of an expected case model for the UZ natural barrier addresses issues regarding flow-pattern analysis and modeling that had previously been treated conservatively. This is in line with the Repository Safety Strategy (RSS) philosophy of treating conservatively those aspects of the UZ flow and transport system that are not important for achieving regulatory dose (CRWMS M and O 2000 [153246], Section 1.1.1). The development of an expected case model for the UZ also provides defense-in-depth in areas requiring further analysis of uncertainty and alternative models. In general, the value of the conservative case is to provide a more easily defensible TSPA for behavior of UZ flow and transport processes at Yucca Mountain. This AMR has been prepared in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' (Bechtel SAIC Company (BSC) 2001 [155051], Section 1.3 - Work Package 4301213UMG). The work scope is to examine the data and current models of flow and transport in the Yucca Mountain UZ to identify models and analyses where conservatism may be

  11. [Sensitivity and specificity of the cerebral blood flow reactions to acupuncture in the newborn infants presenting with hypoxic ischemic encephalopathy].

    Science.gov (United States)

    Filonenko, A V; Vasilenko, A M; Khan, M A

    2015-01-01

    To evaluate the effects of acupuncture integrated into the standard therapy on the condition of cerebral blood flow and other syndromes associated with cerebral ischemia in newborn infants. MATERIAL AND METHODS. A total of 131 pairs of puerperae and newborns with hypoxic ischemic encephalopathy were divided into four treatment groups: 34 children of the first group were given standard therapy (control); in the second group, comprised of 33 mothers and children, the standard treatment was supplemented by acupuncture; the third group included only 32 mothers given the acupuncture treatment alone; and the fourth group contained only 32 newborn infants treated by acupuncture. Each course of acupuncture treatment consisted of five sessions. Sensitivity and specificity of the cerebral blood flow reactions were determined based on the results of the ROC analysis and the area under the curve before and after the treatment. The treatment with the use of acupuncture greatly improved the cerebrospinal hemodynamics (p newborn babies. The high level of sensitivity (84.4-94.8%) associated with good specificity makes it possible to distinguish between the true positive and true negative cases. Acupuncture integrated into the treatment of "mother-baby" pairs presenting with hypoxic ischemic encephalopathy can be used to improve the initially low level of cerebral blood flow in neonates presenting with this condition.

  12. Robust-mode analysis of hydrodynamic flows

    Science.gov (United States)

    Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.

    2017-04-01

    The emergence of techniques to extract high-frequency, high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal component analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.
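
    Robust-mode analysis builds on Koopman decomposition; a common finite-dimensional approximation is dynamic mode decomposition (DMD), sketched below on synthetic snapshot data. The data, truncation rank and post-processing are illustrative and do not reproduce the authors' robustness criterion.

      import numpy as np

      def dmd(X, rank):
          """Exact DMD of a snapshot matrix X (state x time); returns eigenvalues and modes."""
          X1, X2 = X[:, :-1], X[:, 1:]
          U, s, Vh = np.linalg.svd(X1, full_matrices=False)
          U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
          A_tilde = U.conj().T @ X2 @ V / s          # reduced linear operator
          eigvals, W = np.linalg.eig(A_tilde)
          modes = X2 @ V / s @ W                     # exact DMD modes
          return eigvals, modes

      # Illustrative data: two travelling waves plus noise on 200 spatial points x 100 snapshots.
      x = np.linspace(0.0, 2.0 * np.pi, 200)[:, None]
      t = np.linspace(0.0, 10.0, 100)[None, :]
      X = np.sin(2 * x - 3 * t) + 0.5 * np.cos(5 * x + 1.2 * t) + 0.05 * np.random.randn(200, 100)

      eigvals, modes = dmd(X, rank=6)
      print("Mode phase angles (rad per snapshot):", np.angle(eigvals))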

  13. Modular Control Flow Analysis for Libraries

    DEFF Research Database (Denmark)

    Probst, Christian W.

    2002-01-01

    One problem in analyzing object-oriented languages is that the exact control flow graph is not known statically due to dynamic dispatching. However, this is needed in order to apply the large class of known interprocedural analyses. Control Flow Analysis in the object-oriented setting aims...

  14. Sensitivity Analysis of Viscoelastic Structures

    Directory of Open Access Journals (Sweden)

    A.M.G. de Lima

    2006-01-01

    Full Text Available In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constraining damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.

  15. Numerical modeling and sensitivity analysis of seawater intrusion in a dual-permeability coastal karst aquifer with conduit networks

    Directory of Open Access Journals (Sweden)

    Z. Xu

    2018-01-01

    Full Text Available Long-distance seawater intrusion has been widely observed through the subsurface conduit system in coastal karst aquifers as a source of groundwater contaminant. In this study, seawater intrusion in a dual-permeability karst aquifer with conduit networks is studied by the two-dimensional density-dependent flow and transport SEAWAT model. Local and global sensitivity analyses are used to evaluate the impacts of boundary conditions and hydrological characteristics on modeling seawater intrusion in a karst aquifer, including hydraulic conductivity, effective porosity, specific storage, and dispersivity of the conduit network and of the porous medium. The local sensitivity analysis evaluates the parameters' sensitivities for modeling seawater intrusion, specifically in the Woodville Karst Plain (WKP). A more comprehensive interpretation of parameter sensitivities, including the nonlinear relationship between simulations and parameters, and/or parameter interactions, is addressed in the global sensitivity analysis. The conduit parameters and boundary conditions are important to the simulations in the porous medium because of the dynamical exchanges between the two systems. The sensitivity study indicates that salinity and head simulations in the karst features, such as the conduit system and submarine springs, are critical for understanding seawater intrusion in a coastal karst aquifer. The evaluation of hydraulic conductivity sensitivity in the continuum SEAWAT model may be biased since the conduit flow velocity is not accurately calculated by Darcy's equation as a function of head difference and hydraulic conductivity. In addition, dispersivity is no longer an important parameter in an advection-dominated karst aquifer with a conduit system, compared to the sensitivity results in a porous medium aquifer. In the end, the extents of seawater intrusion are quantitatively evaluated and measured under different scenarios with the variabilities of

  16. Subsurface stormflow modeling with sensitivity analysis using a Latin-hypercube sampling technique

    International Nuclear Information System (INIS)

    Gwo, J.P.; Toran, L.E.; Morris, M.D.; Wilson, G.V.

    1994-09-01

    Subsurface stormflow, because of its dynamic and nonlinear features, has been a very challenging process in both field experiments and modeling studies. The disposal of wastes in subsurface stormflow and vadose zones at Oak Ridge National Laboratory, however, demands more effort to characterize these flow zones and to study their dynamic flow processes. Field data and modeling studies for these flow zones are relatively scarce, and the effect of engineering designs on the flow processes is poorly understood. On the basis of a risk assessment framework and a conceptual model for the Oak Ridge Reservation area, numerical models of a proposed waste disposal site were built, and a Latin-hypercube simulation technique was used to study the uncertainty of model parameters. Four scenarios, with three engineering designs, were simulated, and the effectiveness of the engineering designs was evaluated. Sensitivity analysis of model parameters suggested that hydraulic conductivity was the most influential parameter. However, local heterogeneities may alter flow patterns and result in complex recharge and discharge patterns. Hydraulic conductivity, therefore, may not be used as the only reference for subsurface flow monitoring and engineering operations. Neither of the two engineering designs, capping and French drains, was found to be effective in hydrologically isolating downslope waste trenches. However, pressure head contours indicated that combinations of both designs may prove more effective than either one alone
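
    A minimal Latin-hypercube sampler of the kind used in such studies is sketched below; the three parameters and their ranges are placeholders, not values from the Oak Ridge models.

      import numpy as np

      def latin_hypercube(n_samples, n_params, seed=None):
          """Stratified uniform [0, 1) sample: one value per stratum for each parameter."""
          rng = np.random.default_rng(seed)
          u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
          for j in range(n_params):
              u[:, j] = rng.permutation(u[:, j])
          return u

      # Illustrative parameter ranges (not site values): log10 hydraulic conductivity [m/s],
      # porosity [-] and a storage coefficient [1/m] for a hillslope stormflow model.
      lo = np.array([-7.0, 0.25, 1e-4])
      hi = np.array([-4.0, 0.45, 1e-2])
      samples = lo + latin_hypercube(100, 3, seed=42) * (hi - lo)
      samples[:, 0] = 10.0 ** samples[:, 0]          # back-transform conductivity

      # Each row defines one model run; rank correlations between inputs and simulated
      # discharge would then indicate which parameter dominates the response.
      print(samples[:3])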

  17. Boolean logic analysis for flow regime recognition of gas–liquid horizontal flow

    International Nuclear Information System (INIS)

    Ramskill, Nicholas P; Wang, Mi

    2011-01-01

    In order to develop a flowmeter for the accurate measurement of multiphase flows, it is of the utmost importance to correctly identify the flow regime present to enable the selection of the optimal method for metering. In this study, the horizontal flow of air and water in a pipeline was studied under a multitude of conditions using electrical resistance tomography, but the flow regimes that are presented in this paper have been limited to plug and bubble air–water flows. This study proposes a novel method for recognition of the prevalent flow regime using only a fraction of the data, thus rendering the analysis more efficient. By considering the average conductivity of five zones along the central axis of the tomogram, key features can be identified, thus enabling the recognition of the prevalent flow regime. Boolean logic and frequency spectrum analysis have been applied for flow regime recognition. Visualization of the flow using the reconstructed images provides a qualitative comparison between different flow regimes. Application of the Boolean logic scheme enables a quantitative comparison of the flow patterns, thus reducing the subjectivity in the identification of the prevalent flow regime

  18. OPR1000 RCP Flow Coastdown Analysis using SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong-Hyuk; Kim, Seyun [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    The Korean nuclear industry developed a thermal-hydraulic analysis code for the safety analysis of PWRs, named SPACE (Safety and Performance Analysis Code for Nuclear Power Plant). The current loss-of-flow transient analysis of OPR1000 uses the COAST code to calculate the transient RCS (Reactor Coolant System) flow. The COAST code calculates RCS loop flow using pump performance curves and RCP (Reactor Coolant Pump) inertia. In this paper, the SPACE code is used to reproduce the RCS flow rates calculated by the COAST code. The loss-of-flow transient is a transient initiated by a reduction of forced reactor coolant circulation. Typical loss-of-flow transients are complete loss of flow (CLOF) and locked rotor (LR). The OPR1000 RCP flow coastdown analysis was performed with SPACE using a simplified nodalization. A complete loss of flow (4 RCP trip) was analyzed. The results show good agreement with those from the COAST code, which is the CE code for calculating RCS flow during loss-of-flow transients. Through this study, we confirmed that the SPACE code can be used instead of the COAST code for RCP flow coastdown analysis.
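
    The essence of a coastdown calculation from pump inertia can be sketched with a single-pump model in which the hydraulic torque scales with the square of speed and loop flow scales with speed; the inertia, rated speed and torque below are assumed round numbers, not OPR1000 design data, and the sketch is not the COAST or SPACE model.

      import numpy as np

      # Simplified coastdown: I * dw/dt = -T0 * (w / w0)**2, loop flow fraction ~ w / w0.
      I = 3500.0     # pump + motor moment of inertia, kg m^2 (assumed)
      w0 = 124.0     # rated speed, rad/s (assumed)
      T0 = 2.0e4     # rated hydraulic torque, N m (assumed)

      dt, t_end = 0.05, 60.0
      t = np.arange(0.0, t_end + dt, dt)
      w = np.empty_like(t)
      w[0] = w0
      for k in range(1, t.size):
          w[k] = w[k - 1] - dt * (T0 / I) * (w[k - 1] / w0) ** 2   # explicit Euler step

      flow_fraction = w / w0
      # This model has the analytic solution w(t) = w0 / (1 + t / tau) with tau = I * w0 / T0.
      tau = I * w0 / T0
      print(f"tau = {tau:.1f} s, flow fraction at 10 s: {flow_fraction[int(10 / dt)]:.2f}")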

  19. Flow ripple reduction of an axial piston pump by a combination of cross-angle and pressure relief grooves: Analysis and optimization

    International Nuclear Information System (INIS)

    Xu, Bing; Ye, Shaogan; Zhang, Junhui; Zhang, Chunfeng

    2016-01-01

    This paper investigates the potential of flow ripple reduction of an axial piston pump by a combination of cross-angle and pressure relief grooves. A dynamic model is developed to analyze the pumping dynamics of the pump and validated by experimental results. The effects of the cross-angle on the flow ripples in the outlet and inlet ports, and on the piston chamber pressure, are investigated. The effects of pressure relief grooves on the optimal solutions obtained by a multi-objective optimization method are identified. A sensitivity analysis is performed to investigate the sensitivity of the cross-angle to different working conditions. The results reveal that the flow ripples from the optimal solutions are smaller using the cross-angle and pressure relief grooves than those using the cross-angle and ordinary precompression and decompression angles, and that the cross-angle can be smaller. In addition, when the optimal design is used, the sensitivity of the outlet flow ripple can be reduced significantly.

  20. ANALYSIS AND ACCOUNTING OF TOTAL CASH FLOW

    Directory of Open Access Journals (Sweden)

    MELANIA ELENA MICULEAC

    2012-01-01

    Full Text Available In order to reach the objective of supplying relevant information regarding the liquidity inflows and outflows during a financial exercise, the total cash flow analysis must include the analysis of the cashable result from operations, of the payments and receipts related to the investment and financing decisions of the last exercise, as well as the analysis of the treasury variation (of cash items). The management of total cash flows ensures the correlation of the current liquidity flows arising from receipts with the payment flows, in order to provide continuity of payment of mature obligations.

  1. Sensitivity analysis of EQ3

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.

    1990-01-01

    A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs
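
    The payoff of the adjoint technique described above is that one backward pass yields the sensitivities of a single response to all inputs. A loose modern analogue is reverse-mode automatic differentiation; the sketch below uses JAX (assumed available) on a toy response function that merely stands in for EQ3.

      import jax
      import jax.numpy as jnp

      def response(params):
          """Toy scalar response standing in for a code output such as a saturation index."""
          weights = jnp.linspace(0.5, 1.5, params.size)
          return jnp.log(jnp.sum(weights * jnp.exp(-params))) + jnp.sum(params ** 2) / params.size

      n_inputs = 31_000                      # same order as the input count quoted above
      params = jnp.linspace(0.1, 2.0, n_inputs)

      # One reverse pass gives the derivative with respect to every input.
      value, gradient = jax.value_and_grad(response)(params)
      rel_sens = gradient * params / value   # relative (logarithmic) sensitivities for ranking
      print(value, rel_sens[:5])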

  2. Analysis of basophil activation by flow cytometry in pediatric house dust mite allergy.

    Science.gov (United States)

    González-Muñoz, Miguel; Villota, Julian; Moneo, Ignacio

    2008-06-01

    Detection of allergen-induced basophil activation by flow cytometry has been shown to be a useful tool for allergy diagnosis. The aim of this study was to assess the potential of this technique for the diagnosis of pediatric house dust mite allergy. Quantification of total and specific IgE and basophil activation test were performed to evaluate mite allergic (n = 24), atopic (n = 23), and non-allergic children (n = 9). Allergen-induced basophil activation was detected as a CD63-upregulation. Receiver operating characteristics (ROC) curve analysis was performed to calculate the optimal cut-off value of activated basophils discriminating mite allergic and non-allergic children. ROC curve analysis yielded a threshold value of 18% activated basophils when mite-sensitized and atopic children were studied [area under the curve (AUC) = 0.99, 95% confidence interval (CI) = 0.97-1.01, p 43 kU/l) values for Dermatophagoides pteronyssinus allergen. They also showed positive prick (wheal diameter >1.0 cm) and basophil activation (>87%) tests and high specific IgE (>100 kU/l) with shrimp allergen. Shrimp sensitization was demonstrated by high levels of Pen a 1-specific IgE (>100 kU/l). Cross-reactivity between mite and shrimp was confirmed by fluorescence enzyme immunoassay (FEIA-CAP) inhibition study in these two cases. This study demonstrated that the analysis of allergen-induced CD63 upregulation by flow cytometry is a reliable tool for diagnosis of mite allergy in pediatric patients, with sensitivity similar to routine diagnostic tests and a higher specificity. Furthermore, this method can provide additional information in case of disagreement between in vivo and in vitro test results.
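
    The ROC step that yields a cut-off such as the 18% activated-basophil threshold can be sketched as below; the group sizes match the abstract, but the simulated activation percentages and the use of Youden's J to pick the threshold are illustrative assumptions (scikit-learn assumed available).

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(7)
      # Simulated % activated basophils: allergic children high, atopic/non-allergic children low.
      allergic = rng.normal(60.0, 20.0, 24).clip(0.0, 100.0)
      controls = rng.normal(8.0, 5.0, 32).clip(0.0, 100.0)
      y_true = np.r_[np.ones_like(allergic), np.zeros_like(controls)]
      scores = np.r_[allergic, controls]

      fpr, tpr, thresholds = roc_curve(y_true, scores)
      auc = roc_auc_score(y_true, scores)

      # Youden's J picks the threshold maximising sensitivity + specificity - 1.
      best = np.argmax(tpr - fpr)
      print(f"AUC = {auc:.2f}, cut-off ~ {thresholds[best]:.1f}% activated basophils "
            f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")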

  3. SENSIT: a cross-section and design sensitivity and uncertainty analysis code

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE
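
    The core propagation step in this kind of code is the first-order "sandwich rule", var(R) = S^T C S, applied to a multigroup sensitivity profile and a cross-section covariance matrix; a generic numerical sketch with invented values follows.

      import numpy as np

      # Relative sensitivity profile S_g = (dR/R)/(dsigma_g/sigma_g) for five energy groups
      # and a relative covariance matrix C for the same reaction (all values illustrative).
      S = np.array([0.02, 0.10, 0.35, 0.20, 0.05])

      std = np.array([0.05, 0.04, 0.03, 0.06, 0.10])       # relative standard deviations
      corr = np.full((5, 5), 0.3) + 0.7 * np.eye(5)        # assumed correlation structure
      C = np.outer(std, std) * corr                        # relative covariance matrix

      rel_variance = S @ C @ S                             # first-order sandwich rule
      print(f"Relative standard deviation of the response: {np.sqrt(rel_variance):.2%}")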

  4. Gaseous slip flow analysis of a micromachined flow sensor for ultra small flow applications

    OpenAIRE

    Jang, Jaesung; Wereley, Steven

    2007-01-01

    The velocity slip of a fluid at a wall is one of the most typical phenomena in microscale gas flows. This paper presents a flow analysis considering the velocity slip in a capacitive micro gas flow sensor based on pressure difference measurements along a microchannel. The tangential momentum accommodation coefficient (TMAC) measurements of a particular channel wall in planar microchannels will be presented while the previous micro gas flow studies have been based on the same TMACs on both wal...

  5. Hybrid Information Flow Analysis for Programs with Arrays

    Directory of Open Access Journals (Sweden)

    Gergö Barany

    2016-07-01

    Full Text Available Information flow analysis checks whether certain pieces of (confidential) data may affect the results of computations in unwanted ways and thus leak information. Dynamic information flow analysis adds instrumentation code to the target software to track flows at run time and raise alarms if a flow policy is violated; hybrid analyses combine this with preliminary static analysis. Using a subset of C as the target language, we extend previous work on hybrid information flow analysis that handled pointers to scalars. Our extended formulation handles arrays, pointers to array elements, and pointer arithmetic. Information flow through arrays of pointers is tracked precisely while arrays of non-pointer types are summarized efficiently. A prototype of our approach is implemented using the Frama-C program analysis and transformation framework. Work on a full machine-checked proof of the correctness of our approach using Isabelle/HOL is well underway; we present the existing parts and sketch the rest of the correctness argument.

  6. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
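
    One of the smoothing procedures listed above, locally weighted regression (LOESS), can serve as a simple nonparametric sensitivity screen: smooth the output against each input separately and compare how much variance each smooth captures. The sketch below uses statsmodels' lowess on a toy model; it is not the stepwise procedure of the report.

      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      rng = np.random.default_rng(0)
      n = 500
      x = rng.uniform(-1.0, 1.0, (n, 3))                                        # three sampled inputs
      y = np.sin(3.0 * x[:, 0]) + 0.3 * x[:, 1] ** 2 + rng.normal(0.0, 0.1, n)  # toy model output

      for j in range(3):
          fit = lowess(y, x[:, j], frac=0.4, return_sorted=False)
          explained = 1.0 - np.var(y - fit) / np.var(y)     # crude nonparametric importance
          print(f"input {j}: fraction of output variance captured by the smooth = {explained:.2f}")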

  7. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  8. Unsaturated Zone Flow Patterns and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    C. Ahlers

    2001-10-17

    This Analysis/Model Report (AMR) documents the development of an expected-case model for unsaturated zone (UZ) flow and transport that will be described in terms of the representativeness of models of the natural system. The expected-case model will provide an evaluation of the effectiveness of the natural barriers, assess the impact of conservatism in the Total System Performance Assessment (TSPA), and support the development of further models and analyses for public confidence building. The present models used in ''Total System Performance Assessment for the Site Recommendation'' (Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) 2000 [1532461]) underestimate the natural-barrier performance because of conservative assumptions and parameters and do not adequately address uncertainty and alternative models. The development of an expected case model for the UZ natural barrier addresses issues regarding flow-pattern analysis and modeling that had previously been treated conservatively. This is in line with the Repository Safety Strategy (RSS) philosophy of treating conservatively those aspects of the UZ flow and transport system that are not important for achieving regulatory dose (CRWMS M&O 2000 [153246], Section 1.1.1). The development of an expected case model for the UZ also provides defense-in-depth in areas requiring further analysis of uncertainty and alternative models. In general, the value of the conservative case is to provide a more easily defensible TSPA for behavior of UZ flow and transport processes at Yucca Mountain. This AMR has been prepared in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' (Bechtel SAIC Company (BSC) 2001 [155051], Section 1.3 - Work Package 4301213UMG). The work scope is to examine the data and current models of flow and transport in the Yucca Mountain UZ to identify models and analyses

  9. LFSTAT - An R-Package for Low-Flow Analysis

    Science.gov (United States)

    Koffler, D.; Laaha, G.

    2012-04-01

    When analysing daily streamflow data focusing on low flow and drought, the state of the art is well documented in the Manual on Low-Flow Estimation and Prediction [1] published by the WMO. While it is clear what has to be done, it is not so clear how to perform the analysis and make the calculation as reproducible as possible. Our software solution extends the high-performing open-source statistical software package R to analyse daily stream flow data focusing on low-flows. As command-line based programs are not everyone's preference, we also offer a plug-in for the R-Commander, an easy-to-use graphical user interface (GUI) to analyse data in R. Functionality includes estimation of the most important low-flow indices. Besides the standard flow indices, the BFI and recession constants can also be computed. The main applications of L-moment based extreme value analysis and regional frequency analysis (RFA) are available. Calculation of streamflow deficits is another important feature. The most common graphics are prepared and can easily be modified according to the user's preferences. Graphics include hydrographs for different periods, flexible streamflow deficit plots, baseflow visualisation, flow duration curves as well as double mass curves, just to name a few. The package uses an S3 class called lfobj (low-flow objects). Once these objects are created, analysis can be performed by mouse click, and a script can be saved to make the analysis easily reproducible. At the moment we offer an implementation of all major methods proposed in the WMO Manual on Low-Flow Estimation and Prediction. Future plans include, e.g., report export to odt files using odf-weave. We hope to offer a tool that eases and structures the analysis of stream flow data focusing on low-flows and makes the analysis transparent and communicable. The package is designed for hydrological research and water management practice, but can also be used in teaching students the first steps in low-flow hydrology.

  10. An analysis of sensitivity and uncertainty associated with the use of the HSPF model for EIA applications

    Energy Technology Data Exchange (ETDEWEB)

    Biftu, G.F.; Beersing, A.; Wu, S.; Ade, F. [Golder Associates, Calgary, AB (Canada)

    2005-07-01

    An outline of a new approach to assessing the sensitivity and uncertainty associated with surface water modelling results using Hydrological Simulation Program-Fortran (HSPF) was presented, as well as the results of a sensitivity and uncertainty analysis. The HSPF model is often used to characterize the hydrological processes in watersheds within the oil sands region. Typical applications of HSPF included calibration of the model parameters using data from gauged watersheds, as well as validation of calibrated models with data sets. Additionally, simulations are often conducted to make flow predictions to support the environmental impact assessment (EIA) process. However, a key aspect of the modelling components of the EIA process is the sensitivity and uncertainty of the modelling results with respect to the model parameters. Many of the variations in the HSPF model's outputs are caused by a small number of model parameters and outputs. A sensitivity analysis was performed to identify and focus on key parameters and assumptions that have the most influence on the model's outputs. The analysis entailed varying each parameter in turn, within a range, and examining the resulting relative changes in the model outputs. The uncertainty analysis consisted of the selection of probability distributions to characterize the uncertainty in the model's key sensitive parameters, as well as the use of Monte Carlo and HSPF simulation to determine the uncertainty in model outputs. tabs, figs.

  11. Sensitivity analysis for heat diffusion in a fin on a nuclear fuel element

    International Nuclear Information System (INIS)

    Tito, Max Werner de Carvalho

    2001-11-01

    projected for gas-cooled nuclear reactors to compensate for the low thermal transport efficiency of the coolant. The model is described by the temperature distribution equation and the corresponding boundary conditions. The adjoint system is used to determine the sensitivity coefficients for the case of interest. The equations resulting from both the direct model and the perturbative formalism are solved. The heat flow rate at a point of the fin and the average temperature excess were the response functionals studied. The half thickness, the thermal conductivity and heat transfer coefficients, and the heat flow from the base material were the parameters of interest for the sensitivity analysis. The results obtained through the perturbative method and through direct variation showed, in general and within acceptable physical limits, good agreement and excellent representativeness for the analyzed cases. This shows that the differential formalism is an important tool for sensitivity analysis and validates the application of the methodology to heat transmission problems on extended surfaces. The method proves to be necessary and efficient in the elaboration of thermal engineering projects. (author)

  12. Study of relationship between MUF correlation and detection sensitivity of statistical analysis

    International Nuclear Information System (INIS)

    Tamura, Toshiaki; Ihara, Hitoshi; Yamamoto, Yoichi; Ikawa, Koji

    1989-11-01

    Various kinds of statistical analysis have been proposed for NRTA (Near Real Time Materials Accountancy), which was devised to satisfy the timeliness goal among the detection goals of the IAEA. It is presumed that the statistical analysis results will differ between the case of rigorous error propagation (with MUF correlation) and the case of simplified error propagation (without MUF correlation). Therefore, measurement simulation and decision analysis were performed using a flow simulation of an 800 MTHM/Y model reprocessing plant, and the relationship between MUF correlation and the detection sensitivity and false-alarm rate of the statistical analysis was studied. The specific character of material accountancy for the 800 MTHM/Y model reprocessing plant was captured by this simulation. It also became clear that MUF correlation decreases not only the false-alarm rate but also the detection probability for protracted loss in the case of the CUMUF test and Page's test applied to NRTA. (author)

  13. Uncertainty and Sensitivity Analysis Results Obtained in the 1996 Performance Assessment for the Waste Isolation Pilot Plant

    International Nuclear Information System (INIS)

    Bean, J.E.; Berglund, J.W.; Davis, F.J.; Economy, K.; Garner, J.W.; Helton, J.C.; Johnson, J.D.; MacKinnon, R.J.; Miller, J.; O'Brien, D.G.; Ramsey, J.L.; Schreiber, J.D.; Shinta, A.; Smith, L.N.; Stockman, C.; Stoelzel, D.M.; Vaughn, P.

    1998-01-01

    The Waste Isolation Pilot Plant (WIPP) is located in southeastern New Mexico and is being developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. A detailed performance assessment (PA) for the WIPP was carried out in 1996 and supports an application by the DOE to the U.S. Environmental Protection Agency (EPA) for the certification of the WIPP for the disposal of TRU waste. The 1996 WIPP PA uses a computational structure that maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the many possible disruptions that could occur over the 10,000 yr regulatory period that applies to the WIPP and subjective uncertainty arising from the imprecision with which many of the quantities required in the PA are known. Important parts of this structure are (1) the use of Latin hypercube sampling to incorporate the effects of subjective uncertainty, (2) the use of Monte Carlo (i.e., random) sampling to incorporate the effects of stochastic uncertainty, and (3) the efficient use of the necessarily limited number of mechanistic calculations that can be performed to support the analysis. The use of Latin hypercube sampling generates a mapping from imprecisely known analysis inputs to analysis outcomes of interest that provides both a display of the uncertainty in analysis outcomes (i.e., uncertainty analysis) and a basis for investigating the effects of individual inputs on these outcomes (i.e., sensitivity analysis). The sensitivity analysis procedures used in the PA include examination of scatterplots, stepwise regression analysis, and partial correlation analysis. Uncertainty and sensitivity analysis results obtained as part of the 1996 WIPP PA are presented and discussed. Specific topics considered include two phase flow in the vicinity of the repository, radionuclide release from the repository, fluid flow and radionuclide
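
    One of the sensitivity measures named above, the partial correlation coefficient, correlates an input with the outcome after the linear effect of the other sampled inputs has been removed from both; a generic sketch on simulated data follows (not the WIPP calculations themselves).

      import numpy as np

      def partial_corr(X, y, j):
          """Partial correlation of input column j with y, controlling for the other inputs."""
          others = np.delete(X, j, axis=1)
          A = np.column_stack([np.ones(len(y)), others])
          rx = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]   # residual of x_j
          ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]               # residual of y
          return np.corrcoef(rx, ry)[0, 1]

      rng = np.random.default_rng(1)
      X = rng.uniform(size=(300, 4))                                  # stand-in for an LHS sample
      y = 5.0 * X[:, 0] + 2.0 * X[:, 2] + rng.normal(0.0, 0.5, 300)   # toy analysis outcome

      for j in range(4):
          print(f"parameter {j}: partial correlation = {partial_corr(X, y, j):+.2f}")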

  14. Uncertainty and Sensitivity Analysis Results Obtained in the 1996 Performance Assessment for the Waste Isolation Pilot Plant

    Energy Technology Data Exchange (ETDEWEB)

    Bean, J.E.; Berglund, J.W.; Davis, F.J.; Economy, K.; Garner, J.W.; Helton, J.C.; Johnson, J.D.; MacKinnon, R.J.; Miller, J.; O' Brien, D.G.; Ramsey, J.L.; Schreiber, J.D.; Shinta, A.; Smith, L.N.; Stockman, C.; Stoelzel, D.M.; Vaughn, P.

    1998-09-01

    The Waste Isolation Pilot Plant (WIPP) is located in southeastern New Mexico and is being developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. A detailed performance assessment (PA) for the WIPP was carried out in 1996 and supports an application by the DOE to the U.S. Environmental Protection Agency (EPA) for the certification of the WIPP for the disposal of TRU waste. The 1996 WIPP PA uses a computational structure that maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the many possible disruptions that could occur over the 10,000 yr regulatory period that applies to the WIPP and subjective uncertainty arising from the imprecision with which many of the quantities required in the PA are known. Important parts of this structure are (1) the use of Latin hypercube sampling to incorporate the effects of subjective uncertainty, (2) the use of Monte Carlo (i.e., random) sampling to incorporate the effects of stochastic uncertainty, and (3) the efficient use of the necessarily limited number of mechanistic calculations that can be performed to support the analysis. The use of Latin hypercube sampling generates a mapping from imprecisely known analysis inputs to analysis outcomes of interest that provides both a display of the uncertainty in analysis outcomes (i.e., uncertainty analysis) and a basis for investigating the effects of individual inputs on these outcomes (i.e., sensitivity analysis). The sensitivity analysis procedures used in the PA include examination of scatterplots, stepwise regression analysis, and partial correlation analysis. Uncertainty and sensitivity analysis results obtained as part of the 1996 WIPP PA are presented and discussed. Specific topics considered include two phase flow in the vicinity of the repository, radionuclide release from the repository, fluid flow and radionuclide

  15. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to performing sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices and makes use of a meta model. The paper also presents an application to a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
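
    Sobol' first-order and total indices can be estimated with the pick-and-freeze scheme sketched below; the Ishigami function stands in for the meta model of the thermal-hydraulic response, and the estimators are the standard Saltelli and Jansen forms rather than the specific implementation of the paper.

      import numpy as np

      def ishigami(X, a=7.0, b=0.1):
          """Common benchmark standing in for a cheap meta model of the code output."""
          return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

      rng = np.random.default_rng(3)
      n, d = 10_000, 3
      A = rng.uniform(-np.pi, np.pi, (n, d))
      B = rng.uniform(-np.pi, np.pi, (n, d))

      fA, fB = ishigami(A), ishigami(B)
      var = np.var(np.r_[fA, fB])

      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                       # "pick and freeze" column i
          fABi = ishigami(ABi)
          S_first = np.mean(fB * (fABi - fA)) / var         # Saltelli (2010) estimator
          S_total = 0.5 * np.mean((fA - fABi) ** 2) / var   # Jansen estimator
          print(f"x{i + 1}: S1 ~ {S_first:.2f}, ST ~ {S_total:.2f}")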

  16. Sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.

    2014-01-01

    Life cycle assessments require many input parameters and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case studies. Two

  17. Sensitivity analysis for matched pair analysis of binary data: From worst case to average case analysis.

    Science.gov (United States)

    Hasegawa, Raiden; Small, Dylan

    2017-12-01

    In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.

  18. Sensitivity Analysis of Heavy Fuel Oil Spray and Combustion under Low-Speed Marine Engine-Like Conditions

    Directory of Open Access Journals (Sweden)

    Lei Zhou

    2017-08-01

    Full Text Available On account of their high power, thermal efficiency, good reliability, safety, and durability, low-speed two-stroke marine diesel engines are used as the main drive devices for large fuel and cargo ships. Most marine engines use heavy fuel oil (HFO as the primary fuel, however, the physical and chemical characteristics of HFO are not clear because of its complex thermophysical properties. The present study was conducted to investigate the effects of fuel properties on the spray and combustion characteristics under two-stroke marine engine-like conditions via a sensitivity analysis. The sensitivity analysis of fuel properties for non-reacting and reacting simulations are conducted by comparing two fuels having different physical properties, such as fuel density, dynamic viscosity, critical temperature, and surface tension. The performances of the fuels are comprehensively studied under different ambient pressures, ambient temperatures, fuel temperatures, and swirl flow conditions. From the results of non-reacting simulations of HFO and diesel fuel properties in a constant volume combustion chamber, it can be found that the increase of the ambient pressure promotes fuel evaporation, resulting in a reduction in the steady liquid penetration of both diesel and HFO; however, the difference in the vapor penetrations of HFO and diesel reduces. Increasing the swirl flow significantly influences the atomization of both HFO and diesel, especially the liquid distribution of diesel. It is also found that the ambient temperature and fuel temperature have the negative effects on Sauter mean diameter (SMD distribution. For low-speed marine engines, the combustion performance of HFO is not sensitive to activation energy in a certain range of activation energy. At higher engine speed, the difference in the effects of different activation energies on the in-cylinder pressure increases. The swirl flow in the cylinder can significantly promote fuel evaporation and

  19. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high-order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in the irradiated fuel due to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for the sensitivity analysis. The results of the calculation show that, in the case of EK-10 fuel (low burn-up), the first-order sensitivity was found to be enough to achieve an accuracy of 1%, while in the case of MTR-20 (high burn-up) the fifth order was found to provide 3% accuracy. A computer code, SENS, was developed to provide the required calculations

  20. A flood-based information flow analysis and network minimization method for gene regulatory networks.

    Science.gov (United States)

    Pavlogiannis, Andreas; Mozhayskiy, Vadim; Tagkopoulos, Ilias

    2013-04-24

    Biological networks tend to have high interconnectivity, complex topologies and multiple types of interactions. This renders difficult the identification of sub-networks that are involved in condition-specific responses. In addition, we generally lack scalable methods that can reveal the information flow in gene regulatory and biochemical pathways. Doing so will help us to identify key participants and paths under specific environmental and cellular context. This paper introduces the theory of network flooding, which aims to address the problem of network minimization and regulatory information flow in gene regulatory networks. Given a regulatory biological network, a set of source (input) nodes and optionally a set of sink (output) nodes, our task is to find (a) the minimal sub-network that encodes the regulatory program involving all input and output nodes and (b) the information flow from the source to the sink nodes of the network. Here, we describe a novel, scalable, network traversal algorithm and we assess its potential to achieve significant network size reduction in both synthetic and E. coli networks. Scalability and sensitivity analysis show that the proposed method scales well with the size of the network, and is robust to noise and missing data. The method of network flooding proves to be a useful, practical approach towards information flow analysis in gene regulatory networks. Further extension of the proposed theory has the potential to lead in a unifying framework for the simultaneous network minimization and information flow analysis across various "omics" levels.
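
    The underlying idea of keeping only the part of the network that carries regulatory signal from inputs to outputs can be sketched, in a much-simplified form, as the intersection of forward reachability from the sources with backward reachability from the sinks; this is not the authors' flooding algorithm, and the toy network is invented.

      from collections import deque

      def reachable(graph, starts):
          """Breadth-first set of nodes reachable from the given start nodes."""
          seen, queue = set(starts), deque(starts)
          while queue:
              node = queue.popleft()
              for nxt in graph.get(node, ()):
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(nxt)
          return seen

      def minimal_subnetwork(edges, sources, sinks):
          fwd, rev = {}, {}
          for u, v in edges:
              fwd.setdefault(u, set()).add(v)
              rev.setdefault(v, set()).add(u)
          keep = reachable(fwd, sources) & reachable(rev, sinks)   # nodes on a source-to-sink path
          return [(u, v) for u, v in edges if u in keep and v in keep]

      # Toy regulatory network: edges are "regulator -> target" interactions.
      edges = [("sensor", "tf1"), ("tf1", "geneA"), ("tf1", "tf2"), ("tf2", "geneB"),
               ("tf3", "geneB"), ("geneC", "tf3")]
      print(minimal_subnetwork(edges, sources={"sensor"}, sinks={"geneB"}))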

  1. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies

  2. Rapid and sensitive lateral flow immunoassay method for determining alpha fetoprotein in serum using europium (III) chelate microparticles-based lateral flow test strips

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Rong-Liang; Xu, Xu-Ping; Liu, Tian-Cai; Zhou, Jian-Wei; Wang, Xian-Guo; Ren, Zhi-Qi [Institute of Antibody Engineering, School of Biotechnology, Southern Medical University, Guangzhou 510515, Guangdong (China); Hao, Fen [DaAn Gene Co. Ltd. of Sun Yat-sen University, 19 Xiangshan Road, Guangzhou 510515 (China); Wu, Ying-Song, E-mail: wg@smu.edu.cn [Institute of Antibody Engineering, School of Biotechnology, Southern Medical University, Guangzhou 510515, Guangdong (China)

    2015-09-03

    Alpha-fetoprotein (AFP), a primary marker for many diseases including various cancers, is important in clinical tumor diagnosis and antenatal screening. Most immunoassays provide high sensitivity and accuracy for determining AFP, but they are expensive and often involve complex, time-consuming procedures. A simple and rapid point-of-care system that integrates Eu (III) chelate microparticles with lateral flow immunoassay (LFIA) has been developed to determine AFP in serum with an assay time of 15 min. The approach is based on a sandwich immunoassay performed on lateral flow test strips. A fluorescence strip reader was used to measure the fluorescence peak heights of the test line (H{sub T}) and the control line (H{sub C}); the H{sub T}/H{sub C} ratio was used for quantitation. The Eu (III) chelate microparticles-based LFIA assay exhibited a wide linear range (1.0–1000 IU mL{sup −1}) for AFP with a low limit of detection (0.1 IU mL{sup −1}) based on 5 μL of serum. Satisfactory specificity and accuracy were demonstrated and the intra- and inter-assay coefficients of variation (CV) for AFP were both <10%. Furthermore, in the analysis of human serum samples, excellent correlation (n = 284, r = 0.9860, p < 0.0001) was obtained between the proposed method and a commercially available CLIA kit. Results indicated that the Eu (III) chelate microparticles-based LFIA system provided a rapid, sensitive and reliable method for determining AFP in serum, indicating that it would be suitable for development in point-of-care testing. - Highlights: • Europium (III) chelate microparticles were used as a label for LFIA. • Quantitative detection using the H{sub T}/H{sub C} ratio was achieved. • LFIA for simple and rapid AFP detection in human serum. • The sensitivity and linearity were superior to those of QD-based ICTS. • This method could be developed for rapid point-of-care screening.

  3. Flow induced dispersion analysis rapidly quantifies proteins in human plasma samples

    DEFF Research Database (Denmark)

    Poulsen, Nicklas N; Andersen, Nina Z; Østergaard, Jesper

    2015-01-01

    Rapid and sensitive quantification of protein-based biomarkers and drugs is a substantial challenge in diagnostics and biopharmaceutical drug development. Current technologies, such as ELISA, are characterized by being slow (hours), requiring relatively large amounts of sample and being subject … to cumbersome and expensive assay development. In this work a new approach for quantification based on changes in diffusivity is presented. The apparent diffusivity of an indicator molecule interacting with the protein of interest is determined by Taylor Dispersion Analysis (TDA) in a hydrodynamic flow system … in a blood plasma matrix), fully automated, and being subject to a simple assay development. FIDA is demonstrated for quantification of the protein Human Serum Albumin (HSA) in human plasma as well as for quantification of an antibody against HSA. The sensitivity of the FIDA assay depends on the indicator …

  4. Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis

    Science.gov (United States)

    Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi

    To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to decide the frontier intentionally, while the traditional DEA and Super-DEA decide the frontier computationally. The features of FA-DEA are as follows: (1) it provides chances to exclude extra-influential DMUs (Decision Making Units) and to find extra-ordinal DMUs, and (2) it includes the functions of the traditional DEA and Super-DEA, so that it can deal with sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and its difference from the traditional DEA.
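
    The record does not give the FA-DEA formulation. As background, the sketch below solves a standard input-oriented CCR envelopment linear program with scipy, the kind of model on which frontier-assignment and sensitivity variants are built; the data are invented.

```python
# Sketch of a standard input-oriented CCR (envelopment) DEA model solved as a
# linear program; FA-DEA builds on this kind of formulation, but the frontier
# assignment itself is not reproduced here. Data are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 6.0, 4.0],    # inputs  (rows: inputs, cols: DMUs)
              [5.0, 4.0, 8.0, 3.0]])
Y = np.array([[1.0, 2.0, 3.0, 1.0]])   # outputs (rows: outputs, cols: DMUs)
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(o):
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0]  = -X[:, o]                     # sum_j lam_j x_ij <= theta x_io
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                           # sum_j lam_j y_rj >= y_ro
    b_ub[m:]     = -Y[:, o]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

print([round(ccr_efficiency(o), 3) for o in range(n)])
```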

  5. Groundwater pathway sensitivity analysis and hydrogeologic parameters identification for waste disposal in porous media

    International Nuclear Information System (INIS)

    Yu, C.

    1986-01-01

    The migration of radionuclides in a geologic medium is controlled by the hydrogeologic parameters of the medium such as dispersion coefficient, pore water velocity, retardation factor, degradation rate, mass transfer coefficient, water content, and fraction of dead-end pores. These hydrogeologic parameters are often used to predict the migration of buried wastes in nuclide transport models such as the conventional advection-dispersion model, the mobile-immobile pores model, the nonequilibrium adsorption-desorption model, and the general group transfer concentration model. One of the most important factors determining the accuracy of predicting waste migration is the accuracy of the parameter values used in the model. More sensitive parameters have a greater influence on the results and hence should be determined (measured or estimated) more accurately than less sensitive parameters. A formal parameter sensitivity analysis is carried out in this paper. Parameter identification techniques to determine the hydrogeologic parameters of the flow system are discussed. The dependence of the accuracy of the estimated parameters upon the parameter sensitivity is also discussed.
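
    A minimal sketch of parameter-sensitivity screening for this kind of transport model: a textbook one-dimensional advection-dispersion solution with retardation and first-order decay is differentiated by central differences to rank normalized sensitivities. The solution form and parameter values are illustrative assumptions, not the paper's formal analysis.

```python
# Sketch: central-difference screening of normalized parameter sensitivities
# for a textbook 1-D advection-dispersion solution with retardation (R) and
# first-order decay (lam). Parameter values are illustrative only.
import numpy as np

def concentration(x, t, D, v, R, lam, c0=1.0):
    """Instantaneous plane-source solution with retardation and decay."""
    Dr, vr = D / R, v / R
    return (c0 / np.sqrt(4.0 * np.pi * Dr * t)
            * np.exp(-(x - vr * t) ** 2 / (4.0 * Dr * t))
            * np.exp(-lam * t))

params = {"D": 0.5, "v": 0.1, "R": 2.0, "lam": 1e-3}   # m2/d, m/d, -, 1/d (assumed)
x, t = 10.0, 200.0

base = concentration(x, t, **params)
for name in params:
    h = 1e-3 * params[name]
    up, dn = dict(params), dict(params)
    up[name] += h
    dn[name] -= h
    dcdp = (concentration(x, t, **up) - concentration(x, t, **dn)) / (2.0 * h)
    # normalized (logarithmic) sensitivity: relative change in C per relative change in p
    print(name, round(params[name] * dcdp / base, 3))
```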

  6. Sensitivity analysis in optimization and reliability problems

    International Nuclear Information System (INIS)

    Castillo, Enrique; Minguez, Roberto; Castillo, Carmen

    2008-01-01

    The paper starts by giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.
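
    A toy numerical illustration of the closed-form linear programming sensitivities mentioned above: the change of the optimal objective per unit change of a right-hand-side term equals that constraint's dual (shadow) price while the optimal basis is unchanged, which can be checked by re-solving with a perturbed RHS. The LP data are invented.

```python
# Toy check of LP right-hand-side sensitivity: the change in the optimal
# objective per unit change of a constraint's RHS equals that constraint's
# dual (shadow) price while the optimal basis is unchanged. Data are invented.
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -5.0])              # maximise 3x + 5y -> minimise -(3x + 5y)
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

base = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

eps = 1e-4
for i in range(len(b)):
    b_pert = b.copy()
    b_pert[i] += eps
    pert = linprog(c, A_ub=A, b_ub=b_pert, bounds=[(0, None)] * 2, method="highs")
    shadow = (pert.fun - base.fun) / eps   # d(objective)/d(b_i), minimisation sign
    print(f"constraint {i}: numerical dual ≈ {shadow:.3f}")
```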

  7. Sensitivity analysis in optimization and reliability problems

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es

    2008-12-15

    The paper starts by giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.

  8. Multifractal Analysis for the Teichmueller Flow

    Energy Technology Data Exchange (ETDEWEB)

    Meson, Alejandro M., E-mail: meson@iflysib.unlp.edu.ar; Vericat, Fernando, E-mail: vericat@iflysib.unlp.edu.ar [Instituto de Fisica de Liquidos y Sistemas Biologicos (IFLYSIB) CCT-CONICET, La Plata-UNLP and Grupo de Aplicaciones Matematicas y Estadisticas de la Facultad de Ingenieria (GAMEFI) UNLP (Argentina)

    2012-03-15

    We present a multifractal description for Teichmueller flows. A key ingredient is the Rauzy-Veech-Zorich reduction theory, which allows the problem to be treated in the setting of suspension flows over subshifts. To perform the multifractal analysis we implement a thermodynamic formalism for suspension flows over countable alphabet subshifts that differs slightly from that developed by Barreira and Iommi.

  9. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  10. Systematic Sensitivity Analysis of Metabolic Controllers During Reductions in Skeletal Muscle Blood Flow

    Science.gov (United States)

    Radhakrishnan, Krishnan; Cabrera, Marco

    2000-01-01

    An acute reduction in oxygen delivery to skeletal muscle is generally associated with profound derangements in substrate metabolism. Given the complexity of the human bioenergetic system and its components, it is difficult to quantify the interaction of cellular metabolic processes to maintain ATP homeostasis during stress (e.g., hypoxia, ischemia, and exercise). Of special interest is the determination of mechanisms relating tissue oxygenation to observed metabolic responses at the tissue, organ, and whole body levels and the quantification of how changes in oxygen availability affect the pathways of ATP synthesis and their regulation. In this study, we apply a previously developed mathematical model of human bioenergetics to study effects of ischemia during periods of increased ATP turnover (e.g., exercise). By using systematic sensitivity analysis the oxidative phosphorylation rate was found to be the most important rate parameter affecting lactate production during ischemia under resting conditions. Here we examine whether mild exercise under ischemic conditions alters the relative importance of pathways and parameters previously obtained.

  11. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    Science.gov (United States)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

    The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). Inputs of ANFIS are Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, general bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Then each is used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model, with four triangular MF, is performed on the interpolation grid; it shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.

  12. Unified Hall-Petch description of nano-grain nickel hardness, flow stress and strain rate sensitivity measurements

    Science.gov (United States)

    Armstrong, R. W.; Balasubramanian, N.

    2017-08-01

    It is shown that: (i) nano-grain nickel flow stress and hardness data at ambient temperature follow a Hall-Petch (H-P) relation over a wide range of grain size; and (ii) accompanying flow stress and strain rate sensitivity measurements follow an analogous H-P relationship for the reciprocal "activation volume", (1/v*) = (1/A*b), where A* is the activation area. Higher temperature flow stress measurements show a greater than expected reduction both in the H-P kɛ and in v*. The results are connected with the smaller nano-grain sizes tested at very low imposed strain rates.
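
    A schematic LaTeX rendering of the two relations described in the abstract; the grain-size dependence of the reciprocal activation volume is written here by analogy with the flow-stress Hall-Petch form, and the constants are not taken from the record.

```latex
% Schematic forms only; symbols follow the abstract, constants are illustrative.
\begin{align}
  \sigma(\ell) &= \sigma_0 + k_\varepsilon\,\ell^{-1/2}, \\
  \frac{1}{v^{*}} &= \frac{1}{A^{*}b}\,, \qquad
  \frac{1}{v^{*}}(\ell) \;\approx\; \frac{1}{v^{*}_{0}} + k_{v^{*}}\,\ell^{-1/2}.
\end{align}
```

    Here σ is the flow stress, ℓ the grain size, v* the activation volume, A* the activation area and b the Burgers vector.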

  13. Flows method in global analysis

    International Nuclear Information System (INIS)

    Duong Minh Duc.

    1994-12-01

    We study the gradient flows method for W^{r,p}(M,N), where M and N are Riemannian manifolds and r may be less than m/p. We localize some global analysis problems by constructing gradient flows which only change the value of any u in W^{r,p}(M,N) in a local chart of M. (author). 24 refs

  14. SENSITIVITY ANALYSIS BY ARTIFICIAL NEURAL NETWORK (ANN) OF VARIABLES THAT INFLUENCE THE DIAGONAL TWIST IN A PAPERBOARD INDUSTRIAL MACHINE

    Directory of Open Access Journals (Sweden)

    Guinter Neutzling Schneid

    2016-01-01

    Full Text Available The dimensional stability of paper may change due to moisture exchange with the surrounding environment, releasing the latent stresses acquired during the manufacturing process. One result of this stress release is the diagonal curl. This study conducts a sensitivity analysis of the different input variables of an industrial paper machine, together with some laboratory measurements, in order to identify their importance for paperboard quality control in production and to relate them to the paper property called twist. A survey of the 2012 production history was made to identify the products with the highest quality losses. These were then correlated with the critical points of the measurement profile in the machine cross direction and, consequently, with the paper. Some variables were found to correlate with twist in the three profile analyses (tender side, middle and drive side). The sensitivity analysis revealed that the most important and sensitive variables, respectively for the tender side, middle and drive side, were the total flow of the top layer, the vapor pressure in the 6th group of drying cylinders, and the mass flow of the bottom layer in the paperboard formation.

  15. The role of sensitivity analysis in assessing uncertainty

    International Nuclear Information System (INIS)

    Crick, M.J.; Hill, M.D.

    1987-01-01

    Outside the specialist world of those carrying out performance assessments, considerable confusion has arisen about the meanings of sensitivity analysis and uncertainty analysis. In this paper we attempt to reduce this confusion. We then go on to review approaches to sensitivity analysis within the context of assessing uncertainty, and to outline the types of test available to identify sensitive parameters, together with their advantages and disadvantages. The views expressed in this paper are those of the authors; they have not been formally endorsed by the National Radiological Protection Board and should not be interpreted as Board advice.

  16. Sensitivity analysis and uncertainties simulation of the migration of radionuclide in the system of geological disposal-CRP-GEORC model

    International Nuclear Information System (INIS)

    Su Rui; Wang Ju; Chen Weiming; Zong Zihua; Zhao Honggang

    2008-01-01

    The CRP-GEORC conceptual model is an artificial geological disposal system for high-level radioactive waste. Sensitivity analysis and uncertainty simulation of the migration of the radionuclides Se-79 and I-129 in the far field of this system have been conducted using the GoldSim code. The simulation results show that the variables used to describe the geological features and the characterization of groundwater flow are the sensitive variables of the whole geological disposal system. The uncertainties of the parameters have a remarkable influence on the simulation results. (authors)

  17. 2D Numerical Simulation and Sensitive Analysis of H-Darrieus Wind Turbine

    Directory of Open Access Journals (Sweden)

    Seyed Mohammad E. Saryazdi

    2018-02-01

    Full Text Available Recently, a lot of attention has been devoted to the use of Darrieus wind turbines in urban areas. The aerodynamics of a Darrieus turbine are very complex due to dynamic stall and changing forces on the turbine triggered by changing horizontal angles. In this study, the aerodynamics of an H-rotor vertical axis wind turbine (VAWT) have been studied using computational fluid dynamics via two different turbulence models. The shear stress transport (SST) k-ω turbulence model was used to simulate a 2D unsteady model of the H-Darrieus turbine. To complete this simulation, a sensitivity analysis of the effective turbine parameters, such as solidity factor, airfoil shape, wind velocity and shaft diameter, was performed. To simulate the flow through the turbine, a simplified 2D computational domain was generated, and a fine mesh was generated for each case, consisting of different turbulence models and dimensions. Each mesh in this simulation depended on effective parameters consisting of domain size, mesh quality, time step and total revolutions. The sliding mesh method was applied to evaluate the unsteady interaction between the stationary and rotating components. Previous works simulated only the turbine, whereas in this study a sensitivity analysis of the effective parameters was performed. The simulation results closely match the experimental data, providing an efficient and reliable foundation for studying wind turbine aerodynamics, and the sensitivity analysis revealed the best values of the effective parameters, which can be used in the turbine design process. This work provides the first step in developing an accurate 3D aerodynamic model of Darrieus wind turbines.

  18. Sensitivity studies on the multi-sensor conductivity probe measurement technique for two-phase flows

    Energy Technology Data Exchange (ETDEWEB)

    Worosz, Ted [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, 230 Reber Building, University Park, PA 16802 (United States); Bernard, Matt [The United States Nuclear Regulatory Commission, 11545 Rockville Pike, Rockville, MD 20852 (United States); Kong, Ran; Toptan, Aysenur [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, 230 Reber Building, University Park, PA 16802 (United States); Kim, Seungjin, E-mail: skim@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, 230 Reber Building, University Park, PA 16802 (United States); Hoxie, Chris [The United States Nuclear Regulatory Commission, 11545 Rockville Pike, Rockville, MD 20852 (United States)

    2016-12-15

    Highlights: • Revised conductivity probe circuit to eliminate signal “ghosting” among sensors. • Higher sampling frequencies suggested for bubble number frequency and a{sub i} measurements. • Two-phase parameter sensitivity to measurement duration and bubble number investigated. • Sensors parallel to pipe wall recommended for symmetric bubble velocity measurements. • Sensor separation distance ratio (s/d) greater than four minimizes bubble velocity error. - Abstract: The objective of this study is to advance the local multi-sensor conductivity probe measurement technique through systematic investigation into several practical aspects of a conductivity probe measurement system. Firstly, signal “ghosting” among probe sensors is found to cause artificially high bubble velocity measurements and low interfacial area concentration (a{sub i}) measurements that depend on sampling frequency and sensor impedance. A revised electrical circuit is suggested to eliminate this artificial variability. Secondly, the sensitivity of the probe measurements to sampling frequency is investigated in 13 two-phase flow conditions with superficial liquid and gas velocities ranging from 1.00–5.00 m/s and 0.17–2.0 m/s, respectively. With increasing gas flow rate, higher sampling frequencies, greater than 100 kHz in some cases, are required to adequately capture the bubble number frequency and a{sub i} measurements. This trend is due to the increase in gas velocity and the transition to the slug flow regime. Thirdly, the sensitivity of the probe measurements to the measurement duration as well as the sample number is investigated for the same flow conditions. Measurements of both group-I (spherical/distorted) and group-II (cap/slug/churn-turbulent) bubbles are found to be relatively insensitive to both the measurement duration and the number of bubbles, as long as the measurements are made for a duration long enough to capture a collection of samples characteristic to a

  19. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  20. Probabilistic sensitivity analysis in health economics.

    Science.gov (United States)

    Baio, Gianluca; Dawid, A Philip

    2015-12-01

    Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.

  1. TOLERANCE SENSITIVITY ANALYSIS: THIRTY YEARS LATER

    Directory of Open Access Journals (Sweden)

    Richard E. Wendell

    2010-12-01

    Full Text Available Tolerance sensitivity analysis was conceived in 1980 as a pragmatic approach to effectively characterize a parametric region over which objective function coefficients and right-hand-side terms in linear programming could vary simultaneously and independently while maintaining the same optimal basis. As originally proposed, the tolerance region corresponds to the maximum percentage by which coefficients or terms could vary from their estimated values. Over the last thirty years the original results have been extended in a number of ways and applied in a variety of applications. This paper is a critical review of tolerance sensitivity analysis, including extensions and applications.

  2. Sensitivity analysis for missing data in regulatory submissions.

    Science.gov (United States)

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses have to be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  3. Controls on inorganic nitrogen leaching from Finnish catchments assessed using a sensitivity and uncertainty analysis of the INCA-N model

    Energy Technology Data Exchange (ETDEWEB)

    Rankinen, K.; Granlund, K. [Finnish Environmental Inst., Helsinki (Finland); Futter, M. N. [Swedish Univ. of Agricultural Sciences, Uppsala (Sweden)

    2013-11-01

    The semi-distributed, dynamic INCA-N model was used to simulate the behaviour of dissolved inorganic nitrogen (DIN) in two Finnish research catchments. Parameter sensitivity and model structural uncertainty were analysed using generalized sensitivity analysis. The Mustajoki catchment is a forested upstream catchment, while the Savijoki catchment represents intensively cultivated lowlands. In general, there were more influential parameters in Savijoki than Mustajoki. Model results were sensitive to N-transformation rates, vegetation dynamics, and soil and river hydrology. Values of the sensitive parameters were based on long-term measurements covering both warm and cold years. The highest measured DIN concentrations fell between minimum and maximum values estimated during the uncertainty analysis. The lowest measured concentrations fell outside these bounds, suggesting that some retention processes may be missing from the current model structure. The lowest concentrations occurred mainly during low flow periods; so effects on total loads were small. (orig.)

  4. The diagnostic performance of CT-derived fractional flow reserve for evaluation of myocardial ischaemia confirmed by invasive fractional flow reserve: a meta-analysis

    International Nuclear Information System (INIS)

    Li, S.; Tang, X.; Peng, L.; Luo, Y.; Dong, R.; Liu, J.

    2015-01-01

    Aim: To review the literature on the diagnostic accuracy of CT-derived fractional flow reserve (FFR-CT) for the evaluation of myocardial ischaemia in patients with suspected or known coronary artery disease, with invasive fractional flow reserve (FFR) as the reference standard. Materials and methods: A PubMed, EMBASE, and Cochrane cross-search was performed. The pooled diagnostic accuracy of FFR-CT, with FFR as the reference standard, was primarily analysed, and then compared with that of CT angiography (CTA). The thresholds to diagnose ischaemia were FFR ≤0.80 or CTA ≥50% stenosis. Data extraction, synthesis, and statistical analysis were performed by standard meta-analysis methods. Results: Three multicentre studies (NXT Trial, DISCOVER-FLOW study and DeFACTO study) were included, examining 609 patients and 1050 vessels. The pooled sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (LR+), negative likelihood ratio (LR−), and diagnostic odds ratio (DOR) for FFR-CT were 89% (85–93%), 71% (65–75%), 70% (65–75%), 90% (85–93%), 3.31 (1.79–6.14), 0.16 (0.11–0.23), and 21.21 (9.15–49.15) at the patient-level, and 83% (78–63%), 78% (75–81%), 61% (56–65%), 92% (89–90%), 4.02 (1.84–8.80), 0.22 (0.13–0.35), and 19.15 (5.73–63.93) at the vessel-level. At per-patient analysis, FFR-CT has similar sensitivity but improved specificity, PPV, NPV, LR+, LR−, and DOR versus those of CTA. At per-vessel analysis, FFR-CT had a slightly lower sensitivity, similar NPV, but improved specificity, PPV, LR+, LR−, and DOR compared with those of CTA. The area under the summary receiver operating characteristic curves for FFR-CT was 0.8909 at patient-level and 0.8865 at vessel-level, versus 0.7402 for CTA at patient-level. Conclusions: FFR-CT, which was associated with improved diagnostic accuracy versus CTA, is a viable alternative to FFR for detecting coronary ischaemic lesions.
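
    For readers unfamiliar with the pooled statistics quoted above, the sketch below shows how sensitivity, specificity, predictive values, likelihood ratios and the diagnostic odds ratio follow from a 2×2 table; the counts are invented, not the meta-analysis data.

```python
# Illustrative 2x2 diagnostic-accuracy calculation (counts are invented, not
# the pooled meta-analysis data). Invasive FFR <= 0.80 is the reference.
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv  = tp / (tp + fp)
    npv  = tn / (tn + fn)
    lr_pos = sens / (1.0 - spec)
    lr_neg = (1.0 - sens) / spec
    dor    = lr_pos / lr_neg
    return dict(sensitivity=sens, specificity=spec, PPV=ppv, NPV=npv,
                LR_pos=lr_pos, LR_neg=lr_neg, DOR=dor)

print(diagnostic_metrics(tp=180, fp=85, fn=22, tn=210))
```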

  5. Risk and sensitivity analysis in relation to external events

    International Nuclear Information System (INIS)

    Alzbutas, R.; Urbonas, R.; Augutis, J.

    2001-01-01

    This paper presents a risk and sensitivity analysis of the impacts of external events on safe operation in general and, in particular, on the Ignalina Nuclear Power Plant safety systems. The analysis is based on deterministic and probabilistic assumptions and an assessment of the external hazards. Real statistical data are used as well as initial external event simulation. Preliminary screening criteria are applied. The analysis of external event impacts on safe NPP operation, assessment of event occurrence, sensitivity analysis, and recommendations for safety improvements are performed for the investigated external hazards. Events such as aircraft crash, extreme rains and winds, forest fire and flying parts of the turbine are analysed. The models are developed and probabilities are calculated. As an example of sensitivity analysis, the model of aircraft impact is presented. The sensitivity analysis takes into account the uncertainty features raised by an external event and its model. Even when the external events analysis shows rather limited danger, the sensitivity analysis can determine the causes with the highest influence. These possible future variations can be significant for safety levels and risk-based decisions. Calculations show that external events cannot significantly influence the safety level of Ignalina NPP operation; however, the event occurrence and propagation can be sufficiently uncertain. (author)

  6. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs

  7. Flow Injection Analysis in Industrial Biotechnology

    DEFF Research Database (Denmark)

    Hansen, Elo Harald; Miró, Manuel

    2009-01-01

    Flow injection analysis (FIA) is an analytical chemical continuous-flow (CF) method which in contrast to traditional CF-procedures does not rely on complete physical mixing (homogenisation) of the sample and the reagent(s) or on attaining chemical equilibria of the chemical reactions involved. Ex...

  8. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  9. Computational Analysis of Human Blood Flow

    Science.gov (United States)

    Panta, Yogendra; Marie, Hazel; Harvey, Mark

    2009-11-01

    Fluid flow modeling with commercially available computational fluid dynamics (CFD) software is widely used to visualize and predict physical phenomena related to various biological systems. In this presentation, a typical human aorta model was analyzed assuming the blood flow to be laminar with compliant cardiac muscle wall boundaries. FLUENT, a commercially available finite volume software package, coupled with Solidworks, a modeling software package, was employed for the preprocessing, simulation and postprocessing of all the models. The analysis mainly consists of a fluid-dynamics analysis, including a calculation of the velocity field and pressure distribution in the blood, and a mechanical analysis of the deformation of the tissue and artery in terms of wall shear stress. A number of other models, e.g. T-branches and angle-shaped geometries, were previously analyzed and their results compared for consistency under similar boundary conditions. The velocities, pressures and wall shear stress distributions achieved in all models were as expected given the similar boundary conditions. A three-dimensional, time-dependent analysis of blood flow accounting for the effect of body forces with a compliant boundary was also performed.

  10. Silver Nanoparticle-Based Fluorescence-Quenching Lateral Flow Immunoassay for Sensitive Detection of Ochratoxin A in Grape Juice and Wine

    Science.gov (United States)

    Jiang, Hu; Li, Xiangmin; Xiong, Ying; Pei, Ke; Nie, Lijuan; Xiong, Yonghua

    2017-01-01

    A silver nanoparticle (AgNP)-based fluorescence-quenching lateral flow immunoassay with competitive format (cLFIA) was developed for sensitive detection of ochratoxin A (OTA) in grape juice and wine samples in the present study. The Ru(phen)₃²⁺-doped silica nanoparticles (RuNPs) were sprayed on the test and control line zones as background fluorescence signals. The AgNPs were designed as the fluorescence quenchers of the RuNPs because they can block the exciting light transferring to the RuNP molecules. The proposed method exhibited high sensitivity for OTA detection, with a detection limit of 0.06 µg/L under optimized conditions. The method also exhibited a good linear range for OTA quantitative analysis, from 0.08 µg/L to 5.0 µg/L. The reliability of the fluorescence-quenching cLFIA method was evaluated through analysis of the OTA-spiked red grape wine and juice samples. The average recoveries ranged from 88.0% to 110.0% in red grape wine and from 92.0% to 110.0% in grape juice. Meanwhile, a coefficient of variation of less than 10% indicated an acceptable precision of the cLFIA method. In summary, the new AgNP-based fluorescence-quenching cLFIA is a simple, rapid, sensitive, and accurate method for quantitative detection of OTA in grape juice and wine or other foodstuffs. PMID:28264472

  11. Silver Nanoparticle-Based Fluorescence-Quenching Lateral Flow Immunoassay for Sensitive Detection of Ochratoxin A in Grape Juice and Wine

    Directory of Open Access Journals (Sweden)

    Hu Jiang

    2017-02-01

    Full Text Available A silver nanoparticle (AgNP)-based fluorescence-quenching lateral flow immunoassay with competitive format (cLFIA) was developed for sensitive detection of ochratoxin A (OTA) in grape juice and wine samples in the present study. The Ru(phen)₃²⁺-doped silica nanoparticles (RuNPs) were sprayed on the test and control line zones as background fluorescence signals. The AgNPs were designed as the fluorescence quenchers of the RuNPs because they can block the exciting light transferring to the RuNP molecules. The proposed method exhibited high sensitivity for OTA detection, with a detection limit of 0.06 µg/L under optimized conditions. The method also exhibited a good linear range for OTA quantitative analysis, from 0.08 µg/L to 5.0 µg/L. The reliability of the fluorescence-quenching cLFIA method was evaluated through analysis of the OTA-spiked red grape wine and juice samples. The average recoveries ranged from 88.0% to 110.0% in red grape wine and from 92.0% to 110.0% in grape juice. Meanwhile, a coefficient of variation of less than 10% indicated an acceptable precision of the cLFIA method. In summary, the new AgNP-based fluorescence-quenching cLFIA is a simple, rapid, sensitive, and accurate method for quantitative detection of OTA in grape juice and wine or other foodstuffs.

  12. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the

  13. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Directory of Open Access Journals (Sweden)

    Georgios Arampatzis

    Full Text Available Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of

  14. MODELING AND ANALYSIS OF UNSTEADY FLOW BEHAVIOR IN DEEPWATER CONTROLLED MUD-CAP DRILLING

    Directory of Open Access Journals (Sweden)

    Jiwei Li

    Full Text Available A new mathematical model was developed in this study to simulate the unsteady flow in controlled mud-cap drilling systems. The model can predict the time-dependent flow inside the drill string and annulus after a circulation break. The model consists of the continuity and momentum equations solved using the explicit Euler method, and it considers both Newtonian and non-Newtonian fluids flowing inside the drill string and annular space. The model predicts the transient flow velocity of the mud, the equilibrium time, and the change in the bottom hole pressure (BHP) during the unsteady flow. The model was verified using data from U-tube flow experiments reported in the literature. The results show that the model is accurate, with a maximum average error of 3.56% for the velocity prediction. Together with measured data, the computed transient flow behavior can be used to better detect well kick and loss of circulation after the mud pump is shut down. The model sensitivity analysis shows that the water depth, mud density and drill string size are the three major factors affecting the fluctuation of the BHP after a circulation break. These factors should be carefully examined in well design and drilling operations to minimize BHP fluctuation and well kick. This study provides the fundamentals for designing a safe system in controlled mud-cap drilling operations.
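
    The record does not reproduce the governing equations; as a rough illustration of the explicit Euler solution strategy, the sketch below integrates a single-degree-of-freedom U-tube analogue (level imbalance z, mean velocity u) after a circulation break. The geometry, friction factor and initial imbalance are assumptions, and this is not the authors' full continuity/momentum model.

```python
# Single-degree-of-freedom U-tube analogue integrated with the explicit Euler
# method: z is the fluid-level imbalance between annulus and drill string and
# u the mean flow velocity after the pumps are shut down. Geometry, friction
# factor and initial imbalance are assumed for illustration only.
import numpy as np

g     = 9.81      # m/s^2
L_eff = 2000.0    # effective fluid-column length, m (assumed)
d_hyd = 0.1       # hydraulic diameter, m (assumed)
f     = 0.03      # Darcy friction factor (assumed constant)

def simulate(z0=50.0, dt=0.05, t_end=600.0):
    n = int(t_end / dt)
    z, u = z0, 0.0
    history = np.zeros((n, 3))
    for k in range(n):
        dudt = (g * z - f * (L_eff / d_hyd) * 0.5 * u * abs(u)) / L_eff
        dzdt = -2.0 * u                 # equal-area continuity for the imbalance
        u += dt * dudt                  # explicit Euler updates
        z += dt * dzdt
        history[k] = (k * dt, u, z)
    return history

hist = simulate()
print(hist[-1])  # u and z have essentially decayed to zero at equilibrium
```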

  15. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae [NESS, Daejeon (Korea, Republic of)

    2016-10-15

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different nuclear fuel shapes. The criticality analysis was performed using MCNP5, a well-known, general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for the interval between plates, the criticality increases with the interval, but if the interval is greater than 8 mm the trend reverses and the criticality decreases with larger intervals. As a result, no common conclusion could be drawn for all cases, and a sensitivity analysis of criticality is therefore required whenever the subject to be analyzed changes.

  16. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    International Nuclear Information System (INIS)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae

    2016-01-01

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different nuclear fuel shapes. The criticality analysis was performed using MCNP5, a well-known, general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for the interval between plates, the criticality increases with the interval, but if the interval is greater than 8 mm the trend reverses and the criticality decreases with larger intervals. As a result, no common conclusion could be drawn for all cases, and a sensitivity analysis of criticality is therefore required whenever the subject to be analyzed changes.

  17. The role of sensitivity analysis in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Hirschberg, S.; Knochenhauer, M.

    1987-01-01

    The paper describes several items suitable for close examination by means of application of sensitivity analysis, when performing a level 1 PSA. Sensitivity analyses are performed with respect to; (1) boundary conditions, (2) operator actions, and (3) treatment of common cause failures (CCFs). The items of main interest are identified continuously in the course of performing a PSA, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PSA study (ASEA-ATOM BWR 75). It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (orig./HP)

  18. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
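
    A compact sketch of the bootstrap-style convergence check described above, applied to a cheap correlation-based sensitivity measure on a toy model: confidence-interval widths indicate convergence of the index values, and the fraction of resamples reproducing the full-sample ranking indicates convergence of the ranking. The toy model, the index and the sample sizes are assumptions.

```python
# Sketch of a bootstrap convergence check for sensitivity estimates: narrow
# bootstrap confidence intervals indicate convergence of the index values, and
# a high fraction of resamples reproducing the ranking indicates convergence
# of the ranking. Toy model and crude correlation-based index are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):                      # stand-in for a hydrological model
    return x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.1 * x[:, 1] * x[:, 2]

def indices(X, y):
    # crude sensitivity measure: squared correlation between each input and the output
    return np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2 for j in range(X.shape[1])])

n, k = 2000, 3
X = rng.uniform(0.0, 1.0, size=(n, k))
y = toy_model(X)

boot = np.zeros((500, k))
for b in range(boot.shape[0]):
    idx = rng.integers(0, n, size=n)   # resample rows with replacement
    boot[b] = indices(X[idx], y[idx])

ci_width = np.percentile(boot, 97.5, axis=0) - np.percentile(boot, 2.5, axis=0)
full_ranking = tuple(np.argsort(-indices(X, y)))
rank_stability = np.mean([tuple(np.argsort(-row)) == full_ranking for row in boot])

print("index CI widths:", np.round(ci_width, 3))
print("fraction of resamples reproducing the ranking:", rank_stability)
```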

  19. Sensitivity analysis of Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    Directory of Open Access Journals (Sweden)

    A. P. Jacquin

    2009-01-01

    Full Text Available This paper is concerned with the sensitivity analysis of the model parameters of the Takagi-Sugeno-Kang fuzzy rainfall-runoff models previously developed by the authors. These models are classified in two types of fuzzy models, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis and Sobol's variance decomposition. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of several measures of goodness of fit, assessing the model performance from different points of view. These measures include the Nash-Sutcliffe criteria, volumetric errors and peak errors. The results show that the sensitivity of the model parameters depends on both the catchment type and the measure used to assess the model performance.
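
    The record names Nash-Sutcliffe criteria, volumetric errors and peak errors but does not define them; the sketch below uses common textbook forms of these measures on invented flow series.

```python
# Common forms of the goodness-of-fit measures named in the abstract; the
# authors' exact definitions are not given in the record, so these are
# standard textbook choices. Flow series are invented.
import numpy as np

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def volumetric_error(obs, sim):
    return (np.sum(sim) - np.sum(obs)) / np.sum(obs)      # relative volume bias

def peak_error(obs, sim):
    return (sim.max() - obs.max()) / obs.max()            # relative peak bias

obs = np.array([1.0, 2.0, 8.0, 15.0, 6.0, 3.0, 2.0])
sim = np.array([1.2, 1.8, 7.0, 13.5, 6.5, 3.2, 2.1])
print(nash_sutcliffe(obs, sim), volumetric_error(obs, sim), peak_error(obs, sim))
```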

  20. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  1. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-12-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  2. Analysis of mixed traffic flow with human-driving and autonomous cars based on car-following model

    Science.gov (United States)

    Zhu, Wen-Xing; Zhang, H. M.

    2018-04-01

    We investigated mixed traffic flow with human-driven and autonomous cars. A new mathematical model with adjustable sensitivity and a smooth factor was proposed to describe the autonomous car's driving behavior, in which the smooth factor is used to balance the front and back headways in a flow. A lemma and a theorem were proved to support the stability criteria of the traffic flow. A series of simulations was carried out to analyze the mixed traffic flow, and fundamental diagrams were obtained from the numerical results. Varying the sensitivity and smooth factor of the autonomous cars affects the traffic flux, which exhibits opposite trends with increasing parameter values before and after the critical density. Moreover, the sensitivity of the sensors and the smooth factor play an important role in stabilizing the mixed traffic flow and suppressing traffic jams.
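
    The record does not reproduce the model equations. The sketch below is one plausible optimal-velocity-style reading, labelled as an assumption, in which the autonomous car's sensitivity a and a smooth factor p weight the front and back headways.

```python
# One plausible reading (an assumption, not the authors' equations) of the
# autonomous-car rule: an optimal-velocity-type model whose sensitivity `a`
# and smooth factor `p` balance the front and back headways.
import numpy as np

def optimal_velocity(h, v_max=2.0, h_c=4.0):
    # standard tanh-shaped optimal velocity function of the headway h
    return 0.5 * v_max * (np.tanh(h - h_c) + np.tanh(h_c))

def accel_autonomous(h_front, h_back, v, a=1.2, p=0.7):
    """Sensitivity a and smooth factor p (p = 1 ignores the follower)."""
    target = p * optimal_velocity(h_front) + (1.0 - p) * optimal_velocity(h_back)
    return a * (target - v)

print(accel_autonomous(h_front=5.0, h_back=3.0, v=1.0))
```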

  3. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  4. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    Science.gov (United States)

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
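
    VarroaPop itself is not reproduced here; the sketch below shows a Saltelli-type Monte Carlo estimator of first-order Sobol' indices applied to a toy colony-response function, which is the kind of calculation the abstract describes. The function and the parameter ranges are placeholders.

```python
# Saltelli-style Monte Carlo estimation of first-order Sobol' indices on a toy
# response function (VarroaPop is not reproduced; the function and parameter
# ranges are placeholders).
import numpy as np

rng = np.random.default_rng(1)

def toy_colony_response(x):
    # placeholder output; columns stand in for queen strength, forager
    # lifespan and pesticide toxicity
    return 50.0 * x[:, 0] + 20.0 * x[:, 1] ** 2 - 30.0 * x[:, 0] * x[:, 2]

k, n = 3, 20000
A = rng.uniform(0.0, 1.0, size=(n, k))
B = rng.uniform(0.0, 1.0, size=(n, k))
fA, fB = toy_colony_response(A), toy_colony_response(B)
var_y = np.var(np.r_[fA, fB])

first_order = []
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                     # replace column i with B's column
    fABi = toy_colony_response(ABi)
    Si = np.mean(fB * (fABi - fA)) / var_y  # Saltelli (2010) first-order estimator
    first_order.append(Si)

print(np.round(first_order, 3))
```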

  5. Development of an automated flow injection analysis system for determination of phosphate in nutrient solutions.

    Science.gov (United States)

    Karadağ, Sevinç; Görüşük, Emine M; Çetinkaya, Ebru; Deveci, Seda; Dönmez, Koray B; Uncuoğlu, Emre; Doğu, Mustafa

    2018-01-25

    A fully automated flow injection analysis (FIA) system was developed for determination of phosphate ion in nutrient solutions. This newly developed FIA system is a portable, rapid and sensitive measuring instrument that allows on-line analysis and monitoring of phosphate ion concentration in nutrient solutions. The molybdenum blue method, which is widely used in FIA phosphate analysis, was adapted to the developed FIA system. The method is based on the formation of ammonium Mo(VI) ion by reaction of ammonium molybdate with the phosphate ion present in the medium. The Mo(VI) ion then reacts with ascorbic acid and is reduced to the spectrometrically measurable Mo(V) ion. New software specific for flow analysis was developed in the LabVIEW development environment to control all the components of the FIA system. The important factors affecting the analytical signal were identified as reagent flow rate, injection volume and post-injection flow path length, and they were optimized using Box-Behnken experimental design and response surface methodology. The optimum point for the maximum analytical signal was calculated as 0.50 mL min⁻¹ reagent flow rate, 100 µL sample injection volume and 60 cm post-injection flow path length. The proposed FIA system had a sampling frequency of 100 samples per hour over a linear working range of 3-100 mg L⁻¹ (R² = 0.9995). The relative standard deviation (RSD) was 1.09% and the limit of detection (LOD) was 0.34 mg L⁻¹. Various nutrient solutions from a tomato-growing hydroponic greenhouse were analyzed with the developed FIA system and the results were found to be in good agreement with vanadomolybdate chemical method findings. © 2018 Society of Chemical Industry.
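
    As a small illustration of how a linear range, R² and a detection limit of the 3.3·σ_blank/slope type can be derived from calibration data, the sketch below fits a straight line to invented standards; the concentrations, signals and LOD convention are assumptions, and this is not the Box-Behnken optimisation itself.

```python
# Illustrative calibration treatment for an FIA phosphate channel: linear fit,
# R^2 and a 3.3*sigma/slope detection limit. Standards, signals and blank
# replicates are invented; this is not the Box-Behnken optimisation.
import numpy as np

conc   = np.array([3.0, 10.0, 25.0, 50.0, 75.0, 100.0])         # mg/L standards
signal = np.array([0.021, 0.068, 0.171, 0.342, 0.515, 0.689])   # detector response
blank_replicates = np.array([0.0018, 0.0021, 0.0016, 0.0024, 0.0019])

slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
r2 = 1.0 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)
lod = 3.3 * blank_replicates.std(ddof=1) / slope

print(f"slope={slope:.4f}, R2={r2:.4f}, LOD={lod:.2f} mg/L")
```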

  6. Sensitivity analysis of the RESRAD, a dose assessment code

    International Nuclear Information System (INIS)

    Yu, C.; Cheng, J.J.; Zielen, A.J.

    1991-01-01

    The RESRAD code is a pathway analysis code that is designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters such as soil properties and food ingestion rates in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) a built-in graphic package that shows parameter sensitivities while the RESRAD code is operational.

  7. Basic Functional Analysis Puzzles of Spectral Flow

    DEFF Research Database (Denmark)

    Booss-Bavnbek, Bernhelm

    2011-01-01

    We explain an array of basic functional analysis puzzles on the way to general spectral flow formulae and indicate a direction of future topological research for dealing with these puzzles.

  8. Abnormal traffic flow data detection based on wavelet analysis

    Directory of Open Access Journals (Sweden)

    Xiao Qian

    2016-01-01

    Full Text Available Because traffic flow data are non-stationary, abnormal data are difficult to detect. This paper proposes a method for detecting abnormal traffic flow data based on wavelet analysis and the least squares method. Wavelet analysis is first used to separate the traffic flow data into high-frequency and low-frequency components, and the least squares method is then applied to find abnormal points in the reconstructed signal data. Simulation results show that detecting abnormal traffic flow data with wavelet analysis combined with least squares effectively reduces both the misjudgment rate and the false negative rate of the detection results.
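
    A sketch of the described two-step approach, under stated assumptions: a wavelet decomposition (PyWavelets) separates low- and high-frequency components, and a least-squares trend fit on the reconstructed signal flags large residuals as abnormal points. The synthetic series, wavelet choice, polynomial degree and 3-sigma rule are illustrative choices only.

```python
# Sketch: wavelet separation of a traffic-flow series into low/high-frequency
# parts, then a least-squares trend fit whose large residuals flag abnormal
# points. Requires PyWavelets; the data and thresholds are assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.arange(288)                                   # 5-min counts over one day
flow = 300 + 200 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 15, t.size)
flow[[60, 150]] += [250, -220]                       # injected abnormal points

# separate components and keep only the low-frequency approximation
coeffs = pywt.wavedec(flow, "db4", level=3)
denoised = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                        "db4")[: flow.size]

# least-squares trend on the reconstructed signal, then a 3-sigma residual test
trend = np.polyval(np.polyfit(t, denoised, deg=6), t)
resid = flow - trend
abnormal = np.flatnonzero(np.abs(resid) > 3.0 * resid.std())
print(abnormal)
```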

  9. Sensitivity analysis in a structural reliability context

    International Nuclear Information System (INIS)

    Lemaitre, Paul

    2014-01-01

    This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that allows a complex physical phenomenon to be reproduced. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output might be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not in the reliability scope. This thesis' aim is to test existing sensitivity analysis methods and to propose more efficient original methods. A bibliographical review of sensitivity analysis on the one hand, and of the estimation of small failure probabilities on the other, is first presented. This review raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one makes use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between the original density of each input and its density conditional on the subset reached. A more general and original methodology reflecting the impact of input density modification on the failure probability is then explored. The proposed methods are then applied to the CWNR case, which motivates this thesis. (author)

  10. Sensitivity analysis using probability bounding

    International Nuclear Information System (INIS)

    Ferson, Scott; Troy Tucker, W.

    2006-01-01

    Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
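
    The pinching idea can be illustrated with a deliberately simplified sketch: inputs are represented as plain intervals rather than probability boxes, and the model is assumed monotone so that output bounds follow from corner evaluations. The reduction in output interval width when an input is pinched to its midpoint then serves as a crude sensitivity measure. The model and bounds below are arbitrary assumptions, not part of the paper.

```python
# Interval "pinching" sketch: measure how much the output interval shrinks
# when each uncertain input is replaced by a precise (midpoint) value.
import itertools

def model(a, b, c):
    return a * b + c            # placeholder monotone model

inputs = {"a": (1.0, 3.0), "b": (0.5, 2.0), "c": (10.0, 14.0)}

def output_width(bounds):
    vals = [model(*corner) for corner in itertools.product(*bounds.values())]
    return max(vals) - min(vals)

base_width = output_width(inputs)
for name, (lo, hi) in inputs.items():
    pinched = dict(inputs)
    pinched[name] = ((lo + hi) / 2, (lo + hi) / 2)   # pinch to midpoint
    reduction = 1 - output_width(pinched) / base_width
    print(f"pinching {name}: output interval width reduced by {reduction:.0%}")
```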

  11. LDV measurement, flow visualization and numerical analysis of flow distribution in a close-coupled catalytic converter

    International Nuclear Information System (INIS)

    Kim, Duk Sang; Cho, Yong Seok

    2004-01-01

    Results from an experimental study of flow distribution in a Close-coupled Catalytic Converter (CCC) are presented. The experiments were carried out with a flow measurement system specially designed for this study under steady and transient flow conditions. A pitot tube was used to measure the flow distribution at the exit of the first monolith. The flow distribution of the CCC was also measured by an LDV system and flow visualization. Results from numerical analysis are also presented. Experimental results showed that the flow uniformity index decreases as the flow Reynolds number increases. In steady flow conditions, the flow through each exhaust pipe produced flow concentrations on a specific region of the CCC inlet. The transient test results showed that the flows through the exhaust pipes, in the engine firing order, interacted with each other such that the flow distribution was uniform. The results of the numerical analysis were in qualitative agreement with the experimental results; they supported and helped explain the flow in the entry region of the CCC

  12. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows the sensitivity indices of each scalar model input to be estimated, while the 'dispersion model' allows the total sensitivity index of the functional model inputs to be derived. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates the nuclear fuel irradiation.
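
    For readers unfamiliar with Sobol' indices, the short Monte Carlo sketch below estimates first-order indices for the Ishigami test function with a pick-freeze (Saltelli-style) estimator. It illustrates only the indices themselves, not the joint GLM/GAM metamodelling workflow proposed in the paper.

```python
# First-order Sobol' indices for the Ishigami function by a pick-freeze estimator.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(42)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # replace only column i
    S1 = np.mean(fB * (ishigami(ABi) - fA)) / var
    print(f"S{i + 1} ~ {S1:.3f}")            # analytic: 0.314, 0.442, 0.0
```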

  13. Flow Analysis for the Falkner–Skan Wedge Flow

    DEFF Research Database (Denmark)

    Bararnia, H; Haghparast, N; Miansari, M

    2012-01-01

    In this article an analytical technique, namely the homotopy analysis method (HAM), is applied to solve the momentum and energy equations in the case of a two-dimensional incompressible flow passing over a wedge. The trial and error method and Padé approximation strategies have been used to obtai...

  14. Space shuttle booster multi-engine base flow analysis

    Science.gov (United States)

    Tang, H. H.; Gardiner, C. R.; Anderson, W. A.; Navickas, J.

    1972-01-01

    A comprehensive review of currently available techniques pertinent to several prominent aspects of the base thermal problem of the space shuttle booster is given along with a brief review of experimental results. A tractable engineering analysis, capable of predicting the power-on base pressure, base heating, and other base thermal environmental conditions, such as base gas temperature, is presented and used for an analysis of various space shuttle booster configurations. The analysis consists of a rational combination of theoretical treatments of the prominent flow interaction phenomena in the base region. These theories consider jet mixing, plume flow, axisymmetric flow effects, base injection, recirculating flow dynamics, and various modes of heat transfer. Such effects as initial boundary layer expansion at the nozzle lip, reattachment, recompression, choked vent flow, and nonisoenergetic mixing processes are included in the analysis. A unified method was developed and programmed to numerically obtain compatible solutions for the various flow field components in both flight and ground test conditions. A preliminary prediction for a 12-engine space shuttle booster base thermal environment was obtained for a typical trajectory history. Theoretical predictions were also obtained for some clustered-engine experimental conditions. Results indicate good agreement between the data and theoretical predictions.

  15. Present status of numerical analysis on transient two-phase flow

    International Nuclear Information System (INIS)

    Akimoto, Masayuki; Hirano, Masashi; Nariai, Hideki.

    1987-01-01

    The Special Committee for Numerical Analysis of Thermal Flow has recently been established under the Japan Atomic Energy Association. Here, some methods currently used for numerical analysis of transient two-phase flow are described, citing some information given in the first report of the above-mentioned committee. Many analytical models for transient two-phase flow have been proposed, each of which is designed to describe a flow by using differential equations associated with conservation of mass, momentum and energy in a continuous two-phase flow system, together with constitutive equations that represent the transport of mass, momentum and energy through a gas-liquid interface or between a liquid flow and the channel wall. The author has developed an analysis code, called MINCS, that serves for systematic examination of the conservation equations and constitutive equations of two-phase flow models. A one-dimensional, non-equilibrium two-fluid flow model that is used as the basic model for the code is described. Actual procedures for numerical analysis are shown and some problems concerning transient two-phase analysis are described. (Nogami, K.)

  16. Fast Virtual Fractional Flow Reserve Based Upon Steady-State Computational Fluid Dynamics Analysis: Results From the VIRTU-Fast Study.

    Science.gov (United States)

    Morris, Paul D; Silva Soto, Daniel Alejandro; Feher, Jeroen F A; Rafiroiu, Dan; Lungu, Angela; Varma, Susheel; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2017-08-01

    Fractional flow reserve (FFR)-guided percutaneous intervention is superior to standard assessment but remains underused. The authors have developed a novel "pseudotransient" analysis protocol for computing virtual fractional flow reserve (vFFR) based upon angiographic images and steady-state computational fluid dynamics. This protocol generates vFFR results in 189 s (cf >24 h for transient analysis) using a desktop PC, with <1% error relative to that of full-transient computational fluid dynamics analysis. Sensitivity analysis demonstrated that physiological lesion significance was influenced less by coronary or lesion anatomy (33%) and more by microvascular physiology (59%). If coronary microvascular resistance can be estimated, vFFR can be accurately computed in less time than it takes to make invasive measurements.

  17. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  18. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  19. Sensitivity analysis in remote sensing

    CERN Document Server

    Ustinov, Eugene A

    2015-01-01

    This book contains a detailed presentation of the general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to the formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves the same purpose for quantitative models of physical objects as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing, SA provides computer-efficient means to compute the Jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The Jacobians are used to solve corresponding inver...

  20. Numerical 3D flow simulation of attached cavitation structures at ultrasonic horn tips and statistical evaluation of flow aggressiveness via load collectives

    Science.gov (United States)

    Mottyll, S.; Skoda, R.

    2015-12-01

    A compressible inviscid flow solver with a barotropic cavitation model is applied to two different ultrasonic horn set-ups and compared to hydrophone, shadowgraphy and erosion test data. The statistical analysis of single collapse events in wall-adjacent flow regions allows the determination of the flow aggressiveness via load collectives (cumulative event rate vs collapse pressure), which show an exponential decrease in agreement with studies on hydrodynamic cavitation [1]. A post-processing projection of event rate and collapse pressure on a reference grid reduces the grid dependency significantly. In order to evaluate the erosion-sensitive areas, a statistical analysis of transient wall loads is utilised. Predicted erosion-sensitive areas as well as the temporal pressure and vapour volume evolution are in good agreement with the experimental data.

  1. The use of a polymer inclusion membrane for separation and preconcentration of orthophosphate in flow analysis

    International Nuclear Information System (INIS)

    Nagul, Edward A.; Fontàs, Clàudia; McKelvie, Ian D.; Cattrall, Robert W.; Kolev, Spas D.

    2013-01-01

    Highlights: •A flow analysis system determines phosphate at trace levels as molybdenum blue. •The flow system can operate under flow injection or continuous flow conditions. •On-line membrane-based separation and preconcentration is applied. •A polymer inclusion membrane composed of 70 wt% PVC and 30 wt% Aliquat 336 is used. •The flow system was successfully applied to a number of pristine water samples. -- Abstract: A highly sensitive flow analysis system has been developed for the trace determination of reactive phosphate in natural waters, which uses a polymer inclusion membrane (PIM) with Aliquat 336 as the carrier for on-line analyte separation and preconcentration. The system operates under flow injection (FI) and continuous flow (CF) conditions. Under optimal FI conditions the system is characterised by a linear concentration range between 0.5 and 1000 μg L⁻¹ P, a sampling rate of 10 h⁻¹, a limit of detection of 0.5 μg L⁻¹ P and RSDs of 3.2% (n = 10, 100 μg L⁻¹) and 7.7% (n = 10, 10 μg L⁻¹). Under CF conditions, with a 10 min stop-flow time and a sample solution flow rate of 1.32 mL min⁻¹, the flow system offers a limit of detection of 0.04 μg L⁻¹ P, a sampling rate of 5 h⁻¹ and an RSD of 3.4% (n = 5, 2.0 μg L⁻¹). Interference studies revealed that anions commonly found in natural waters did not interfere when in excess of at least one order of magnitude. The flow system, operating under CF conditions, was successfully applied to the analysis of natural water samples containing concentrations of phosphate in the low μg L⁻¹ P range, using the multipoint standard addition method

  2. The use of a polymer inclusion membrane for separation and preconcentration of orthophosphate in flow analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nagul, Edward A. [School of Chemistry, The University of Melbourne, Victoria 3010 (Australia); Centre for Aquatic Pollution Identification and Management (CAPIM), The University of Melbourne, Victoria 3010 (Australia); Fontàs, Clàudia [Department of Chemistry, University of Girona, Campus Montilivi, 17071 Girona (Spain); McKelvie, Ian D. [School of Chemistry, The University of Melbourne, Victoria 3010 (Australia); School of Geography, Earth and Environmental Sciences, Plymouth University, Plymouth PL48AA (United Kingdom); Cattrall, Robert W. [School of Chemistry, The University of Melbourne, Victoria 3010 (Australia); Kolev, Spas D., E-mail: s.kolev@unimelb.edu.au [School of Chemistry, The University of Melbourne, Victoria 3010 (Australia); Centre for Aquatic Pollution Identification and Management (CAPIM), The University of Melbourne, Victoria 3010 (Australia)

    2013-11-25

    Highlights: •A flow analysis system determines phosphate at trace levels as molybdenum blue. •The flow system can operate under flow injection or continuous flow conditions. •On-line membrane-based separation and preconcentration is applied. •A polymer inclusion membrane composed of 70 wt% PVC and 30 wt% Aliquat 336 is used. •The flow system was successfully applied to a number of pristine water samples. -- Abstract: A highly sensitive flow analysis system has been developed for the trace determination of reactive phosphate in natural waters, which uses a polymer inclusion membrane (PIM) with Aliquat 336 as the carrier for on-line analyte separation and preconcentration. The system operates under flow injection (FI) and continuous flow (CF) conditions. Under optimal FI conditions the system is characterised by a linear concentration range between 0.5 and 1000 μg L⁻¹ P, a sampling rate of 10 h⁻¹, a limit of detection of 0.5 μg L⁻¹ P and RSDs of 3.2% (n = 10, 100 μg L⁻¹) and 7.7% (n = 10, 10 μg L⁻¹). Under CF conditions, with a 10 min stop-flow time and a sample solution flow rate of 1.32 mL min⁻¹, the flow system offers a limit of detection of 0.04 μg L⁻¹ P, a sampling rate of 5 h⁻¹ and an RSD of 3.4% (n = 5, 2.0 μg L⁻¹). Interference studies revealed that anions commonly found in natural waters did not interfere when in excess of at least one order of magnitude. The flow system, operating under CF conditions, was successfully applied to the analysis of natural water samples containing concentrations of phosphate in the low μg L⁻¹ P range, using the multipoint standard addition method.

  3. Retrospective Analysis of T and B Cells Flow-Cross Matches in Renal Transplant Recipients

    Directory of Open Access Journals (Sweden)

    Lakshmi Kiran C

    2008-01-01

    Full Text Available Complement-mediated cytotoxic antibodies in the conventional cross match often result in misappropriation of true positives and borderline positives, which are detrimental to allograft survival. Flow cross matches (FCXM) are sensitive enough to capture even non-complement-fixing cytotoxic antibodies. This retrospective study evaluates the utility of FCXM in effectively predicting acute allograft rejection. A total of 17 cases were processed for FCXM (T and B cell), of whom seven had no rejection episodes, while the remaining 11 had acute rejection despite a negative cross match and panel reactive antibodies being negative (less than 20%). The sensitivity and specificity of the FCXM outcome demonstrated that a positive B-cell FCXM has the potential to be a good tool in pre-transplant screening. The current analysis proposes the possible utility of a positive B-cell FCXM as a more sensitive parameter in predicting acute allograft rejection prior to transplantation.

  4. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters mostly affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC) which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on the mutual information which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on the Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and the mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
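
    A compact sketch of the two measures on a toy response (not the SWMM set-up of the paper) is given below: inputs are drawn by Latin hypercube sampling, PRCCs are computed by correlating rank-regression residuals, and mutual information is estimated with scikit-learn. The toy model and sample size are assumptions for illustration.

```python
# PRCC and mutual information sensitivity measures on a toy model.
import numpy as np
from scipy.stats import qmc, rankdata, pearsonr
from sklearn.feature_selection import mutual_info_regression

def toy_model(x):
    # placeholder runoff-like response
    return 2.0 * x[:, 0] ** 2 + np.sin(3 * x[:, 1]) + 0.1 * x[:, 2]

X = qmc.LatinHypercube(d=3, seed=1).random(n=2000)
y = toy_model(X)

def prcc(X, y, i):
    """Partial rank correlation of column i with y, controlling for the rest."""
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    Z = np.column_stack([np.ones(len(y)), np.delete(R, i, axis=1)])
    res_x = R[:, i] - Z @ np.linalg.lstsq(Z, R[:, i], rcond=None)[0]
    res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
    return pearsonr(res_x, res_y)[0]

mi = mutual_info_regression(X, y, random_state=1)
for i in range(3):
    print(f"x{i}: PRCC = {prcc(X, y, i):+.2f}, MI = {mi[i]:.2f}")
```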

  5. A general first-order global sensitivity analysis method

    International Nuclear Information System (INIS)

    Xu Chonggang; Gertner, George Zdzislaw

    2008-01-01

    Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters

  6. Climate Informed Low Flow Frequency Analysis Using Nonstationary Modeling

    Science.gov (United States)

    Liu, D.; Guo, S.; Lian, Y.

    2014-12-01

    Stationarity is often assumed for frequency analysis of low flows in water resources management and planning. However, many studies have shown that flow characteristics, particularly the frequency spectrum of extreme hydrologic events, were modified by climate change and human activities, and that conventional frequency analysis without considering the non-stationary characteristics may lead to costly designs. The analysis presented in this paper was based on more than 100 years of daily flow data from the Yichang gaging station, 44 kilometers downstream of the Three Gorges Dam. The Mann-Kendall trend test under the scaling hypothesis showed that the annual low flows had a significant monotonic trend, whereas an abrupt change point was identified in 1936 by the Pettitt test. The climate-informed low flow frequency analysis and the divided and combined method are employed to account for the impacts of related climate variables and the nonstationarities in annual low flows. Without prior knowledge of the probability density function for the gaging station, six distribution functions including the Generalized Extreme Value (GEV), Pearson Type III, Gumbel, Gamma, Lognormal, and Weibull distributions have been tested to find the best fit, in which the local likelihood method is used to estimate the parameters. Analyses show that the GEV had the best fit for the observed low flows. This study has also shown that the climate-informed low flow frequency analysis is able to exploit the link between climate indices and low flows, which accounts for the dynamic features needed in reservoir management and provides more accurate and reliable designs for infrastructure and water supply.
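
    As a much-reduced illustration of the distribution-fitting step, the sketch below fits a stationary GEV distribution to a synthetic series of annual low flows and reads off a 10-year design quantile; the paper's climate-informed, nonstationary analysis with local-likelihood parameter estimation goes well beyond this.

```python
# Stationary GEV fit to synthetic annual low flows and a 10-year quantile.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
annual_low_flow = rng.gamma(shape=9.0, scale=400.0, size=100)  # m^3/s, synthetic

shape, loc, scale = genextreme.fit(annual_low_flow)
q10 = genextreme.ppf(0.1, shape, loc=loc, scale=scale)   # 10 % non-exceedance
print(f"GEV shape={shape:.2f}, loc={loc:.0f}, scale={scale:.0f}")
print(f"Estimated 10-year low flow (10 % quantile): {q10:.0f} m^3/s")
```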

  7. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

    Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure, and quantifying the effects of each random source or their distribution parameters on the failure probability or reliability. In this paper, the time-dependent parametric reliability sensitivity (PRS) analysis as well as the global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change of each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with the first order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices are demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.

  8. On conditions and parameters important to model sensitivity for unsaturated flow through layered, fractured tuff

    International Nuclear Information System (INIS)

    Prindle, R.W.; Hopkins, P.L.

    1990-10-01

    The Hydrologic Code Intercomparison Project (HYDROCOIN) was formed to evaluate hydrogeologic models and computer codes and their use in performance assessment for high-level radioactive-waste repositories. This report describes the results of a study for HYDROCOIN of model sensitivity for isothermal, unsaturated flow through layered, fractured tuffs. We investigated both the types of flow behavior that dominate the performance measures and the conditions and model parameters that control flow behavior. We also examined the effect of different conceptual models and modeling approaches on our understanding of system behavior. The analyses included single- and multiple-parameter variations about base cases in one-dimensional steady and transient flow and in two-dimensional steady flow. The flow behavior is complex even for the highly simplified and constrained system modeled here. The response of the performance measures is both nonlinear and nonmonotonic. System behavior is dominated by abrupt transitions from matrix to fracture flow and by lateral diversion of flow. The observed behaviors are strongly influenced by the imposed boundary conditions and model constraints. Applied flux plays a critical role in determining the flow type but interacts strongly with the composite-conductivity curves of individual hydrologic units and with the stratigraphy. One-dimensional modeling yields conservative estimates of distributions of groundwater travel time only under very limited conditions. This study demonstrates that it is wrong to equate the shortest possible water-travel path with the fastest path from the repository to the water table. 20 refs., 234 figs., 10 tabs

  9. Numerical flow analysis of axial flow compressor for steady and unsteady flow cases

    Science.gov (United States)

    Prabhudev, B. M.; Satish kumar, S.; Rajanna, D.

    2017-07-01

    The performance of a jet engine is dependent on the performance of its compressor. This paper gives a numerical study of the performance characteristics of an axial compressor. The test rig is located at CSIR LAB Bangalore. Flow domains are meshed and the fluid dynamic equations are solved using the ANSYS package. Analysis is done for six different speeds and for operating conditions such as choke, maximum efficiency and before-stall points. Different plots are compared and the results are discussed. Shock displacement, vortex flows, and leakage patterns are presented along with an unsteady FFT plot and a time step plot.

  10. Technical advances in flow cytometry-based diagnosis and monitoring of paroxysmal nocturnal hemoglobinuria

    Science.gov (United States)

    Correia, Rodolfo Patussi; Bento, Laiz Cameirão; Bortolucci, Ana Carolina Apelle; Alexandre, Anderson Marega; Vaz, Andressa da Costa; Schimidell, Daniela; Pedro, Eduardo de Carvalho; Perin, Fabricio Simões; Nozawa, Sonia Tsukasa; Mendes, Cláudio Ernesto Albers; Barroso, Rodrigo de Souza; Bacal, Nydia Strachman

    2016-01-01

    ABSTRACT Objective: To discuss the implementation of technical advances in laboratory diagnosis and monitoring of paroxysmal nocturnal hemoglobinuria for validation of high-sensitivity flow cytometry protocols. Methods: A retrospective study based on analysis of laboratory data from 745 patient samples submitted to flow cytometry for diagnosis and/or monitoring of paroxysmal nocturnal hemoglobinuria. Results: Implementation of technical advances reduced test costs and improved flow cytometry resolution for paroxysmal nocturnal hemoglobinuria clone detection. Conclusion: High-sensitivity flow cytometry allowed more sensitive determination of paroxysmal nocturnal hemoglobinuria clone type and size, particularly in samples with small clones. PMID:27759825

  11. Techno-Economic Modeling and Analysis of Redox Flow Battery Systems

    Directory of Open Access Journals (Sweden)

    Jens Noack

    2016-08-01

    Full Text Available A techno-economic model was developed to investigate the influence of components on the system costs of redox flow batteries. Sensitivity analyses were carried out based on an example of a 10 kW/120 kWh vanadium redox flow battery system, and the costs of the individual components were analyzed. Particular consideration was given to the influence of the material costs and resistances of bipolar plates and energy storage media as well as voltages and electric currents. Based on the developed model, it was possible to formulate statements about the targeted optimization of a developed non-commercial vanadium redox flow battery system and general aspects for future developments of redox flow batteries.

  12. Is There a Difference in Credit Constraints Between Private and Listed Companies in Brazil? Empirical Evidence by The Cash Flow Sensitivity Approach

    Directory of Open Access Journals (Sweden)

    Alan Nader Ackel Ghani

    2015-04-01

    Full Text Available This article analyzes the credit constraints, using the cash flow sensitivity approach, of private and listed companies between 2007 and 2010. Using this approach, the econometric results show that the credit constraints are the same for private and listed companies. This paper seeks to contribute to the literature because studies of the credit constraints of private companies based on cash flow sensitivity in Brazil have been rare.

  13. Cutting-edge analysis of extracellular microparticles using ImageStream(X) imaging flow cytometry.

    Science.gov (United States)

    Headland, Sarah E; Jones, Hefin R; D'Sa, Adelina S V; Perretti, Mauro; Norling, Lucy V

    2014-06-10

    Interest in extracellular vesicle biology has exploded in the past decade, since these microstructures seem endowed with multiple roles, from blood coagulation to inter-cellular communication in pathophysiology. In order for microparticle research to evolve as a preclinical and clinical tool, accurate quantification of microparticle levels is a fundamental requirement, but their size and the complexity of sample fluids present major technical challenges. Flow cytometry is commonly used, but suffers from low sensitivity and accuracy. Use of Amnis ImageStream(X) Mk II imaging flow cytometer afforded accurate analysis of calibration beads ranging from 1 μm to 20 nm; and microparticles, which could be observed and quantified in whole blood, platelet-rich and platelet-free plasma and in leukocyte supernatants. Another advantage was the minimal sample preparation and volume required. Use of this high throughput analyzer allowed simultaneous phenotypic definition of the parent cells and offspring microparticles along with real time microparticle generation kinetics. With the current paucity of reliable techniques for the analysis of microparticles, we propose that the ImageStream(X) could be used effectively to advance this scientific field.

  14. Numerical analysis of hypersonic turbulent film cooling flows

    Science.gov (United States)

    Chen, Y. S.; Chen, C. P.; Wei, H.

    1992-01-01

    As a building block, numerical capabilities for predicting heat flux and turbulent flowfields of hypersonic vehicles require extensive model validations. Computational procedures for calculating turbulent flows and heat fluxes for supersonic film cooling with parallel slot injections are described in this study. Two injectant mass flow rates with matched and unmatched pressure conditions using the database of Holden et al. (1990) are considered. To avoid uncertainties associated with the boundary conditions in testing turbulence models, detailed three-dimensional flowfields of the injection nozzle were calculated. Two computational fluid dynamics codes, GASP and FDNS, with the algebraic Baldwin-Lomax and k-epsilon models with compressibility corrections were used. It was found that the B-L model which resolves near-wall viscous sublayer is very sensitive to the inlet boundary conditions at the nozzle exit face. The k-epsilon models with improved wall functions are less sensitive to the inlet boundary conditions. The testings show that compressibility corrections are necessary for the k-epsilon model to realistically predict the heat fluxes of the hypersonic film cooling problems.

  15. Meanline Analysis of Turbines with Choked Flow in the Object-Oriented Turbomachinery Analysis Code

    Science.gov (United States)

    Hendricks, Eric S.

    2016-01-01

    The Object-Oriented Turbomachinery Analysis Code (OTAC) is a new meanline/streamline turbomachinery modeling tool being developed at NASA GRC. During the development process, a limitation of the code was discovered in relation to the analysis of choked flow in axial turbines. This paper describes the relevant physics for choked flow as well as the changes made to OTAC to enable analysis in this flow regime.

  16. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    Energy Technology Data Exchange (ETDEWEB)

    Ekstroem, P.A.; Broed, R. [Facilia AB, Stockholm, (Sweden)

    2006-05-15

    Computer-based models can be used to approximate real life processes. These models are usually based on mathematical equations, which are dependent on several variables. The predictive capability of models is therefore limited by the uncertainty in the values of these variables. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation. Sensitivity analysis is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation are often used, for example correlation or linear regression coefficients. These methods work well for linear models, but for non-linear models their sensitivity estimations are not accurate. Usually models of complex natural systems are non-linear. Within the scope of this work, various sensitivity analysis methods, which can cope with linear, non-linear, as well as non-monotone problems, have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method and the Smirnov and the Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several

  17. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    International Nuclear Information System (INIS)

    Ekstroem, P.A.; Broed, R.

    2006-05-01

    Computer-based models can be used to approximate real life processes. These models are usually based on mathematical equations, which are dependent on several variables. The predictive capability of models is therefore limited by the uncertainty in the values of these variables. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation. Sensitivity analysis is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation are often used, for example correlation or linear regression coefficients. These methods work well for linear models, but for non-linear models their sensitivity estimations are not accurate. Usually models of complex natural systems are non-linear. Within the scope of this work, various sensitivity analysis methods, which can cope with linear, non-linear, as well as non-monotone problems, have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method and the Smirnov and the Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked

  18. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised with respect to model inputs.

    In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.

  19. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition
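
    One of the simpler techniques in the list, standardized (rank) regression coefficients, is sketched below on a toy model with uniformly sampled inputs; the model and sample size are arbitrary assumptions chosen only to show the calculation.

```python
# Standardised regression coefficients (SRC) and their rank-based variant (SRRC).
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (5000, 3))
y = 4 * X[:, 0] + X[:, 1] ** 3 + 0.2 * rng.normal(size=5000)   # toy response

def src(X, y):
    """Standardised regression coefficients from an ordinary least-squares fit."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(Z, y, rcond=None)[0][1:]
    return beta * X.std(axis=0) / y.std()

print("SRC :", np.round(src(X, y), 2))
print("SRRC:", np.round(src(np.column_stack([rankdata(c) for c in X.T]),
                            rankdata(y)), 2))
```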

  20. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD. (.; .); Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.

  1. Systemization of burnup sensitivity analysis code

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2004-02-01

    For the practical use of fast reactors, it is a very important subject to improve the prediction accuracy of neutronic properties in LMFBR cores, from the viewpoints of improving plant efficiency with rationally high-performance cores and of improving reliability and safety margins. A distinct improvement in the accuracy of nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss, breeding ratio and so on. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using the data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Burnup sensitivity analysis is needed to make effective use of burnup characteristics data from actual cores in the cross-section adjustment method. So far, an analysis code for burnup sensitivity, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence becomes inefficient because of the heavy burden placed on the user by the complexity of the burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functionalities in the existing large system. It is not sufficient simply to unify the computational components, since the computational sequence may change with the item being analyzed or with the purpose, such as interpretation of the physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized with functional component blocks that can be divided or assembled as the occasion demands. For this

  2. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Laxemar

    International Nuclear Information System (INIS)

    Aneljung, Maria; Sassner, Mona; Gustafsson, Lars-Goeran

    2007-11-01

    between measured and calculated surface water discharges, but the model generally underestimates the total runoff from the area. The model also overestimates the groundwater levels, and the modelled groundwater level amplitudes are too small in many boreholes. A number of likely or potential reasons for these deviations can be identified: The surface stream network description in the model is incomplete. This implies that too little overland water is drained from the area by the streams, which creates ponded areas in the model that do not exist in reality. These areas are characterized by large evaporation and infiltration, contributing to groundwater recharge and reducing transpiration from the groundwater table, in turn creating high and relatively stable groundwater levels compared to those measured at the site. In order to improve the agreement between measured and modelled surface water discharges, the evapotranspiration was reduced in the model; in effect, this implied a reduction of the potential evapotranspiration. This probably caused a larger groundwater recharge and less transpiration during summer, thereby reducing the variations in the modelled groundwater levels. If the MIKE 11 stream network is updated, the potential evapotranspiration could be increased again, such that the modelling of groundwater dynamics is improved. The bottom boundary condition and the hydraulic conductivity of the bedrock may have a large effect on model-calculated near-surface/surface water flows in Laxemar. A sensitivity analysis shows that lowering the hydraulic head at the bottom boundary (located at 150 metres below sea level) lowers the groundwater levels in the Quaternary deposits, but also implies smaller surface water discharges. Lowering the hydraulic conductivity of the bedrock would increase groundwater flows to Quaternary deposits in groundwater discharge areas, which raises groundwater levels and reduces fluctuation amplitudes. An alternative model approach, using a

  3. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.

  4. Cryogenic recovery analysis of forced flow supercritical helium cooled superconductors

    International Nuclear Information System (INIS)

    Lee, A.Y.

    1977-08-01

    A coupled heat conduction and fluid flow method of solution was presented for cryogenic stability analysis of cabled composite superconductors of large scale magnetic coils. The coils are cooled by forced flow supercritical helium in parallel flow channels. The coolant flow reduction in one of the channels during the spontaneous recovery transient, after the conductor undergoes a transition from superconducting to resistive, necessitates a parallel channel analysis. A way to simulate the parallel channel analysis is described to calculate the initial channel inlet flow rate required for recovery after a given amount of heat is deposited. The recovery capability of a NbTi plus copper composite superconductor design is analyzed and the results presented. If the hydraulics of the coolant flow is neglected in the recovery analysis, the recovery capability of the superconductor will be over-predicted

  5. PIE Nacelle Flow Analysis and TCA Inlet Flow Quality Assessment

    Science.gov (United States)

    Shieh, C. F.; Arslan, Alan; Sundaran, P.; Kim, Suk; Won, Mark J.

    1999-01-01

    This presentation includes three topics: (1) Analysis of isolated boattail drag; (2) Computation of Technology Concept Airplane (TCA)-installed nacelle effects on aerodynamic performance; and (3) Assessment of TCA inlet flow quality.

  6. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. Now the discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function analysis and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  7. CFD Analysis of Random Turbulent Flow Load in Steam Generator of APR1400 Under Normal Operation Condition

    International Nuclear Information System (INIS)

    Lim, Sang Gyu; You, Sung Chang; Kim, Han Gon

    2011-01-01

    Regulatory Guide 1.20 revision 3 of the Nuclear Regulatory Commission (NRC) modifies the guidance for vibration assessments of reactor internals and steam generator internals. The new guidance requires applicants to provide a preliminary analysis and evaluation of the design and performance of a facility, including the safety margins during normal operation and the transient conditions anticipated during the life of the facility. In particular, revision 3 requires rigorous assessment of adverse flow effects in the steam dryer caused by flow-excited acoustic and structural resonances, such as the anomalies observed in power-uprated BWR cases. Of two nearly identical nuclear power plants, the steam system of one BWR plant experienced failure of the steam dryers and the main steam system components when the steam flow was increased by 16 percent for an extended power uprate (EPU). The mechanisms of those failures revealed that a small adverse flow change from the prototype condition induced severe flow-excited acoustic and structural resonances, leading to structural failures. In accordance with this historical background, therefore, potential adverse flow effects should be evaluated rigorously for steam generator internals in both BWR and Pressurized Water Reactor (PWR) plants. The Advanced Power Reactor 1400 (APR1400), an evolutionary light water reactor, increased the power by 7.7 percent from the design of the 'Valid Prototype', System80+. Thus, reliable evaluations of potential adverse flow effects on the steam generator of APR1400 are necessary according to the regulatory guide. This paper presents part of the computational fluid dynamics (CFD) analysis results for the evaluation of adverse flow effects on the steam generator internals of APR1400, including a series of sensitivity analyses to enhance the reliability of the CFD analysis and an estimation of the effect of flow loads on the internals of the steam generator under normal operation conditions

  8. Beyond sensitivity analysis

    DEFF Research Database (Denmark)

    Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad

    2018-01-01

    of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological...... point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only...... are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it on the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel...

  9. Stress Analysis of Fuel Rod under Axial Coolant Flow

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Hai Lan; Lee, Young Shin; Lee, Hyun Seung [Chungnam National University, Daejeon (Korea, Republic of); Park, Num Kyu; Jeon, Kyung Rok [Korea Nuclear Fuel, Daejeon (Korea, Republic of)

    2010-05-15

    A pressurized water reactor (PWR) fuel assembly, which uses light water as a coolant, is a typical bundle structure found in most commercial nuclear power plants. Fuel rods, which have very slender and long cladding, are supported by the fuel assembly, which consists of several spacer grids. A coolant is a fluid that flows through a device to prevent its overheating, transferring the heat produced by the device to other devices that use or dissipate it. At the same time, however, the coolant flow brings about fluid-induced vibration (FIV) of the fuel rods and can even damage them. This study was conducted to investigate the flow characteristics and the nuclear reactor fuel rod stress under the effect of the coolant. A fluid-structure interaction (FSI) analysis of a nuclear reactor fuel rod was performed. Fluid analysis of the coolant flowing in the axial direction and structural analysis under the effect of the flow velocity were carried out for different outlet flow velocity conditions

  10. Stress Analysis of Fuel Rod under Axial Coolant Flow

    International Nuclear Information System (INIS)

    Jin, Hai Lan; Lee, Young Shin; Lee, Hyun Seung; Park, Num Kyu; Jeon, Kyung Rok

    2010-01-01

    A pressurized water reactor (PWR) fuel assembly, which uses light water as a coolant, is a typical bundle structure found in most commercial nuclear power plants. Fuel rods, which have very slender and long cladding, are supported by the fuel assembly, which consists of several spacer grids. A coolant is a fluid that flows through a device to prevent its overheating, transferring the heat produced by the device to other devices that use or dissipate it. At the same time, however, the coolant flow brings about fluid-induced vibration (FIV) of the fuel rods and can even damage them. This study was conducted to investigate the flow characteristics and the nuclear reactor fuel rod stress under the effect of the coolant. A fluid-structure interaction (FSI) analysis of a nuclear reactor fuel rod was performed. Fluid analysis of the coolant flowing in the axial direction and structural analysis under the effect of the flow velocity were carried out for different outlet flow velocity conditions

  11. Risk Characterization uncertainties associated description, sensitivity analysis

    International Nuclear Information System (INIS)

    Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.

    2013-01-01

    The PowerPoint presentation addresses risks from estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, the formulation of guidelines for carcinogenic and genotoxic compounds, and risks to subpopulations

  12. Sensitivity of break-flow-partition on the containment pressure and temperature

    International Nuclear Information System (INIS)

    Kwon, Young Min; Song, Jin Ho; Lee, Sang Yong

    1994-01-01

    For the case of RCS blowdown into the vapor region of a containment at low pressure, the blowdown mixture starts to boil at the containment pressure and liquid separates from the flow near the break location. The flashed steam is added to the containment atmosphere and the liquid falls to the sump. Analytically, the break flow can be divided into steam and liquid in a number of ways. Discussed in this study are three partition models and the Instantaneous Mixing (IM) model employed in different containment analysis computer codes. The IM model is employed in the CONTRANS code developed by ABB-CE for containment thermodynamic analysis. The various partition models were applied to the double-ended discharge leg slot break (DEDLS) LOCA, which is the containment design basis accident (CDBA) in the Ulchin 3 and 4 PSAR. It was shown that the IM model is the most conservative for the containment design pressure analysis. Results of the CONTRANS analyses are compared with those of the UCN PSAR, for which the CONTEMPT-LT code was used

  13. Overview of methods for uncertainty analysis and sensitivity analysis in probabilistic risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.; Helton, J.C.

    1985-01-01

    Probabilistic Risk Assessment (PRA) is playing an increasingly important role in the nuclear reactor regulatory process. The assessment of uncertainties associated with PRA results is widely recognized as an important part of the analysis process. One of the major criticisms of the Reactor Safety Study was that its representation of uncertainty was inadequate. The desire for the capability to treat uncertainties with the MELCOR risk code being developed at Sandia National Laboratories is indicative of the current interest in this topic. However, as yet, uncertainty analysis and sensitivity analysis in the context of PRA constitute a relatively immature field. In this paper, available methods for uncertainty analysis and sensitivity analysis in a PRA are reviewed. This review first treats methods for use with individual components of a PRA and then considers how these methods could be combined in the performance of a complete PRA. In the context of this paper, the goal of uncertainty analysis is to measure the imprecision in PRA outcomes of interest, and the goal of sensitivity analysis is to identify the major contributors to this imprecision. There are a number of areas that must be considered in uncertainty analysis and sensitivity analysis for a PRA: (1) information, (2) systems analysis, (3) thermal-hydraulic phenomena/fission product behavior, (4) health and economic consequences, and (5) display of results. Each of these areas and the synthesis of them into a complete PRA are discussed

  14. Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...

  15. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

    Full Text Available As a new formulation in structural analysis, the Integrated Force Method has been successfully applied to many structures in civil, mechanical, and aerospace engineering due to its accurate estimation of forces in computation. It is now being further extended to the probabilistic domain. For the assessment of uncertainty effects in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A stochastic sensitivity analysis formulation of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to the existing program since the models of stochastic finite element and stochastic design sensitivity are almost identical.

  16. Carbon dioxide capture processes: Simulation, design and sensitivity analysis

    DEFF Research Database (Denmark)

    Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul

    2012-01-01

    equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to change in the design variables including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis on the steady state...... performance of the process to the L/G ratio to the absorber, CO2 lean solvent loadings, and striper pressure are presented in this paper. Based on the sensitivity analysis process optimization problems have been defined and solved and, a preliminary control structure selection has been made.......Carbon dioxide is the main greenhouse gas and its major source is combustion of fossil fuels for power generation. The objective of this study is to carry out the steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First...

  17. A sensitive flow-batch system for on board determination of ultra-trace ammonium in seawater: Method development and shipboard application.

    Science.gov (United States)

    Zhu, Yong; Yuan, Dongxing; Huang, Yongming; Ma, Jian; Feng, Sichao

    2013-09-10

    Combining fluorescence detection with flow analysis and solid phase extraction (SPE), a highly sensitive and automatic flow system for the measurement of ultra-trace ammonium in open ocean water was established. Determination was based on fluorescence detection of a typical product of o-phthaldialdehyde and ammonium. In this study, the fluorescent reaction product could be efficiently extracted onto an SPE cartridge (HLB, hydrophilic-lipophilic balance). The extracted fluorescent compounds were rapidly eluted with ethanol and directed into a flow cell for fluorescence detection. Compared with the commonly used fluorescence method, the proposed one offered the benefits of improved sensitivity, reduced reagent consumption, negligible salinity effect and lower cost. Experimental parameters were optimized using a univariate experimental design. Calibration curves, ranging from 1.67 to 300 nM, were obtained with different reaction times. The recoveries were between 89.5 and 96.5%, and the detection limits in the land-based and shipboard laboratories were 0.7 and 1.2 nM, respectively. The relative standard deviation was 3.5% (n=5) for an aged seawater sample spiked with 20 nM ammonium. Compared with the analytical results obtained using the indophenol blue method coupled to a long-path liquid waveguide capillary cell, the proposed method showed good agreement. The method was applied on board during a South China Sea cruise in August 2012. A vertical profile of ammonium at the South East Asia Time-Series (SEATS, 18° N, 116° E) station was produced. The distribution of ammonium in the surface seawater of the Qiongdong upwelling in the South China Sea is also presented.
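
    As a side note on the analytical figures quoted above, a linear fluorescence calibration and a 3-sigma detection limit are commonly evaluated as in the short Python sketch below; the standard concentrations, signals and blank replicates are made up for illustration and are not the paper's data.

        import numpy as np

        # Hypothetical calibration data: ammonium standards (nM) vs. fluorescence signal.
        conc = np.array([0.0, 1.67, 5.0, 20.0, 50.0, 100.0, 300.0])   # nM
        signal = np.array([0.8, 1.1, 1.7, 4.6, 10.2, 19.8, 58.9])     # arbitrary units

        # Ordinary least-squares calibration line: signal = slope * conc + intercept.
        slope, intercept = np.polyfit(conc, signal, 1)

        # Detection limit from the common 3*sigma convention, using replicate blank
        # measurements (hypothetical values).
        blank_replicates = np.array([0.79, 0.82, 0.80, 0.78, 0.81])
        lod = 3.0 * blank_replicates.std(ddof=1) / slope
        print(f"slope = {slope:.3f} a.u./nM, LOD = {lod:.2f} nM")

        # Quantifying an unknown sample from its measured signal.
        sample_signal = 4.0
        print(f"sample concentration = {(sample_signal - intercept) / slope:.1f} nM")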

  18. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
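
    The screening step (step 2 above) is often carried out with a one-at-a-time elementary-effects design in the spirit of Morris. The Python sketch below is a generic illustration on a toy three-parameter model with assumed unit-cube input ranges; it is not the PSUADE implementation.

        import numpy as np

        rng = np.random.default_rng(1)

        def toy_model(x):
            # Hypothetical response: x[0] dominates, x[2] is inert.
            return x[0] ** 2 + 0.3 * x[1] + 0.5 * x[0] * x[1]

        k, r, delta = 3, 50, 0.1          # inputs, replicates, step size (unit cube)
        effects = np.zeros((r, k))
        for i in range(r):
            base = rng.uniform(0.0, 1.0 - delta, size=k)
            y0 = toy_model(base)
            for j in range(k):
                pert = base.copy()
                pert[j] += delta
                effects[i, j] = (toy_model(pert) - y0) / delta   # elementary effect

        mu_star = np.abs(effects).mean(axis=0)   # mean absolute effect (importance)
        sigma = effects.std(axis=0, ddof=1)      # spread (nonlinearity/interactions)
        for j in range(k):
            print(f"x{j + 1}: mu* = {mu_star[j]:.3f}, sigma = {sigma[j]:.3f}")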

  19. Channel flow analysis. [velocity distribution throughout blade flow field

    Science.gov (United States)

    Katsanis, T.

    1973-01-01

    The design of a proper blade profile requires calculation of the blade row flow field in order to determine the velocities on the blade surfaces. The theory behind several methods used for this calculation is presented, and the associated computer programs that were developed are discussed.

  20. Sensitivity analysis techniques applied to a system of hyperbolic conservation laws

    International Nuclear Information System (INIS)

    Weirs, V. Gregory; Kamm, James R.; Swiler, Laura P.; Tarantola, Stefano; Ratto, Marco; Adams, Brian M.; Rider, William J.; Eldred, Michael S.

    2012-01-01

    Sensitivity analysis comprises techniques to quantify the effects of the input variables on a set of outputs. In particular, sensitivity indices can be used to infer which input parameters most significantly affect the results of a computational model. With continually increasing computing power, sensitivity analysis has become an important technique by which to understand the behavior of large-scale computer simulations. Many sensitivity analysis methods rely on sampling from distributions of the inputs. Such sampling-based methods can be computationally expensive, requiring many evaluations of the simulation; in this case, the Sobol' method provides an easy and accurate way to compute variance-based measures, provided a sufficient number of model evaluations are available. As an alternative, meta-modeling approaches have been devised to approximate the response surface and estimate various measures of sensitivity. In this work, we consider a variety of sensitivity analysis methods, including different sampling strategies, different meta-models, and different ways of evaluating variance-based sensitivity indices. The problem we consider is the 1-D Riemann problem. By a careful choice of inputs, discontinuous solutions are obtained, leading to discontinuous response surfaces; such surfaces can be particularly problematic for meta-modeling approaches. The goal of this study is to compare the estimated sensitivity indices with exact values and to evaluate the convergence of these estimates with increasing sample sizes and under an increasing number of meta-model evaluations. - Highlights: ► Sensitivity analysis techniques for a model shock physics problem are compared. ► The model problem and the sensitivity analysis problem have exact solutions. ► Subtle details of the method for computing sensitivity indices can affect the results.
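
    For reference, the sampling-based Sobol' computation mentioned above can be sketched in a few lines of Python with the pick-and-freeze (Saltelli-type) estimators; the Ishigami test function stands in for the model, so this only illustrates the estimator, not the paper's Riemann-problem study.

        import numpy as np

        rng = np.random.default_rng(2)

        def ishigami(x, a=7.0, b=0.1):
            return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
                    + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

        n, k = 50_000, 3
        A = rng.uniform(-np.pi, np.pi, size=(n, k))
        B = rng.uniform(-np.pi, np.pi, size=(n, k))
        yA, yB = ishigami(A), ishigami(B)
        var_y = np.var(np.concatenate([yA, yB]))

        for i in range(k):
            ABi = A.copy()
            ABi[:, i] = B[:, i]            # "pick and freeze": column i taken from B
            yABi = ishigami(ABi)
            S1 = np.mean(yB * (yABi - yA)) / var_y        # first-order index
            ST = 0.5 * np.mean((yA - yABi) ** 2) / var_y  # total-effect index
            print(f"x{i + 1}: S1 = {S1:.3f}, ST = {ST:.3f}")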

  1. Nanoparticle separation with a miniaturized asymmetrical flow field-flow fractionation cartridge

    Science.gov (United States)

    Müller, David; Cattaneo, Stefano; Meier, Florian; Welz, Roland; deMello, Andrew

    2015-07-01

    Asymmetrical Flow Field-Flow Fractionation (AF4) is a separation technique applicable to particles over a wide size range. Despite the many advantages of AF4, its adoption in routine particle analysis is somewhat limited by the large footprint of currently available separation cartridges, extended analysis times and significant solvent consumption. To address these issues, we describe the fabrication and characterization of miniaturized AF4 cartridges. Key features of the scale-down platform include simplified cartridge and reagent handling, reduced analysis costs and higher throughput capacities. The separation performance of the miniaturized cartridge is assessed using certified gold and silver nanoparticle standards. Analysis of gold nanoparticle populations indicates shorter analysis times and increased sensitivity compared to conventional AF4 separation schemes. Moreover, nanoparticulate titanium dioxide populations exhibiting broad size distributions are analyzed in a rapid and efficient manner. Finally, the repeatability and reproducibility of the miniaturized platform are investigated with respect to analysis time and separation efficiency.

  2. Evaluating sensitivity of unsaturated soil properties

    International Nuclear Information System (INIS)

    Abdel-Rahman, R.O.; El-Kamash, A.M.; Nagy, M.E.; Khalill, M.Y.

    2005-01-01

    The assessment of near surface disposal performance relies on numerical models of groundwater flow and contaminant transport. These models use the unsaturated soil properties as input parameters, which are subject to uncertainty due to measurement errors and the spatial variability in the subsurface environment. To ascertain how much the output of the model depends on the unsaturated soil properties, parametric sensitivity analysis is used. In this paper, a parametric sensitivity analysis of the Van Genuchten moisture retention characteristic (VGMRC) model is presented and conducted to evaluate the relative importance of the unsaturated soil properties under different pressure head values that represent various dry and wet conditions. (author)

  3. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    Science.gov (United States)

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  4. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    KAUST Repository

    Navarro, María

    2016-12-26

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  5. Information flow analysis of interactome networks.

    Directory of Open Access Journals (Sweden)

    Patrycja Vasilyev Missiuro

    2009-04-01

    Full Text Available Recent studies of cellular networks have revealed modular organizations of genes and proteins. For example, in interactome networks, a module refers to a group of interacting proteins that form molecular complexes and/or biochemical pathways and together mediate a biological process. However, it is still poorly understood how biological information is transmitted between different modules. We have developed information flow analysis, a new computational approach that identifies proteins central to the transmission of biological information throughout the network. In the information flow analysis, we represent an interactome network as an electrical circuit, where interactions are modeled as resistors and proteins as interconnecting junctions. Construing the propagation of biological signals as flow of electrical current, our method calculates an information flow score for every protein. Unlike previous metrics of network centrality such as degree or betweenness that only consider topological features, our approach incorporates confidence scores of protein-protein interactions and automatically considers all possible paths in a network when evaluating the importance of each protein. We apply our method to the interactome networks of Saccharomyces cerevisiae and Caenorhabditis elegans. We find that the likelihood of observing lethality and pleiotropy when a protein is eliminated is positively correlated with the protein's information flow score. Even among proteins of low degree or low betweenness, high information scores serve as a strong predictor of loss-of-function lethality or pleiotropy. The correlation between information flow scores and phenotypes supports our hypothesis that the proteins of high information flow reside in central positions in interactome networks. We also show that the ranks of information flow scores are more consistent than that of betweenness when a large amount of noisy data is added to an interactome. Finally, we
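
    The circuit analogy described above can be sketched with a weighted graph Laplacian: interaction confidences act as conductances, node potentials for a unit current injected between a source and a sink follow from one linear solve, and the current passing through each protein is accumulated over source/sink pairs. The Python toy below uses a small hypothetical network and is only meant to illustrate the idea, not the authors' exact scoring.

        import numpy as np

        # Hypothetical interactome: edges with confidence scores used as conductances.
        edges = {(0, 1): 0.9, (1, 2): 0.8, (1, 3): 0.5, (2, 4): 0.7, (3, 4): 0.6}
        n = 5

        # Weighted graph Laplacian L = D - W.
        W = np.zeros((n, n))
        for (i, j), c in edges.items():
            W[i, j] = W[j, i] = c
        L = np.diag(W.sum(axis=1)) - W

        def node_currents(source, sink):
            # Unit current injected at `source`, withdrawn at `sink`; sink is grounded.
            b = np.zeros(n)
            b[source], b[sink] = 1.0, -1.0
            keep = [k for k in range(n) if k != sink]
            v = np.zeros(n)
            v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])
            # Current through a node = half the sum of |current| on its incident edges.
            through = np.zeros(n)
            for (i, j), c in edges.items():
                I = c * abs(v[i] - v[j])
                through[i] += 0.5 * I
                through[j] += 0.5 * I
            return through

        # Accumulate a crude "information flow" score over all source/sink pairs.
        score = np.zeros(n)
        for s in range(n):
            for t in range(s + 1, n):
                score += node_currents(s, t)
        print(np.round(score, 2))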

  6. Quantitative analysis of arterial flow properties for detection of non-calcified plaques in ECG-gated coronary CT angiography

    Science.gov (United States)

    Wei, Jun; Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Agarwal, Prachi; Kuriakose, Jean; Hadjiiski, Lubomir; Patel, Smita; Kazerooni, Ella

    2015-03-01

    We are developing a computer-aided detection system to assist radiologists in detection of non-calcified plaques (NCPs) in coronary CT angiograms (cCTA). In this study, we performed quantitative analysis of arterial flow properties in each vessel branch and extracted flow information to differentiate the presence and absence of stenosis in a vessel segment. Under rest conditions, blood flow in a single vessel branch was assumed to follow Poiseuille's law. For a uniform pressure distribution, two quantitative flow features, the normalized arterial compliance per unit length (Cu) and the normalized volumetric flow (Q) along the vessel centerline, were calculated based on the parabolic Poiseuille solution. The flow features were evaluated for a two-class classification task to differentiate NCP candidates obtained by prescreening as true NCPs and false positives (FPs) in cCTA. For evaluation, a data set of 83 cCTA scans was retrospectively collected from 83 patient files with IRB approval. A total of 118 NCPs were identified by experienced cardiothoracic radiologists. The correlation between the two flow features was 0.32. The discriminatory ability of the flow features evaluated as the area under the ROC curve (AUC) was 0.65 for Cu and 0.63 for Q in comparison with AUCs of 0.56-0.69 from our previous luminal features. With stepwise LDA feature selection, volumetric flow (Q) was selected in addition to three other luminal features. With FROC analysis, the test results indicated a reduction of the FP rates to 3.14, 1.98, and 1.32 FPs/scan at sensitivities of 90%, 80%, and 70%, respectively. The study indicated that quantitative blood flow analysis has the potential to provide useful features for the detection of NCPs in cCTA.
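
    Under the stated Poiseuille assumption, the volumetric flow through a segment of radius r and length l under a pressure drop dp is Q = pi * r**4 * dp / (8 * mu * l), so the local hydraulic resistance scales as r**-4. The Python sketch below evaluates such a flow estimate for a chain of segments with made-up radii and an assumed pressure drop; it only illustrates the kind of flow quantity described, not the paper's normalized features Cu and Q.

        import numpy as np

        mu = 3.5e-3        # blood dynamic viscosity, Pa*s (typical literature value)
        dp_total = 50.0    # assumed pressure drop over the branch, Pa
        radii = np.array([2.0, 1.8, 1.2, 1.7, 1.6]) * 1e-3   # made-up lumen radii, m
        seg_len = 5e-3 * np.ones_like(radii)                 # segment lengths, m

        # Segments in series: Poiseuille resistance R = 8*mu*l / (pi*r^4) per segment;
        # the branch flow follows from the total resistance.
        R = 8.0 * mu * seg_len / (np.pi * radii ** 4)
        Q = dp_total / R.sum()            # m^3/s, uniform along the branch
        print(f"volumetric flow = {Q * 1e6 * 60:.1f} mL/min")

        # A narrowed segment (e.g. a plaque) raises its local resistance sharply (~r^-4),
        # which is what makes flow-derived quantities sensitive to stenosis.
        print(np.round(R / R.sum(), 3))   # fraction of total resistance per segment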

  7. Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.

    Science.gov (United States)

    Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun

    2017-12-01

    Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate the allergen sensitization characteristics according to gender. The multiple allergen simultaneous test (MAST) is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters. Each cluster had characteristic features. Compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, reflecting allergen similarity or co-exposure. Only the fungus cluster allergens tended to sensitize the female group more frequently than the male group.

  8. Analysis of the brazilian scientific production about information flows

    Directory of Open Access Journals (Sweden)

    Danielly Oliveira Inomata

    2015-07-01

    Full Text Available Objective. This paper presents and discusses the concepts, contexts and applications involving information flows in organizations. Method. Systematic review, followed by a bibliometric analysis and a systemic analysis. The systematic review aimed to search for, evaluate and review evidence about the research topic. The systematic review process comprised the following steps: (1) definition of keywords, (2) systematic review, (3) exploration and analysis of articles and (4) comparison and consolidation of results. Results. The bibliometric analysis assessed the relevance of the articles, identifying the authors, dates of publication, citation indices, and the journal keywords with the highest occurrence. Conclusions. The survey results confirm the emphasis on information within the knowledge management process and, in more recent years, an emphasis on networks; that is, studies are turning to the operationalization and analysis of information flows in networks. The literature produced demonstrates the relationship of information flow with its management, applied to different organizational contexts, and points to new trends in information science such as the study and analysis of information flows in networks.

  9. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  10. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
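
    The contrast drawn above can be reproduced numerically: for a model Y = f(X1, X2) with independent inputs, the first-order law of propagation of uncertainties ranks inputs by (df/dxi)^2 * u^2(xi), whereas variance-based first-order indices come from conditional variances, and the two rankings can differ when f is non-linear. The Python sketch below uses an arbitrary model and uncertainties, not one of the article's examples.

        import numpy as np

        rng = np.random.default_rng(3)

        # Arbitrary non-linear measurement model of two independent inputs.
        def f(x1, x2):
            return x1 * np.exp(0.5 * x2) + x2 ** 2

        mu1, u1 = 1.0, 0.10   # best estimate and standard uncertainty of X1
        mu2, u2 = 0.5, 0.20   # best estimate and standard uncertainty of X2

        # First-order GUM propagation: u_c^2 = (df/dx1)^2 u1^2 + (df/dx2)^2 u2^2.
        eps = 1e-6
        d1 = (f(mu1 + eps, mu2) - f(mu1 - eps, mu2)) / (2 * eps)
        d2 = (f(mu1, mu2 + eps) - f(mu1, mu2 - eps)) / (2 * eps)
        contrib = np.array([(d1 * u1) ** 2, (d2 * u2) ** 2])
        print("GUM variance shares:", np.round(contrib / contrib.sum(), 3))

        # Variance-based first-order indices S_i = Var(E[Y|X_i]) / Var(Y) (double-loop MC).
        n_out, n_in = 2000, 2000
        x1o = rng.normal(mu1, u1, n_out)
        x2o = rng.normal(mu2, u2, n_out)
        E_y_x1 = np.array([f(v, rng.normal(mu2, u2, n_in)).mean() for v in x1o])
        E_y_x2 = np.array([f(rng.normal(mu1, u1, n_in), v).mean() for v in x2o])
        var_y = f(rng.normal(mu1, u1, 200_000), rng.normal(mu2, u2, 200_000)).var()
        print("S1 =", round(E_y_x1.var() / var_y, 3), " S2 =", round(E_y_x2.var() / var_y, 3))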

  11. Control Flow Analysis for BioAmbients

    DEFF Research Database (Denmark)

    Nielson, Flemming; Nielson, Hanne Riis; Priami, C.

    2007-01-01

    This paper presents a static analysis for investigating properties of biological systems specified in BioAmbients. We exploit the control flow analysis to decode the bindings of variables induced by communications and to build a relation of the ambients that can interact with each other. We...

  12. Rethinking Sensitivity Analysis of Nuclear Simulations with Topology

    Energy Technology Data Exchange (ETDEWEB)

    Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci

    2016-01-01

    In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.

  13. Recurrence network analysis of experimental signals from bubbly oil-in-water flows

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Zhong-Ke; Zhang, Xin-Wang; Du, Meng [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Jin, Ning-De, E-mail: ndjin@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2013-02-04

    Based on the signals from oil–water two-phase flow experiment, we construct and analyze recurrence networks to characterize the dynamic behavior of different flow patterns. We first take a chaotic time series as an example to demonstrate that the local property of recurrence network allows characterizing chaotic dynamics. Then we construct recurrence networks for different oil-in-water flow patterns and investigate the local property of each constructed network, respectively. The results indicate that the local topological statistic of recurrence network is very sensitive to the transitions of flow patterns and allows uncovering the dynamic flow behavior associated with chaotic unstable periodic orbits.
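
    A recurrence network of the kind used above can be built directly from a (possibly time-delay embedded) series: the recurrence matrix R_ij = 1 if ||x_i - x_j|| <= eps is reinterpreted as the adjacency matrix of an undirected network, from which local measures such as degree and clustering are computed. The Python sketch below uses the logistic map as a stand-in signal; the flow measurements and the specific local statistic of the paper are not reproduced.

        import numpy as np

        # Stand-in chaotic signal (logistic map) instead of the flow measurements.
        N = 600
        x = np.empty(N)
        x[0] = 0.4
        for i in range(N - 1):
            x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

        # Time-delay embedding into phase-space vectors.
        m, tau = 3, 1
        M = N - (m - 1) * tau
        X = np.column_stack([x[i * tau: i * tau + M] for i in range(m)])

        # Recurrence matrix with threshold eps -> adjacency matrix (no self-loops).
        dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        eps = 0.1 * dist.max()
        A = (dist <= eps).astype(int)
        np.fill_diagonal(A, 0)

        # Simple local network properties: degree and local clustering coefficient.
        deg = A.sum(axis=1)
        closed_walks = np.einsum('ij,jk,ki->i', A, A, A)   # = 2 x triangles at each node
        clustering = np.divide(closed_walks, deg * (deg - 1),
                               out=np.zeros(M), where=deg > 1)
        print(f"mean degree = {deg.mean():.1f}, mean local clustering = {clustering.mean():.3f}")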

  14. Recurrence network analysis of experimental signals from bubbly oil-in-water flows

    International Nuclear Information System (INIS)

    Gao, Zhong-Ke; Zhang, Xin-Wang; Du, Meng; Jin, Ning-De

    2013-01-01

    Based on the signals from oil–water two-phase flow experiment, we construct and analyze recurrence networks to characterize the dynamic behavior of different flow patterns. We first take a chaotic time series as an example to demonstrate that the local property of recurrence network allows characterizing chaotic dynamics. Then we construct recurrence networks for different oil-in-water flow patterns and investigate the local property of each constructed network, respectively. The results indicate that the local topological statistic of recurrence network is very sensitive to the transitions of flow patterns and allows uncovering the dynamic flow behavior associated with chaotic unstable periodic orbits.

  15. Mean streamline analysis for performance prediction of cross-flow fans

    International Nuclear Information System (INIS)

    Kim, Jae Won; Oh, Hyoung Woo

    2004-01-01

    This paper presents a mean streamline analysis using empirical loss correlations for the performance prediction of cross-flow fans. A comparison of overall performance predictions with test data for a cross-flow fan system with a simplified vortex wall scroll casing, and with published experimental characteristics for a cross-flow fan, has been carried out to demonstrate the accuracy of the proposed method. Performance curves predicted by the present mean streamline analysis agree well with experimental data for two different cross-flow fans over the normal operating conditions. The prediction method presented herein can be used efficiently as a tool for the preliminary design and performance analysis of general-purpose cross-flow fans

  16. Effects of flow rate on crack growth in sensitized type 304 stainless steel in high-temperature aqueous solutions

    International Nuclear Information System (INIS)

    Kwon, H.S.; Wuensche, A.; Macdonald, D.D.

    2000-01-01

    Intergranular stress corrosion cracking (IGSCC) in weld-sensitized Type 304 (UNS S30400) stainless steel (SS) remains a major threat to the integrity of heat transport circuits (HTC) in boiling water reactors (BWR), in spite of extensive research over the last 30 years. The effects of flow rate on intergranular crack growth in sensitized Type 304 stainless steel (UNS S30400) in distilled water containing 15 ppm or 25 ppm (2.59 x 10⁻⁴ or 4.31 x 10⁻⁴ m) sodium chloride (NaCl) at 250 °C were examined using compact tension (CT) specimens under constant loading conditions. On increasing the flow rate, the crack growth rate (CGR) drastically increased, but later decreased to a level that was lower than the initial value. The initial increase in CGR was attributed to an enhanced rate of mass transfer of oxygen to the external surface, where it consumed the current emanating from the crack mouth. However, the subsequent decrease in CGR was attributed to crack flushing, which is a delayed process because of the time required to destroy the aggressive conditions that exist within the crack. Once flushing destroyed the aggressive crack environment, CGR decreased with increasing flow rate. The time over which CGR increased after an increase in the flow rate depended on how fast crack flushing occurred by fluid flow; the higher the flow rate and the greater the crack opening, the faster the crack flushing and the shorter the transition time. Finally, intergranular cracks propagated faster in regions nearer both sides of the CT specimens, where the oxygen supply to the external surface was enhanced under stirring conditions and where minimal resistance existed to current flow from the crack tip to the external surfaces. This observation provided evidence that the crack's internal and external environments were coupled electrochemically

  17. Systemization of burnup sensitivity analysis code. 2

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2005-02-01

    Towards the practical use of fast reactors, improving the prediction accuracy for the neutronic properties of LMFBR cores is a very important subject, both for improving plant efficiency with rationally high-performance cores and for improving reliability and safety margins. A distinct improvement in accuracy in nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. The analysis of burnup sensitivity is needed to effectively use burnup characteristics data from the actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence becomes inefficient because of the heavy burden placed on users by the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, for the following reasons: the computational sequence may be changed for each item being analyzed or for purposes such as the interpretation of physical meaning. Therefore, the current code for burnup sensitivity analysis needs to be systemized with component blocks of functionality that can be divided or constructed on occasion. For

  18. Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.

    Science.gov (United States)

    Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E

    2013-12-01

    Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.

  19. Visual Analysis of Inclusion Dynamics in Two-Phase Flow.

    Science.gov (United States)

    Karch, Grzegorz Karol; Beck, Fabian; Ertl, Moritz; Meister, Christian; Schulte, Kathrin; Weigand, Bernhard; Ertl, Thomas; Sadlo, Filip

    2018-05-01

    In single-phase flow visualization, research focuses on the analysis of vector field properties. In two-phase flow, in contrast, analysis of the phase components is typically of major interest. So far, visualization research of two-phase flow concentrated on proper interface reconstruction and the analysis thereof. In this paper, we present a novel visualization technique that enables the investigation of complex two-phase flow phenomena with respect to the physics of breakup and coalescence of inclusions. On the one hand, we adapt dimensionless quantities for a localized analysis of phase instability and breakup, and provide detailed inspection of breakup dynamics with emphasis on oscillation and its interplay with rotational motion. On the other hand, we present a parametric tightly linked space-time visualization approach for an effective interactive representation of the overall dynamics. We demonstrate the utility of our approach using several two-phase CFD datasets.

  20. Bistable flow spectral analysis. Repercussions on jet pumps

    International Nuclear Information System (INIS)

    Gavilan Moreno, C.J.

    2011-01-01

    Highlights: → The most important contribution of this paper is the spectral characterization of the bistable flow in a nuclear power plant. → The paper examines in depth the effect of the bistable flow on the jet pumps and the induced vibrations. → The identified frequencies are very close to the jet pump natural frequencies in the 3rd and 6th modes. - Abstract: There have been many attempts at characterizing and predicting bistable flow in boiling water reactors (BWRs). Nevertheless, in most cases the results have only produced models that analytically reproduce the phenomenon. Modeling has been forensic in all cases, while the capability of the models focuses on determining the exclusion areas on the recirculation flow map. The bistability process is known by its effects, given that there is no clear definition of its causal process. In the 1980s, Hitachi technicians managed to reproduce bistable flow in the laboratory by means of a pipe geometry similar to that found in recirculation loops. The result was that the low-flow pattern is formed by the appearance of a quasi-stationary, helicoidal vortex in the recirculation collector's branches. This vortex creates greater frictional losses than regions without vortices, at the same discharge pressure. Neither the behavior nor the dynamics of these vortices were characterized in that work. The aim of this paper is to characterize these vortices so as to obtain their characteristic frequencies and their subsequent effect on the jet pumps. The methodology used in this study is similar to the one used previously for analyzing bistable flow in tube arrays with cross flow. The method employed makes use of the power spectral density function. What differs is the field of application. We analyze Loop B, which developed a bistable flow, and compare the high- and low-flow situations. The same analysis is also carried out, at the same moments, on the loop that did not develop the bistable flow (Loop A)
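
    The spectral characterization referred to above rests on the power spectral density of the recorded flow signals. A generic Python sketch using Welch's method on a synthetic signal is given below; the sampling rate and the two narrow-band components are chosen arbitrarily and are not plant data.

        import numpy as np
        from scipy import signal

        fs = 50.0                               # assumed sampling rate, Hz
        t = np.arange(0, 600, 1 / fs)           # 10 minutes of synthetic data
        rng = np.random.default_rng(4)

        # Synthetic recirculation-flow signal: two narrow-band components
        # (stand-ins for vortex-related peaks) plus broadband noise.
        x = (0.5 * np.sin(2 * np.pi * 0.8 * t)
             + 0.3 * np.sin(2 * np.pi * 3.7 * t)
             + 0.8 * rng.standard_normal(t.size))

        # Welch power spectral density estimate and peak picking.
        f, Pxx = signal.welch(x, fs=fs, nperseg=4096, detrend='linear')
        peaks, _ = signal.find_peaks(Pxx, prominence=0.05)
        print("dominant frequencies (Hz):", np.round(f[peaks], 2))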

  1. Performance analysis of vortex based mixers for confined flows

    Science.gov (United States)

    Buschhagen, Timo

    The hybrid rocket is still sparsely employed within major space or defense projects due to its relatively poor combustion efficiency and low fuel grain regression rate. Although hybrid rockets can claim advantages in safety, environmental and performance aspects over established solid and liquid propellant systems, the boundary layer combustion process and the diffusion-based mixing within a hybrid rocket grain port leave the core flow unmixed and limit the system performance. One principle used to enhance the mixing of gaseous flows is to induce streamwise vorticity. The counter-rotating vortex pair (CVP) mixer utilizes this principle and introduces two vortices into a confined flow, generating a stirring motion in order to transport near-wall media towards the core and vice versa. Recent studies investigated the velocity field introduced by this type of swirler. The current work evaluates the mixing performance of the CVP concept, using an experimental setup to simulate an axial primary pipe flow with a radially entering secondary flow, in which the primary flow is altered by the CVP swirler unit. The resulting setup therefore emulates a hybrid rocket motor with a cylindrical single-port grain. In order to evaluate the mixing performance, the secondary flow concentration at the pipe assembly exit is measured, utilizing a pressure-sensitive paint based procedure.

  2. Multidimensional analysis of fluid flow in the loft cold leg blowdown pipe during a loss-of-coolant experiment

    International Nuclear Information System (INIS)

    Demmie, P.N.; Hofmann, K.R.

    1979-03-01

    A computer analysis of fluid flow in the Loss-of-Fluid Test (LOFT) cold leg blowdown pipe during a loss-of-coolant experiment (LOCE) was performed using the computer program K-FIX/MOD1. The purpose of this analysis was to evaluate the capability of K-FIX/MOD1 to calculate theoretical fluid quantity distributions in the blowdown pipe during a LOCE for possible application to the analysis of LOFT experimental data, the determination of mass flow, or the development of data reduction models. A rectangular section of a portion of the LOFT blowdown pipe containing measurement Station BL-1 was modeled using time-dependent boundary conditions. Fluid quantities were calculated during a simulation of the first 26 s of LOFT LOCE L1-4. Sensitivity studies were made to determine changes in void fractions and velocities resulting from specific changes in the inflow boundary conditions used for this simulation

  3. Low flow analysis of the lower Drava River

    International Nuclear Information System (INIS)

    Mijuskovic-Svetinovic, T; Maricic, S

    2008-01-01

    Understanding the regime and characteristics of low streamflows is of vital importance in several respects. It is essential for the effective planning, design, construction, maintenance, use and management of different water management systems and structures. In addition, frequent computation and assessment of low streamflow statistics are especially important when different aspects of water quality are considered. This paper presents the results of a stochastic analysis of the low flow of the River Drava at the gauging station Donji Miholjac [located at rkm 77+700]. Currently, almost all specialists apply the truncation method in low-flow analysis. Taking this into consideration, a low streamflow can be defined as a period when the analysed characteristic is equal to or lower than the truncation level of drought. The same method has been applied in this analysis. The calculation method applied takes into account all the essential components of the afore-mentioned process, such as the deficit, the duration and time of occurrence of low flows, the number of events, and the maximum deficit and maximum duration of low flows in the analysed period. Moreover, this paper determines computational values of deficits and of the duration of low flow for different return periods.
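
    Under the truncation-level definition above, low-flow events and their deficits can be extracted from a daily discharge series by a simple scan, as in the Python sketch below; the series and the threshold are synthetic and do not represent the Donji Miholjac record.

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic daily discharge series (m^3/s) and a chosen truncation level.
        days = np.arange(3 * 365)
        q = 900 + 250 * np.sin(2 * np.pi * days / 365) + 80 * rng.standard_normal(days.size)
        q_threshold = np.percentile(q, 10)      # e.g. a Q90-type truncation level

        # Identify low-flow events: runs of consecutive days with q <= threshold.
        below = q <= q_threshold
        events, start = [], None
        for day, flag in enumerate(below):
            if flag and start is None:
                start = day
            elif not flag and start is not None:
                events.append((start, day))     # [start, end) of one event
                start = None
        if start is not None:
            events.append((start, len(q)))

        durations = [e - s for s, e in events]
        deficits = [np.sum(q_threshold - q[s:e]) * 86400 for s, e in events]  # m^3
        print(f"events: {len(events)}, max duration: {max(durations)} d, "
              f"max deficit: {max(deficits):.2e} m^3")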

  4. Code development for analysis of MHD pressure drop reduction in a liquid metal blanket using insulation technique based on a fully developed flow model

    International Nuclear Information System (INIS)

    Smolentsev, Sergey; Morley, Neil; Abdou, Mohamed

    2005-01-01

    The paper presents details of a new numerical code for the analysis of fully developed MHD flow in a channel of a liquid metal blanket using various insulation techniques. The code has been specially designed for channels with a 'sandwich' structure of several materials with different physical properties. The code includes a finite-volume formulation, automatically generated Hartmann-number-sensitive meshes, and an effective convergence acceleration technique. Tests performed at Ha ∼ 10⁴ have shown very good accuracy. As an illustration, two blanket flows have been considered: Pb-17Li flow in a channel with a silicon carbide flow channel insert, and Li flow in a channel with an insulating coating

  5. Analysis of liver blood flow by dynamic hepatic scintigraphy

    International Nuclear Information System (INIS)

    Xie Tianhao; Jia Shiquan

    1996-01-01

    Liver blood flow was studied in 45 patients with solitary malignant liver cancer, 17 patients with multiple liver metastases, 18 patients with benign liver tumors and 20 control subjects by dynamic hepatic scintigraphy. The hepatic perfusion index (HPI) in control subjects, patients with malignant liver cancer and patients with benign tumors was 0.33 ± 0.069, 0.589 ± 0.084 and 0.384 ± 0.046, respectively, and the mesenteric fraction (MF) was 0.56 ± 0.054, 0.246 ± 0.064 and 0.524 ± 0.086, respectively. In conclusion, flow scintigraphy is a non-invasive, sensitive and repeatable method for the detection of liver tumors

  6. Application of perturbation methods for sensitivity analysis for nuclear power plant steam generators; Aplicacao da teoria de perturbacao a analise de sensibilidade em geradores de vapor de usinas nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Gurjao, Emir Candeia

    1996-02-01

    The differential and GPT (Generalized Perturbation Theory) formalisms of perturbation theory were applied in this work to a simplified U-tube steam generator model to perform sensitivity analysis. The adjoint and importance equations, with the corresponding expressions for the sensitivity coefficients, were derived for this steam generator model. The system was numerically solved in a Fortran program, called GEVADJ, in order to calculate the sensitivity coefficients. A transient loss of forced primary coolant in the nuclear power plant Angra-1 was used as an example case. The average and final values of the functionals, secondary pressure and enthalpy, were studied in relation to changes in the secondary feedwater flow, enthalpy and total volume of the secondary circuit. Absolute variations in the above functionals were calculated using the perturbative methods, considering the variations in the feedwater flow and total secondary volume. Comparison with the same variations obtained via the direct model showed in general good agreement, demonstrating the potential of perturbative methods for the sensitivity analysis of nuclear systems. (author) 22 refs., 7 figs., 8 tabs.

  7. Substance flow analysis in Finland - Four case studies on N and P flows

    Energy Technology Data Exchange (ETDEWEB)

    Antikainen, R.

    2007-07-01

    Nitrogen (N) and phosphorus (P) are essential elements for all living organisms. However, in excess, they contribute to such environmental problems as aquatic and terrestrial eutrophication (N, P), acidification (N), global warming (N), groundwater pollution (N), depletion of stratospheric ozone (N), formation of tropospheric ozone (N) and poor urban air quality (N). Globally, human action has multiplied the volume of N and P cycling since the onset of industrialization. The multiplication is a result of intensified agriculture, increased energy consumption and population growth. Industrial ecology (IE) is a discipline in which human interaction with ecosystems is investigated using a systems-analytical approach. The main idea behind IE is that industrial systems resemble ecosystems and, like them, can be described using material, energy and information flows and stocks. Industrial systems are dependent on the resources provided by the biosphere, and these two cannot be separated from each other. When studying substance flows, the aims of the research from the viewpoint of IE can be, for instance, to elucidate how the cycles of a certain substance could be made more closed and how the flows of that substance could be decreased per unit of production (= dematerialization). IE uses analytical research tools such as material and substance flow analysis (MFA, SFA), energy flow analysis (EFA), life cycle assessment (LCA) and material input per service unit (MIPS). In Finland, N and P have been studied widely in different ecosystems and environmental emissions. A holistic picture comparing different societal systems is, however, lacking. In this thesis, flows of N and P were examined in Finland using SFA in the following four subsystems: (I) forest industry and use of wood fuels, (II) food production and consumption, (III) energy, and (IV) municipal waste. A detailed analysis at the end of the 1990s was performed. Furthermore, historical

  8. Dynamic Resonance Sensitivity Analysis in Wind Farms

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei

    2017-01-01

    (PFs) are calculated by critical eigenvalue sensitivity analysis with respect to the entries of the MIMO matrix. The PF analysis locates the bus that most excites the resonances, which can be the best location to install passive or active filters to reduce the harmonic resonance problems. Time
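
    For context, the participation factor of state k in a critical eigenvalue is commonly computed as the product of the k-th entries of the associated right and left eigenvectors, which also equals the sensitivity of that eigenvalue to the k-th diagonal entry of the state matrix. The Python sketch below uses an arbitrary small matrix; the wind-farm MIMO model of the paper is not reproduced.

        import numpy as np

        # Arbitrary small state matrix standing in for the linearized system model.
        A = np.array([[-0.5,  2.0,  0.0],
                      [-2.0, -0.5,  1.0],
                      [ 0.0, -1.0, -0.2]])

        eigvals, right = np.linalg.eig(A)
        left = np.linalg.inv(right).T      # column k is the left eigenvector of mode k

        # Pick the least-damped ("critical") mode.
        k = int(np.argmax(eigvals.real))
        phi, psi = right[:, k], left[:, k]

        # Participation factors p_j = psi_j * phi_j; each also equals the eigenvalue
        # sensitivity d(lambda_k)/d(a_jj) to the corresponding diagonal entry of A.
        p = psi * phi
        print("critical eigenvalue:", np.round(eigvals[k], 3))
        print("participation factors:", np.round(np.abs(p), 3))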

  9. Random signal tomographical analysis of two-phase flow

    International Nuclear Information System (INIS)

    Han, P.; Wesser, U.

    1990-01-01

    This paper reports on radiation tomography, which is a useful tool for studying the internal structures of two-phase flow. However, general tomography analysis gives only time-averaged results, hence much information is lost. As a result, it is sometimes difficult to identify the flow regime; for example, the time-averaged picture does not significantly change as an annular flow develops from a slug flow. A two-phase flow diagnostic technique based on random signal tomographical analysis is developed. It extracts more information by studying the statistical variation of the measured signal with time. Local statistical parameters, including the mean value, variance, skewness, flatness, etc., are reconstructed from the information obtained by a general tomography technique. More important information is provided by the results. Not only can the void fraction be easily calculated, but the flow pattern can also be identified more objectively and more accurately. The experimental setup is introduced. It consisted of a two-phase flow loop, an X-ray system, a fan-like five-beam detector system and a signal acquisition and processing system. In the experiment, for both horizontal and vertical test sections (aluminum and steel tubes with Di/Do = 40/45 mm), different flow situations were realized by independently adjusting the air and water mass flows. Through a glass tube connected to the test section, some typical flow patterns were visualized and used for comparison with the reconstruction results
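
    The local statistical parameters mentioned above (mean, variance, skewness and flatness) are simply the first four moments of each beam's time signal; a generic Python computation on synthetic detector signals follows (the X-ray geometry and the reconstruction step are not reproduced).

        import numpy as np

        rng = np.random.default_rng(6)

        # Synthetic time signals for a fan of five beams (rows), e.g. attenuation vs. time.
        n_beams, n_samples = 5, 20_000
        signals = 0.5 + 0.1 * rng.standard_normal((n_beams, n_samples))
        signals[2] += 0.3 * (rng.random(n_samples) < 0.05)   # intermittent slugs on one beam

        def moments(x):
            mu = x.mean()
            var = x.var()
            skew = np.mean((x - mu) ** 3) / var ** 1.5   # third standardized moment
            flat = np.mean((x - mu) ** 4) / var ** 2     # fourth standardized moment
            return mu, var, skew, flat

        for b in range(n_beams):
            mu, var, skew, flat = moments(signals[b])
            print(f"beam {b}: mean={mu:.3f}, var={var:.4f}, "
                  f"skewness={skew:.2f}, flatness={flat:.2f}")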

  10. The EVEREST project: sensitivity analysis of geological disposal systems

    International Nuclear Information System (INIS)

    Marivoet, Jan; Wemaere, Isabelle; Escalier des Orres, Pierre; Baudoin, Patrick; Certes, Catherine; Levassor, Andre; Prij, Jan; Martens, Karl-Heinz; Roehlig, Klaus

    1997-01-01

    The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models and uncertainty in the considered scenarios. Deterministic as well as stochastic calculational approaches have been applied for the sensitivity analyses. For the analysis of the sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied
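
    The stochastic reference technique described above (Monte Carlo sampling followed by linear regression) can be sketched in a few lines. The sketch below uses a made-up three-parameter model and standardised regression coefficients as the importance measure; the parameter names, ranges and model are illustrative assumptions, not taken from EVEREST.

```python
# Minimal sketch of parameter sensitivity via Monte Carlo sampling followed by
# linear regression. The model function and parameter ranges are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def model(k_d, porosity, flow_rate):
    # Placeholder performance measure (e.g., a dose surrogate); not a real PA model.
    return flow_rate / (1.0 + k_d * porosity)

n = 1000
samples = np.column_stack([
    rng.uniform(0.1, 10.0, n),    # sorption coefficient k_d (assumed range)
    rng.uniform(0.05, 0.3, n),    # porosity (assumed range)
    rng.uniform(1e-3, 1e-1, n),   # flow rate (assumed range)
])
y = np.array([model(*row) for row in samples])

# Standardised regression coefficients: regress standardised output on
# standardised inputs; their magnitudes rank parameter importance.
X = (samples - samples.mean(0)) / samples.std(0)
Y = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(X, Y, rcond=None)
for name, coeff in zip(["k_d", "porosity", "flow_rate"], src):
    print(f"{name}: SRC = {coeff:+.3f}")
```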

  11. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  12. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, derived using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  13. Multivariate recurrence network analysis for characterizing horizontal oil-water two-phase flow.

    Science.gov (United States)

    Gao, Zhong-Ke; Zhang, Xin-Wang; Jin, Ning-De; Marwan, Norbert; Kurths, Jürgen

    2013-09-01

    Characterizing complex patterns arising from horizontal oil-water two-phase flows is a contemporary and challenging problem of paramount importance. We design a new multisector conductance sensor and systematically carry out horizontal oil-water two-phase flow experiments for measuring multivariate signals of different flow patterns. We then infer multivariate recurrence networks from these experimental data and investigate local cross-network properties for each constructed network. Our results demonstrate that a cross-clustering coefficient from a multivariate recurrence network is very sensitive to transitions among different flow patterns and recovers quantitative insights into the flow behavior underlying horizontal oil-water flows. These properties render multivariate recurrence networks particularly powerful for investigating a horizontal oil-water two-phase flow system and its complex interacting components from a network perspective.
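
    As a simplified illustration of the recurrence-network idea, the sketch below embeds a single synthetic signal, links states that are closer than a threshold, and computes a clustering coefficient with networkx. The paper works with multivariate signals and cross-network measures, which this univariate toy example does not reproduce; the embedding dimension, delay and threshold are arbitrary assumptions.

```python
# Toy recurrence network from one synthetic signal; not the multivariate
# cross-network analysis of the paper.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * rng.standard_normal(500)

# Phase-space embedding (assumed dimension 3, delay 2).
dim, tau = 3, 2
n = len(signal) - (dim - 1) * tau
states = np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])

# Recurrence matrix: two states are linked if closer than a threshold epsilon.
dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
eps = 0.1 * dists.max()
adjacency = (dists < eps) & ~np.eye(n, dtype=bool)

G = nx.from_numpy_array(adjacency.astype(int))
print("mean clustering coefficient:", nx.average_clustering(G))
```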

  14. Stereo Scene Flow for 3D Motion Analysis

    CERN Document Server

    Wedel, Andreas

    2011-01-01

    This book presents methods for estimating optical flow and scene flow motion with high accuracy, focusing on the practical application of these methods in camera-based driver assistance systems. Clearly and logically structured, the book builds from basic themes to more advanced concepts, culminating in the development of a novel, accurate and robust optic flow method. Features: reviews the major advances in motion estimation and motion analysis, and the latest progress of dense optical flow algorithms; investigates the use of residual images for optical flow; examines methods for deriving mot

  15. Water-Sensitivity Characteristics of Briquettes Made from High-Rank Coal

    Directory of Open Access Journals (Sweden)

    Geng Yunguang

    2016-01-01

    Full Text Available In order to study the water sensitivity characteristics of the coalbed methane (CBM) reservoir in the southern Qinshui Basin, scanning electron microscopy, mineral composition and water sensitivity tests were carried out on cores from the main coalbed, coalbed 3. Because CBM reservoirs in this area are characterized by low porosity and low permeability, the common core water sensitivity experiment cannot be used; instead, briquettes were chosen for the tests to analyze the water sensitivity of the CBM reservoirs. Results show that the degree of water sensitivity in the study area varies from weak to moderate. The controlling factors of water sensitivity are clay mineral content, the occurrence type of the clay minerals, permeability and liquid flow rate. The water sensitivity damage rate is positively correlated with clay mineral content and liquid flow rate, and is negatively correlated with core permeability. Water sensitivity damage in the CBM reservoir involves two mechanisms: static permeability decline caused by hydration and swelling of clay minerals, and dynamic permeability decline caused by dispersion/migration of clay minerals.

  16. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    Energy Technology Data Exchange (ETDEWEB)

    Blonigan, Patrick J., E-mail: blonigan@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    2014-08-01

    Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.

  17. Development of a miniaturized mass-flow meter for an axial flow blood pump based on computational analysis.

    Science.gov (United States)

    Kosaka, Ryo; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi

    2011-09-01

    In order to monitor the condition of patients with implantable left ventricular assist systems (LVAS), it is important to measure pump flow rate continuously and noninvasively. However, it is difficult to measure the pump flow rate, especially in an implantable axial flow blood pump, because the power consumption has neither linearity nor uniqueness with regard to the pump flow rate. In this study, a miniaturized mass-flow meter for discharged patients with an implantable axial blood pump was developed on the basis of computational analysis, and was evaluated in in-vitro tests. The mass-flow meter makes use of centrifugal force produced by the mass-flow rate around a curved cannula. An optimized design was investigated by use of computational fluid dynamics (CFD) analysis. On the basis of the computational analysis, a miniaturized mass-flow meter made of titanium alloy was developed. A strain gauge was adopted as a sensor element. The first strain gauge, attached to the curved area, measured both static pressure and centrifugal force. The second strain gauge, attached to the straight area, measured static pressure. By subtracting the output of the second strain gauge from the output of the first strain gauge, the mass-flow rate was determined. In in-vitro tests using a model circulation loop, the mass-flow meter was compared with a conventional flow meter. Measurement error was less than ±0.5 L/min and average time delay was 0.14 s. We confirmed that the miniaturized mass-flow meter could accurately measure the mass-flow rate continuously and noninvasively.

  18. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    Science.gov (United States)

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable, software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses in addition to its ability to handle systems with discontinuous events and intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
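
    One of the global measures listed for SBML-SAT, the partial rank correlation coefficient (PRCC), can be written compactly. The sketch below applies it to a made-up three-parameter model; SBML parsing and the tool's other analyses are not reproduced, and the model function is purely a placeholder.

```python
# Hedged sketch of a PRCC calculation on a toy model.
import numpy as np
from scipy.stats import rankdata, pearsonr

rng = np.random.default_rng(2)

def model(k1, k2, k3):
    # Toy steady-state output of a hypothetical reaction network.
    return k1 / (k2 + k3**2)

X = rng.uniform(0.1, 2.0, size=(500, 3))
y = np.array([model(*row) for row in X])

def prcc(X, y, j):
    """PRCC of parameter j: correlate rank residuals after removing the
    linear (rank) effect of all other parameters."""
    R = np.column_stack([rankdata(col) for col in X.T])
    ry = rankdata(y)
    others = np.delete(R, j, axis=1)
    A = np.column_stack([np.ones(len(ry)), others])
    res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return pearsonr(res_x, res_y)[0]

for j, name in enumerate(["k1", "k2", "k3"]):
    print(f"PRCC({name}) = {prcc(X, y, j):+.3f}")
```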

  19. The analysis of exergy and cash flow

    International Nuclear Information System (INIS)

    Weimin, H.

    1989-01-01

    The paper presents an analysis of the economic content of the exergy parameter and a thermodynamic analogy to cash flow analysis, and sets out reasonable foundations for the analysis of heat economy. Ideas for optimum design combining heat economic analysis and investment policy are also put forward

  20. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.

  1. Complex network analysis in inclined oil–water two-phase flow

    International Nuclear Information System (INIS)

    Zhong-Ke, Gao; Ning-De, Jin

    2009-01-01

    Complex networks have established themselves in recent years as being particularly suitable and flexible for representing and modelling many complex natural and artificial systems. Oil–water two-phase flow is one of the most complex systems. In this paper, we use complex networks to study the inclined oil–water two-phase flow. Two different complex network construction methods are proposed to build two types of networks, i.e. the flow pattern complex network (FPCN) and fluid dynamic complex network (FDCN). Through detecting the community structure of FPCN by the community-detection algorithm based on K-means clustering, useful and interesting results are found which can be used for identifying three inclined oil–water flow patterns. To investigate the dynamic characteristics of the inclined oil–water two-phase flow, we construct 48 FDCNs under different flow conditions, and find that the power-law exponent and the network information entropy, which are sensitive to the flow pattern transition, can both characterize the nonlinear dynamics of the inclined oil–water two-phase flow. In this paper, from a new perspective, we not only introduce a complex network theory into the study of the oil–water two-phase flow but also indicate that the complex network may be a powerful tool for exploring nonlinear time series in practice. (general)

  2. Flow boiling in microgap channels experiment, visualization and analysis

    CERN Document Server

    Alam, Tamanna; Jin, Li-Wen

    2013-01-01

    Flow Boiling in Microgap Channels: Experiment, Visualization and Analysis presents an up-to-date summary of the details of the confined to unconfined flow boiling transition criteria, flow boiling heat transfer and pressure drop characteristics, instability characteristics, two phase flow pattern and flow regime map and the parametric study of microgap dimension. Advantages of flow boiling in microgaps over microchannels are also highlighted. The objective of this Brief is to obtain a better fundamental understanding of the flow boiling processes, compare the performance between microgap and c

  3. Sensitivity enhancement of 13C nuclei in 2D J-resolved NMR spectroscopy using a recycled-flow system

    International Nuclear Information System (INIS)

    Ha, S.T.K.; Lee, R.W.K.; Wilkins, C.L.

    1987-01-01

    Recycled-flow nuclear magnetic resonance for sensitivity enhancement of spin-1/2 nuclei has been reported previously, achieving several-fold signal enhancement. The success of the method depends upon premagnetization of the nuclei prior to flowing into the detector region, obviating the need for delays following data acquisition to allow spin-lattice relaxation and thereby reducing experiment time. The actual gains in sensitivity for ¹³C-¹H 2D J-resolved NMR using a recycled-flow method are evaluated here. Possible enhancements for two types of J-resolved measurements, namely one-bond ¹³C-¹H and long-range J-resolved spectroscopy, are estimated using a simple Carr-Purcell spin-echo approach to quantify the ¹³C signals. The pulse sequence is simply 90°-t₁/2-180°-t₁/2-AT-t_d, where t₁/2 is half the evolution time, AT is the acquisition time, and t_d the experiment repetition time. In a static 2D NMR experiment, t_d usually must be of the same order as the longest spin-lattice relaxation time (T₁) of the nuclei. Quantitative measurements using a recycled-flow system indicate t_d can be reduced to a fraction of T₁; hence significant time savings can be achieved. Time savings by factors of between 2 and 25 can be anticipated for 2D spectroscopy under the flow measurement conditions used in the present study. Other types of 2D NMR spectroscopy (autocorrelation and double quantum NMR) are discussed

  4. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
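
    A minimal sketch of the univariate idea is to sample one uncertain MDP parameter from its distribution, re-solve the MDP for each draw, and report how often the base-case optimal policy remains optimal. The two-state, two-action MDP, the Beta distribution and all numbers below are invented for illustration and are not the authors' case study.

```python
# Probabilistic univariate sensitivity sketch for a tiny MDP.
import numpy as np

rng = np.random.default_rng(4)
GAMMA = 0.95
REWARD = np.array([[1.0, 0.0],   # reward[state, action]
                   [0.0, 2.0]])

def solve_mdp(p_stay):
    """Value iteration; p_stay = P(remain in state 0 | state 0, action 0)."""
    # transition[action, state, next_state]
    T = np.array([
        [[p_stay, 1 - p_stay], [0.3, 0.7]],   # action 0
        [[0.2, 0.8],           [0.6, 0.4]],   # action 1
    ])
    V = np.zeros(2)
    for _ in range(500):
        Q = REWARD.T + GAMMA * T @ V           # Q[action, state]
        V = Q.max(axis=0)
    return Q.argmax(axis=0)                    # optimal action per state

base_policy = solve_mdp(p_stay=0.8)
samples = rng.beta(8, 2, size=1000)            # assumed uncertainty in p_stay
agree = np.mean([np.array_equal(solve_mdp(p), base_policy) for p in samples])
print(f"P(optimal policy unchanged) = {agree:.2f}")
```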

  5. Sensitivity analysis of the reactor safety study. Final report

    International Nuclear Information System (INIS)

    Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.

    1979-01-01

    The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light water nuclear reactors. In order to give further insights into this study, a sensitivity analysis has been performed to determine the significant contributors to risk for both the PWR and BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were release category probabilities, core melt probability, and the risk parameters of early fatalities, latent cancers and total property damage. The latter three are adequate for describing all public risks identified in the RSS. The results indicate reductions of public risk by less than a factor of two for reductions in system or generic failure probabilities by factors as high as one hundred. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates

  6. Sensitivity analysis for contagion effects in social networks

    Science.gov (United States)

    VanderWeele, Tyler J.

    2014-01-01

    Analyses of social network data have suggested that obesity, smoking, happiness and loneliness all travel through social networks. Individuals exert “contagion effects” on one another through social ties and association. These analyses have come under critique because of the possibility that homophily from unmeasured factors may explain these statistical associations and because similar findings can be obtained when the same methodology is applied to height, acne and head-aches, for which the conclusion of contagion effects seems somewhat less plausible. We use sensitivity analysis techniques to assess the extent to which supposed contagion effects for obesity, smoking, happiness and loneliness might be explained away by homophily or confounding and the extent to which the critique using analysis of data on height, acne and head-aches is relevant. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so. Supposed effects for height, acne and head-aches are all easily explained away by latent homophily and confounding. The methodology that has been employed in past studies for contagion effects in social networks, when used in conjunction with sensitivity analysis, may prove useful in establishing social influence for various behaviors and states. The sensitivity analysis approach can be used to address the critique of latent homophily as a possible explanation of associations interpreted as contagion effects. PMID:25580037

  7. Sensitivity analysis of a Pelton hydropower station based on a novel approach of turbine torque

    International Nuclear Information System (INIS)

    Xu, Beibei; Yan, Donglin; Chen, Diyi; Gao, Xiang; Wu, Changzhi

    2017-01-01

    Highlights: • A novel approach to the turbine torque is proposed. • A unified model captures the dynamic characteristics of Pelton hydropower stations. • Sensitivity analyses with respect to hydraulic, mechanical and electric parameters are performed. • Numerical simulations show the sensitivity ranges of the above three parameter groups. - Abstract: Long-running operation of hydraulic turbine generator units may cause the values of hydraulic, mechanical or electric parameters to change gradually, which raises a new question, namely whether the operating stability of these units will change over the next thirty or forty years. This paper is an attempt to seek a relatively unified model for sensitivity analysis from three aspects: hydraulic parameters (turbine flow and turbine head), mechanical parameters (axis coordinates and axial misalignment) and electric parameters (generator speed and excitation current). First, a novel approach to the Pelton turbine torque is proposed, which links the hydraulic turbine governing system and the shafting system of the hydro-turbine generator unit. Moreover, the correctness of this approach is verified by comparison with three other hydropower station models. Second, the model is analyzed to obtain the sensitivity of the electric parameter (excitation current) and of the mechanical parameters (axial misalignment, upper guide bearing rigidity, lower guide bearing rigidity, and turbine guide bearing rigidity), together with the hydraulic parameters, on the operating stability of the unit. In addition, some critical values and ranges are proposed. Finally, these results can provide a basis for the design and stable operation of Pelton hydropower stations.

  8. Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model

    International Nuclear Information System (INIS)

    Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.

    2016-01-01

    Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.
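
    The main effect and total sensitivity indices mentioned above are Sobol-type measures. GEM-SA estimates them through a Gaussian-process emulator; the sketch below instead estimates the same indices by direct Saltelli sampling with the SALib package and a stand-in response function, since the actual control rod model is not available here. The parameter names, bounds and response are assumptions.

```python
# Brute-force Sobol indices with SALib on a placeholder response.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["spring_rate", "damping", "friction"],  # illustrative parameters only
    "bounds": [[0.5, 2.0], [0.01, 0.2], [0.0, 1.0]],
}

def rod_drop_time(x):
    # Placeholder response; the real control-rod model is not public here.
    k, c, mu = x
    return 1.0 / k + 5.0 * c + 0.3 * mu * c

X = saltelli.sample(problem, 1024)
Y = np.apply_along_axis(rod_drop_time, 1, X)
Si = sobol.analyze(problem, Y)
print("first-order indices:", Si["S1"])   # main effects
print("total-order indices:", Si["ST"])   # total sensitivity indices
```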

  9. Power flow analysis for DC voltage droop controlled DC microgrids

    DEFF Research Database (Denmark)

    Li, Chendan; Chaudhary, Sanjay; Dragicevic, Tomislav

    2014-01-01

    This paper proposes a new algorithm for power flow analysis in droop controlled DC microgrids. By considering the droop control in the power flow analysis for the DC microgrid, when compared with traditional methods, more accurate analysis results can be obtained. The algorithm verification is ca...
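
    A generic droop power-flow formulation for a small DC microgrid can illustrate why droop control must enter the power-flow equations. The sketch below solves a three-bus case (two droop-controlled sources, one constant-power load) with a standard nonlinear solver; it is not the algorithm proposed in the paper, and all numbers are invented.

```python
# Steady-state power flow for a 3-bus DC microgrid with two droop-controlled
# sources feeding one constant-power load. All parameters are made up.
import numpy as np
from scipy.optimize import fsolve

V_REF = 400.0          # no-load reference voltage of both sources [V]
K1, K2 = 0.005, 0.008  # droop gains [V/W]
G13, G23 = 10.0, 8.0   # line conductances bus1-bus3, bus2-bus3 [S]
P_LOAD = 2000.0        # constant-power load at bus 3 [W]

def residuals(x):
    v1, v2, v3, p1, p2 = x
    return [
        v1 - (V_REF - K1 * p1),              # droop characteristic, source 1
        v2 - (V_REF - K2 * p2),              # droop characteristic, source 2
        p1 - v1 * G13 * (v1 - v3),           # power injected by source 1
        p2 - v2 * G23 * (v2 - v3),           # power injected by source 2
        v3 * (G13 * (v1 - v3) + G23 * (v2 - v3)) - P_LOAD,  # load balance
    ]

v1, v2, v3, p1, p2 = fsolve(residuals, [V_REF, V_REF, V_REF, 0.0, 0.0])
print(f"V1={v1:.2f} V, V2={v2:.2f} V, V3={v3:.2f} V, P1={p1:.1f} W, P2={p2:.1f} W")
```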

  10. Deep Packet/Flow Analysis using GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Qian [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Wu, Wenji [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); DeMar, Phil [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)

    2017-11-12

    Deep packet inspection (DPI) faces severe performance challenges in high-speed networks (40/100 GE) as it requires a large amount of raw computing power and high I/O throughput. Recently, researchers have tentatively used GPUs to address these issues and boost the performance of DPI. Typically, DPI applications involve highly complex operations at both the per-packet and per-flow data level, often in real time. The parallel architecture of GPUs fits exceptionally well for per-packet network traffic processing. However, for stateful network protocols such as TCP, their data streams need to be reconstructed at a per-flow level to deliver a consistent content analysis. Since the flow-centric operations are naturally antiparallel and often require large memory space for buffering out-of-sequence packets, they can be problematic for GPUs, whose memory is normally limited to several gigabytes. In this work, we present a highly efficient GPU-based deep packet/flow analysis framework. The proposed design includes purely GPU-implemented flow tracking and TCP stream reassembly. Instead of buffering and waiting for TCP packets to become in sequence, our framework processes the packets in batches and uses a deterministic finite automaton (DFA) with a prefix-/suffix-tree method to detect patterns across out-of-sequence packets that happen to be located in different batches. Evaluation shows that our code can reassemble and forward tens of millions of packets per second and conduct stateful signature-based deep packet inspection at 55 Gbit/s using an NVIDIA K40 GPU.
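
    The core idea of carrying automaton state across packet boundaries can be shown on the CPU with a single-pattern DFA; the GPU batching and the prefix-/suffix-tree handling of out-of-sequence packets described above are not reproduced. The signature and flow identifiers below are made up.

```python
# CPU-side sketch: scan payloads with a KMP-style DFA while carrying the
# automaton state per flow, so a signature split across packets is still caught.
def build_dfa(pattern: bytes):
    """Classic KMP failure-function DFA for a single byte signature."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    def step(state: int, byte: int) -> int:
        while state and byte != pattern[state]:
            state = fail[state - 1]
        return state + 1 if byte == pattern[state] else state
    return step, len(pattern)

step, accept = build_dfa(b"MALWARE")
flow_state = {}  # per-flow DFA state carried between packets

def scan(flow_id, payload: bytes):
    state = flow_state.get(flow_id, 0)
    for b in payload:
        state = step(state, b)
        if state == accept:
            print(f"signature hit in flow {flow_id}")
            state = 0
    flow_state[flow_id] = state

scan("10.0.0.1:1234->10.0.0.2:80", b"...MALW")
scan("10.0.0.1:1234->10.0.0.2:80", b"ARE...")  # match spans two packets
```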

  11. An analysis of the flow stress of a two-phase alloy system, Ti-6Al-4V

    International Nuclear Information System (INIS)

    Reed-Hill, R.E.; Iswaran, C.V.; Kaufman, M.J.

    1996-01-01

    An analysis of the tensile deformation behavior of a two-phase body-centered cubic (bcc)-hexagonal close-packed (hcp) alloy, Ti-6Al-4V, has been made. This has shown that the temperature dependence of the flow stress, the logarithm of the effective stress, and the strain-rate sensitivities can be described by simple analytical equations if the thermally activated strain-rate equation contains the Yokobori activation enthalpy H = H₀ ln(σ*₀/σ*), where H₀ is a constant, σ* the effective stress, and σ*₀ its 0 K value. The flow stress-temperature plateau region (500 to 600 K) also can be rationalized analytically in terms of oxygen dynamic strain aging in the alpha phase
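
    A short worked form may clarify why these quantities follow simple analytical equations. Assuming the standard Arrhenius-type thermally activated strain-rate relation (the abstract names the equation but not its exact form, so this is an assumption), inserting the Yokobori enthalpy gives a power-law stress/strain-rate relation and a logarithm of the effective stress that is linear in temperature:

```latex
% Assumed Arrhenius-type thermally activated strain-rate relation:
%   \dot{\varepsilon} = \dot{\varepsilon}_0 \exp\!\left(-H/kT\right),
% with the Yokobori enthalpy H = H_0 \ln(\sigma^*_0/\sigma^*).
\dot{\varepsilon}
  = \dot{\varepsilon}_0 \left(\frac{\sigma^*}{\sigma^*_0}\right)^{H_0/(kT)}
\quad\Longrightarrow\quad
\sigma^* = \sigma^*_0 \left(\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)^{kT/H_0},
\qquad
\ln\sigma^* = \ln\sigma^*_0 + \frac{kT}{H_0}\,\ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}.
```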

  12. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  13. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  14. Sensitivity analysis of a low-level waste environmental transport code

    International Nuclear Information System (INIS)

    Hiromoto, G.

    1989-01-01

    Results are presented from a sensitivity analysis of a computer code designed to simulate the environmental transport of radionuclides buried at shallow land waste repositories. A sensitivity analysis methodology, based on response surface replacement and statistical sensitivity estimators, was developed to address the relative importance of the input parameters on the model output. The response surface replacement for the model was constructed by stepwise regression, after sampling input vectors from the ranges and distributions of the input variables and running the code to generate the associated output data. Sensitivity estimators were computed using the partial rank correlation coefficients and the standardized rank regression coefficients. The results showed that the techniques employed in this work provide a feasible means to perform a sensitivity analysis of general non-linear environmental radionuclide transport models. (author) [pt

  15. To Examine effect of Flow Zone Generation Techniques for Numerical Flow Analysis in Hydraulic Turbine

    International Nuclear Information System (INIS)

    Hussain, M.; Khan, J.A.

    2004-01-01

    A numerical study of the flow in the distributor of a Francis turbine is carried out using two different techniques of flow zone generation. The distributor of the GAMM Francis turbine is used for the present calculation. In the present work, the flow is assumed to be periodic around the distributor under steady-state conditions; therefore, the computational domain consists of only one blade channel (one stay vane and one guide vane). The distributor computational domain is bounded upstream by a cylindrical patch and downstream by a conical patch. The first corresponds to the spiral casing outflow section, while the second is considered to be the distributor outlet or runner inlet. The upper and lower surfaces are generated by the revolution of the hub and shroud edges. Single-connected and multiple-connected techniques are considered for generating the distributor flow zone for numerical flow analysis of the GAMM Francis turbine. Tetrahedral meshes are generated in both flow zones, and the same boundary conditions are applied to both equivalent flow zones. The three-dimensional laminar flow analysis for both distributor flow zones of the GAMM Francis turbine operating at the best efficiency point is performed. Gambit and G-Turbo are used as preprocessors, while calculations are done using Fluent. Finally, numerical results obtained at the distributor outlet are compared with the available experimental data to validate the two different methodologies and examine their accuracy. (author)

  16. Probabilistic and sensitivity analysis of Botlek Bridge structures

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2017-01-01

    Full Text Available This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge in the world. The bridge system consists of six reinforced concrete pylons and two steel decks, each weighing 4000 tons, connected through ropes to counterweights. The paper focuses on the probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results were of high importance for the practical application and design of the bridge. The model and resistance uncertainties were taken into account in the LHS simulation method.

  17. Cooperative Strategies for Maximum-Flow Problem in Uncertain Decentralized Systems Using Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Hadi Heidari Gharehbolagh

    2016-01-01

    Full Text Available This study investigates a multiowner maximum-flow network problem, which suffers from risky events. Uncertain conditions affect proper estimation, and ignoring them may mislead decision makers through overestimation. A key question is how self-governing owners in the network can cooperate with each other to maintain a reliable flow. The question is answered by providing a mathematical programming model based on applying the triangular reliability function in decentralized networks. The proposed method concentrates on multiowner networks that suffer from risky time, cost, and capacity parameters on each of the network's arcs. Some cooperative game methods, such as the τ-value, Shapley value, and core center, are presented to fairly distribute the extra profit of cooperation. A numerical example, including sensitivity analysis, and the results of comparisons are presented. Indeed, the proposed method provides more realism in decision-making for risky systems, leading to significant benefits in terms of realistic cost estimation when compared with ignoring uncertain effects.
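
    The profit-sharing step can be illustrated with a Shapley-value calculation for three cooperating owners. The coalition values below (the extra profit each coalition could secure) are assumed numbers, not outputs of the paper's reliability model.

```python
# Shapley values by averaging marginal contributions over all join orders.
from itertools import permutations

owners = ["A", "B", "C"]
value = {  # characteristic function v(S) for each coalition S (assumed numbers)
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 8, frozenset("C"): 6,
    frozenset("AB"): 25, frozenset("AC"): 20, frozenset("BC"): 18,
    frozenset("ABC"): 40,
}

shapley = {o: 0.0 for o in owners}
perms = list(permutations(owners))
for order in perms:
    coalition = set()
    for owner in order:
        before = value[frozenset(coalition)]
        coalition.add(owner)
        shapley[owner] += (value[frozenset(coalition)] - before) / len(perms)

print(shapley)  # average marginal contribution of each owner; sums to v(ABC)
```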

  18. Mechanistic multidimensional analysis of horizontal two-phase flows

    International Nuclear Information System (INIS)

    Tselishcheva, Elena A.; Antal, Steven P.; Podowski, Michael Z.

    2010-01-01

    The purpose of this paper is to discuss the results of analysis of two-phase flow in horizontal tubes. Two flow situations have been considered: gas/liquid flow in a long straight pipe, and similar flow conditions in a pipe with 90 deg. elbow. The theoretical approach utilizes a multifield modeling concept. A complete three-dimensional two-phase flow model has been implemented in a state-of-the-art computational multiphase fluid dynamics (CMFD) computer code, NPHASE. The overall model has been tested parametrically. Also, the results of NPHASE simulations have been compared against experimental data for a pipe with 90 deg. elbow.

  19. A critical comparison of constant and pulsed flow systems exploiting gas diffusion.

    Science.gov (United States)

    Silva, Claudineia Rodrigues; Henriquez, Camelia; Frizzarin, Rejane Mara; Zagatto, Elias Ayres Guidetti; Cerda, Victor

    2016-02-01

    Considering the beneficial aspects arising from the implementation of pulsed flows in flow analysis, and the relevance of in-line gas diffusion as an analyte separation/concentration step, the influence of flow pattern in flow systems with in-line gas diffusion was critically investigated. To this end, constant or pulsed flows delivered by syringe or solenoid pumps were exploited. For each flow pattern, two variants involving different interaction times of the donor and acceptor streams were studied. In the first, both the acceptor and donor streams were continuously flowing, whereas in the second, the acceptor was stopped during the gas diffusion step. Four different volatile species (ammonia, ethanol, carbon dioxide and hydrogen sulfide) were selected as models. For the flow patterns and variants studied, the efficiencies of mass transport in the gas diffusion process were compared, and sensitivity, repeatability, sampling frequency and recorded peak shape were evaluated. Analysis of the results revealed that sensitivity is strongly dependent on the implemented variant, and that flow pattern is an important feature in flow systems with in-line gas diffusion. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Analysis of turbulence spectra in gas-liquid two-phase flow

    International Nuclear Information System (INIS)

    Kataoka, Isao; Besnard, D.C.; Serizawa, Akimi.

    1993-01-01

    An analysis was made of the turbulence spectra in bubbly flow. A basic equation for the turbulence spectrum in bubbly flow was formulated considering the eddy disintegration induced by bubbles. Based on dimensional analysis and modeling of eddy disintegration by bubbles, constitutive equations for eddy disintegration were derived. Using these equations, the turbulence spectra in bubbly flow (showing a -8/3 power law) were successfully explained. (author)

  1. Is Investment-Cash flow Sensitivity a Good Measure of Financing Constraints? New Evidence from Indian Business Group Firms

    NARCIS (Netherlands)

    George, R.; Kabir, M.R.; Qian, J.

    2005-01-01

    Several studies use the investment-cash flow sensitivity as a measure of financing constraints, while some others disagree. The source of this disparity lies mostly in differences in opinion regarding the segregation of severely financially constrained firms from less constrained ones. We examine

  2. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as

  3. Understanding dynamics using sensitivity analysis: caveat and solution

    Science.gov (United States)

    2011-01-01

    Background Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology, in which the sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe dynamical behaviour of biological systems, the PSA has subsequently been used to elucidate important cellular processes that regulate this dynamics. However, in this paper, we show that the PSA coefficients are not suitable for inferring the mechanisms by which dynamical behaviour arises and in fact can even lead to incorrect conclusions. Results A careful interpretation of parametric perturbations used in the PSA is presented here to explain the issue of using this analysis in inferring dynamics. In short, the PSA coefficients quantify the integrated change in the system behaviour due to persistent parametric perturbations, and thus the dynamical information of when a parameter perturbation matters is lost. To get around this issue, we present a new sensitivity analysis based on impulse perturbations on system parameters, which is named impulse parametric sensitivity analysis (iPSA). The inability of PSA and the efficacy of iPSA in revealing mechanistic information of a dynamical system are illustrated using two examples involving switch activation. Conclusions The interpretation of the PSA coefficients of dynamical systems should take into account the persistent nature of parametric perturbations involved in the derivation of this analysis. The application of PSA to identify the controlling mechanism of dynamical behaviour can be misleading. By using impulse perturbations, introduced at different times, the iPSA provides the necessary information to understand how dynamics is achieved, i.e. which parameters are essential and when they become important. PMID:21406095
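
    The contrast between PSA and iPSA can be sketched with finite differences on a toy activation model: the classical coefficient perturbs a parameter persistently from t = 0, while the impulse-style coefficient perturbs it only in a short window around a chosen time. The ODE, window width and all numbers below are placeholders, not the paper's switch-activation examples.

```python
# Finite-difference sketch of persistent (PSA-like) vs impulse (iPSA-like)
# parameter perturbations on a toy activation ODE.
import numpy as np
from scipy.integrate import solve_ivp

def simulate(k_on, k_off, pulse=None, t_end=10.0):
    """x' = k_on*(1-x) - k_off*x; pulse=(t0, dt, dk) bumps k_on only in [t0, t0+dt]."""
    def rhs(t, x):
        k = k_on + (pulse[2] if pulse and pulse[0] <= t <= pulse[0] + pulse[1] else 0.0)
        return [k * (1 - x[0]) - k_off * x[0]]
    sol = solve_ivp(rhs, (0, t_end), [0.0], max_step=0.01)
    return sol.y[0, -1]

k_on, k_off, dk = 1.0, 0.5, 1e-3
base = simulate(k_on, k_off)

psa = (simulate(k_on + dk, k_off) - base) / dk                  # persistent perturbation
ipsa_early = (simulate(k_on, k_off, pulse=(0.5, 0.1, dk)) - base) / dk
ipsa_late = (simulate(k_on, k_off, pulse=(8.0, 0.1, dk)) - base) / dk

print(f"PSA d x(T)/d k_on        : {psa:.4f}")
print(f"impulse at t = 0.5       : {ipsa_early:.4f}")
print(f"impulse at t = 8.0       : {ipsa_late:.4f}")
```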

  4. Calibration of regional palaeohydrogeology and sensitivity analysis using hydrochemistry data in site investigations

    International Nuclear Information System (INIS)

    Hunter, F.M.I.; Hartley, L.J.; Hoch, A.; Jackson, C.P.; McCarthy, R.; Marsic, N.; Gylling, B.

    2008-01-01

    A transient coupled regional model of groundwater flow and solute transport has been developed, which allows the use of hydrochemical data to calibrate the model input parameters. The methodology has been illustrated using examples from the Simpevarp area in south-eastern Sweden which is being considered for geological disposal of spent nuclear fuel. The 3-dimensional model includes descriptions of spatial heterogeneity, density driven flow, rock matrix diffusion and transport and mixing of different water types, and has been simulated between 8000 BC and 2000 AD. Present-day analyses of major elemental ions and stable isotopes have been used to calibrate the model, which has then been cross checked against measured hydraulic conductivities, and against the hydrochemical interpretation of reference water mixing fractions. The key hydrogeological model sensitivities have been identified using the calibrated model and are found to include high sensitivity to the top surface flow boundary condition, the influence of variations in fracture transmissivity in different orientations (anisotropy), spatial heterogeneity in the deterministic regional deformation zones and the spacing between water bearing fractures (in terms of its effect on matrix diffusion)

  5. Automated measurement and classification of pulmonary blood-flow velocity patterns using phase-contrast MRI and correlation analysis.

    Science.gov (United States)

    van Amerom, Joshua F P; Kellenberger, Christian J; Yoo, Shi-Joon; Macgowan, Christopher K

    2009-01-01

    An automated method was evaluated to detect blood flow in small pulmonary arteries and classify each as artery or vein, based on a temporal correlation analysis of their blood-flow velocity patterns. The method was evaluated using velocity-sensitive phase-contrast magnetic resonance data collected in vitro with a pulsatile flow phantom and in vivo in 11 human volunteers. The accuracy of the method was validated in vitro, which showed relative velocity errors of 12% at low spatial resolution (four voxels per diameter), reduced to 5% at increased spatial resolution (16 voxels per diameter). The performance of the method was evaluated in vivo according to its reproducibility and agreement with manual velocity measurements by an experienced radiologist. In all volunteers, the correlation analysis was able to detect and segment peripheral pulmonary vessels and distinguish arterial from venous velocity patterns. The intrasubject variability of repeated measurements was approximately 10% of peak velocity, or 2.8 cm/s root-mean-variance, demonstrating the high reproducibility of the method. Excellent agreement was obtained between the correlation analysis and radiologist measurements of pulmonary velocities, with a correlation of R² = 0.98 (P < .001) and a slope of 0.99 ± 0.01.
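
    The temporal-correlation step can be sketched as correlating each candidate voxel's velocity waveform against a reference arterial waveform and thresholding the correlation coefficient. The waveforms and the threshold below are invented for illustration; the segmentation of vessels from the phase-contrast images is omitted.

```python
# Classify voxels as artery or vein by correlation with a reference arterial
# velocity waveform (synthetic data; threshold is an assumption).
import numpy as np

t = np.linspace(0, 1, 20, endpoint=False)             # one cardiac cycle, 20 frames
reference_artery = np.exp(-((t - 0.15) / 0.08) ** 2)  # sharp systolic peak
venous_like = 0.4 + 0.2 * np.sin(2 * np.pi * (t - 0.4))

rng = np.random.default_rng(3)
voxels = {
    "voxel_1": reference_artery * 0.8 + 0.05 * rng.standard_normal(t.size),
    "voxel_2": venous_like + 0.05 * rng.standard_normal(t.size),
}

for name, waveform in voxels.items():
    r = np.corrcoef(waveform, reference_artery)[0, 1]
    label = "artery" if r > 0.7 else "vein"
    print(f"{name}: r = {r:+.2f} -> classified as {label}")
```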

  6. Sensitivity Analysis of the Agricultural Policy/Environmental eXtender (APEX) for Phosphorus Loads in Tile-Drained Landscapes.

    Science.gov (United States)

    Ford, W; King, K; Williams, M; Williams, J; Fausey, N

    2015-07-01

    Numerical modeling is an economical and feasible approach for quantifying the effects of best management practices on dissolved reactive phosphorus (DRP) loadings from agricultural fields. However, tools that simulate both surface and subsurface DRP pathways are limited and have not been robustly evaluated in tile-drained landscapes. The objectives of this study were to test the ability of the Agricultural Policy/Environmental eXtender (APEX), a widely used field-scale model, to simulate surface and tile P loadings over management, hydrologic, biologic, tile, and soil gradients and to better understand the behavior of P delivery at the edge-of-field in tile-drained midwestern landscapes. To do this, a global, variance-based sensitivity analysis was performed, and model outputs were compared with measured P loads obtained from 14 surface and subsurface edge-of-field sites across central and northwestern Ohio. Results of the sensitivity analysis showed that response variables for DRP were highly sensitive to coupled interactions between presumed important parameters, suggesting nonlinearity of DRP delivery at the edge-of-field. Comparison of model results to edge-of-field data showcased the ability of APEX to simulate surface and subsurface runoff and the associated DRP loading at monthly to annual timescales; however, some high DRP concentrations and fluxes were not reflected in the model, suggesting the presence of preferential flow. Results from this study provide new insights into baseline tile DRP loadings that exceed thresholds for algal proliferation. Further, negative feedbacks between surface and subsurface DRP delivery suggest caution is needed when implementing DRP-based best management practices designed for a specific flow pathway. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  7. An adaptive Mantel-Haenszel test for sensitivity analysis in observational studies.

    Science.gov (United States)

    Rosenbaum, Paul R; Small, Dylan S

    2017-06-01

    In a sensitivity analysis in an observational study with a binary outcome, is it better to use all of the data or to focus on subgroups that are expected to experience the largest treatment effects? The answer depends on features of the data that may be difficult to anticipate, a trade-off between unknown effect-sizes and known sample sizes. We propose a sensitivity analysis for an adaptive test similar to the Mantel-Haenszel test. The adaptive test performs two highly correlated analyses, one focused analysis using a subgroup, one combined analysis using all of the data, correcting for multiple testing using the joint distribution of the two test statistics. Because the two component tests are highly correlated, this correction for multiple testing is small compared with, for instance, the Bonferroni inequality. The test has the maximum design sensitivity of two component tests. A simulation evaluates the power of a sensitivity analysis using the adaptive test. Two examples are presented. An R package, sensitivity2x2xk, implements the procedure. © 2016, The International Biometric Society.

  8. Exergoeconomic multi objective optimization and sensitivity analysis of a regenerative Brayton cycle

    International Nuclear Information System (INIS)

    Naserian, Mohammad Mahdi; Farahat, Said; Sarhaddi, Faramarz

    2016-01-01

    Highlights: • Finite-time exergoeconomic multi-objective optimization of a Brayton cycle. • Comparing the exergoeconomic and the ecological function optimization results. • Inserting the cost-of-fluid-streams concept into finite-time thermodynamics. • Exergoeconomic sensitivity analysis of a regenerative Brayton cycle. • Suggesting the drawing and utilization of the cycle performance curve. - Abstract: In this study, the optimal performance of a regenerative Brayton cycle is sought through power maximization and then exergoeconomic optimization using finite-time thermodynamic concepts and finite-size components. Optimizations are performed using a genetic algorithm. In order to take the finite-time and finite-size concepts into account in the current problem, a dimensionless mass-flow parameter incorporating time variations is used. The decision variables at the optimum state (of the multi-objective exergoeconomic optimization) are compared to those at the maximum power state. One can see that the multi-objective exergoeconomic optimization results in better performance than that obtained at the maximum power state. The results demonstrate that system performance at the optimum point of the multi-objective optimization yields 71% of the maximum power, but with exergy destruction of only 24% of the amount produced at the maximum power state and a 67% lower total cost rate than that of the maximum power state. In order to assess the impact of variations of the decision variables on the objective functions, a sensitivity analysis is conducted. Finally, the drawing of the cycle performance curve according to the exergoeconomic multi-objective optimization results, and its utilization, are suggested.

  9. High-sensitivity direct analysis of aflatoxins in peanuts and cereal matrices by ultra-performance liquid chromatography with fluorescence detection involving a large volume flow cell.

    Science.gov (United States)

    Oulkar, Dasharath; Goon, Arnab; Dhanshetty, Manisha; Khan, Zareen; Satav, Sagar; Banerjee, Kaushik

    2018-04-03

    This paper reports a sensitive and cost-effective method of analysis for aflatoxins B1, B2, G1 and G2. The sample preparation method was primarily optimised for peanuts, followed by its validation in a range of peanut-processed products and cereal (rice, corn, millet) matrices. Peanut slurry [12.5 g peanut + 12.5 mL water] was extracted with methanol:water (8:2, 100 mL), cleaned through an immunoaffinity column and thereafter measured directly by ultra-performance liquid chromatography with fluorescence detection (UPLC-FLD), within a chromatographic runtime of 5 minutes. The use of a large-volume flow cell in the FLD removed the need for any post-column derivatisation and provided the lowest ever reported limits of quantification of 0.025 μg/kg for B1 and G1 and 0.01 μg/kg for B2 and G2. The single-laboratory validation of the method provided acceptable selectivity, linearity, recovery and precision for reliable quantification in all the test matrices, and demonstrated compliance with the EC 401/2006 guidelines for analytical quality control of aflatoxins in foodstuffs.

  10. ASSESSMENT OF PLASTIC FLOWS AND STOCKS IN SERBIA USING MATERIAL FLOW ANALYSIS

    Directory of Open Access Journals (Sweden)

    Goran Vujić

    2010-01-01

    Full Text Available Material flow analysis (MFA) was used to assess the amounts of plastic material flows and stocks that are annually produced, consumed, imported, exported, collected, recycled, and disposed of in landfills in Serbia. The analysis revealed that approximately 269,000 tons of plastic materials are directly disposed of in uncontrolled landfills in Serbia without any pretreatment, and that significant amounts of these materials have already accumulated in the landfills. The substantial amounts of landfilled plastics represent not only a loss of valuable resources, but also pose a serious threat to the environment and human health, and if the trend of direct plastic landfilling continues, Serbia will face grave consequences.

  11. Sensitivity analysis for improving nanomechanical photonic transducers biosensors

    International Nuclear Information System (INIS)

    Fariña, D; Álvarez, M; Márquez, S; Lechuga, L M; Dominguez, C

    2015-01-01

    The achievement of high sensitivity and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report an analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependence on device design and dimensions. The selected layout (geometry), based on two butt-coupled misaligned waveguides, displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness range between 150 nm and 400 nm would increase both the microcantilever bending during the biorecognition process and the optical sensitivity, up to 4.8 × 10⁻² nm⁻¹, an order of magnitude higher than for other similar opto-mechanical devices. Moreover, the analysis shows that single-mode behaviour of the propagating radiation is required to avoid modal interference that could lead to misinterpretation of the readout signal. (paper)

  12. Sensitivity Analysis of Weather Variables on Offsite Consequence Analysis Tools in South Korea and the United States

    Directory of Open Access Journals (Sweden)

    Min-Uk Kim

    2018-05-01

    Full Text Available We studied sensitive weather variables for consequence analysis, in the case of chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used the OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH3), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO3). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using the Statistical Package for the Social Sciences (SPSS) program for dummy regression analysis. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to cause any differences in impact distance due to input errors of weather variables, with the most sensitive one being atmospheric stability.

  13. Sensitivity Analysis of Weather Variables on Offsite Consequence Analysis Tools in South Korea and the United States.

    Science.gov (United States)

    Kim, Min-Uk; Moon, Kyong Whan; Sohn, Jong-Ryeul; Byeon, Sang-Hoon

    2018-05-18

    We studied sensitive weather variables for consequence analysis, in the case of chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH₃), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO₃). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using the Statistical Package for the Social Sciences (SPSS) program for dummy regression analysis. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to cause any differences in impact distance due to input errors of weather variables, with the most sensitive one being atmospheric stability.

  14. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
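
    The strike-probability point estimate described above lends itself to a compact Monte Carlo illustration. The sketch below is not the NASA debris model: the debris catalog, kinematics and keep-out radius are invented, drag-free placeholders, used only to show how the point estimate varies with the abort delay time.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def strike_probability(n_frag, v_mean, delay, n_mc=20000, keepout=15.0):
        """Toy Monte Carlo estimate of the probability that at least one debris
        fragment passes within `keepout` metres of the crew module."""
        hits = 0
        for _ in range(n_mc):
            # Imparted speed and direction for each fragment in the catalog.
            speed = rng.normal(v_mean, 0.2 * v_mean, n_frag)
            theta = rng.uniform(0.0, 2.0 * np.pi, n_frag)
            # Fragment positions (2-D, metres) `delay` seconds after breakup.
            x = speed * np.cos(theta) * delay
            y = speed * np.sin(theta) * delay
            # Crew module assumed to coast straight ahead at 60 m/s after abort.
            miss = np.hypot(x - 60.0 * delay, y)
            hits += np.any(miss < keepout)
        return hits / n_mc

    # One-at-a-time sensitivity of the point estimate to the abort delay time.
    for delay in (1.0, 2.0, 4.0):
        print(delay, strike_probability(n_frag=50, v_mean=40.0, delay=delay))
    ```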

  15. Precessing rotating flows with additional shear: stability analysis.

    Science.gov (United States)

    Salhi, A; Cambon, C

    2009-03-01

    We consider unbounded precessing rotating flows in which vertical or horizontal shear is induced by the interaction between the solid-body rotation (with angular velocity Ω₀) and the additional "precessing" Coriolis force (with angular velocity -εΩ₀), normal to it. A "weak" shear flow, with rate 2ε of the same order as the Poincaré "small" ratio ε, is needed to balance the gyroscopic torque, so that the whole flow satisfies Euler's equations in the precessing frame (the so-called admissibility conditions). The base flow case with vertical shear (its cross-gradient direction is aligned with the main angular velocity) corresponds to Mahalov's [Phys. Fluids A 5, 891 (1993)] precessing infinite cylinder base flow (ignoring boundary conditions), while the base flow case with horizontal shear (its cross-gradient direction is normal to both main and precessing angular velocities) corresponds to the unbounded precessing rotating shear flow considered by Kerswell [Geophys. Astrophys. Fluid Dyn. 72, 107 (1993)]. We show that both these base flows satisfy the admissibility conditions and can support disturbances in terms of advected Fourier modes. Because the admissibility conditions cannot select one case with respect to the other, a more physical derivation is sought: both flows are deduced from Poincaré's [Bull. Astron. 27, 321 (1910)] basic state of a precessing spheroidal container, in the limit of small ε. A rapid distortion theory (RDT) type of stability analysis is then performed for the previously mentioned disturbances, for both base flows. The stability analysis of the Kerswell base flow, using Floquet's theory, is recovered, and its counterpart for the Mahalov base flow is presented. Typical growth rates are found to be the same for both flows at very small ε, but significant differences are obtained regarding growth rates and widths of instability bands if larger ε values, up to 0.2, are considered. Finally

  16. Research on the flow field of undershot cross-flow water turbines using experiments and numerical analysis

    International Nuclear Information System (INIS)

    Nishi, Y; Inagaki, T; Li, Y; Omiya, R; Hatano, K

    2014-01-01

    The purpose of this research is to develop a water turbine appropriate for low-head open channels in order to effectively utilize the unused hydropower energy of rivers and agricultural waterways. The application of the cross-flow runner to open channels as an undershot water turbine has come under consideration and, to this end, a significant simplification was attained by removing the casings. However, the flow field of undershot cross-flow water turbines possesses free surfaces. This means that, as the rotational speed varies, the water depth around the runner changes and the flow field itself is significantly altered. It is therefore necessary to clearly understand flow fields with free surfaces in order to improve the performance of this turbine. In this research, the performance of this turbine and the flow field were studied through experiments and numerical analysis. The experimental results on the performance of this turbine and the flow field were consistent with the numerical analysis. In addition, the inlet and outlet regions at the first and second stages of this water turbine were clarified.

  17. First status report on regional groundwater flow modeling for the Palo Duro Basin, Texas

    International Nuclear Information System (INIS)

    Andrews, R.W.

    1984-12-01

    Regional groundwater flow within the principal hydrogeological units of the Palo Duro Basin is evaluated by developing a conceptual model of the flow regime in the shallow aquifers and the deep-basin brine aquifers and testing these models using a three-dimensional, finite-difference flow code. Semiquantitative sensitivity analysis (a limited parametric study) is conducted to define the system response to changes in hydrologic properties or boundary conditions. Adjoint sensitivity analysis is applied to the conceptualized flow regime in the Wolfcamp carbonate aquifer. All steps leading to the final results and conclusions are incorporated in this report. The available data utilized in this study are summarized. The specific conceptual models, defining the areal and vertical averaging of lithologic units, aquifer properties, fluid properties, and hydrologic boundary conditions, are described in detail. The results are delineated by the simulated potentiometric surfaces and tables summarizing areal and vertical boundary fluxes, Darcy velocities at specific points, and groundwater travel paths. Results from the adjoint sensitivity analysis include importance functions and sensitivity coefficients, using heads or the average Darcy velocities as the performance measures. The reported work is the first stage of an ongoing evaluation of two areas within the Palo Duro Basin as potential repositories for high-level radioactive wastes. The results and conclusions should thus be considered preliminary and subject to modification with the collection of additional data. However, this report does provide a useful basis for describing the sensitivity and, to a lesser extent, the uncertainty of the present conceptualization of groundwater flow within the Palo Duro Basin.

  18. Comparative analysis of minimal residual disease detection using four-color flow cytometry, consensus IgH-PCR, and quantitative IgH PCR in CLL after allogeneic and autologous stem cell transplantation.

    Science.gov (United States)

    Böttcher, S; Ritgen, M; Pott, C; Brüggemann, M; Raff, T; Stilgenbauer, S; Döhner, H; Dreger, P; Kneba, M

    2004-10-01

    The clinically most suitable method for minimal residual disease (MRD) detection in chronic lymphocytic leukemia is still controversial. We prospectively compared MRD assessment in 158 blood samples of 74 patients with CLL after stem cell transplantation (SCT) using four-color flow cytometry (MRD flow) in parallel with consensus IgH-PCR and ASO IgH real-time PCR (ASO IgH RQ-PCR). In 25 out of 106 samples (23.6%) with a polyclonal consensus IgH-PCR pattern, MRD flow still detected CLL cells, proving the higher sensitivity of flow cytometry over PCR-genescanning with consensus IgH primers. Of 92 samples analyzed in parallel by MRD flow and by ASO IgH RQ-PCR, 14 (15.2%) were negative by our flow cytometric assay but positive by PCR, thus demonstrating the superior sensitivity of RQ-PCR with ASO primers. Quantitative MRD levels measured by both methods correlated well (r=0.93). MRD detection by flow and ASO IgH RQ-PCR were equally suitable to monitor MRD kinetics after allogeneic SCT, but the PCR method detected impending relapses after autologous SCT earlier. An analysis of factors that influence the sensitivity and specificity of flow cytometry for MRD detection allowed us to devise further improvements to this technique.

  19. Sensitivity Analysis of Oxide Scale Influence on General Carbon Steels during Hot Forging

    Directory of Open Access Journals (Sweden)

    Bernd-Arno Behrens

    2018-02-01

    Full Text Available Increasing product requirements have made numerical simulation a vital tool for time- and cost-efficient process design. In order to accurately model hot forging processes with finite element-based numerical methods, reliable models are required, which take the material behaviour, surface phenomena of die and workpiece, and machine kinematics into account. In hot forging processes, the surface properties are strongly affected by the growth of oxide scale, which influences the material flow, friction, and product quality of the finished component. The influence of different carbon contents on material behaviour is investigated by considering three different steel grades (C15, C45, and C60). For a general description of the material behaviour, an empirical approach is used to implement mathematical functions expressing the relationship between flow stress and dominant influence variables such as alloying elements, initial microstructure, and reheating mode. The deformation behaviour of oxide scale is modelled separately for each component with parameterized flow curves. The main focus of this work lies in the consideration of different materials as well as the calculation and assignment of their material properties in dependence on current process parameters through the application of subroutines. The validated model is used to investigate the influence of various oxide scale parameters, such as scale thickness and composition, on the hot forging process. To this end, selected parameters have been varied within a numerical sensitivity analysis. The results show a strong influence of oxide scale on the friction behaviour as well as on the material flow during hot forging.

  20. Comparison of increased venous contrast in ischemic stroke using phase-sensitive MR imaging with perfusion changes on flow-sensitive alternating inversion recovery at 3 Tesla

    International Nuclear Information System (INIS)

    Yamashita, Eijiro; Kanasaki, Yoshiko; Fujii, Shinya; Ogawa, Toshihide; Tanaka, Takuro; Hirata, Yoshiharu

    2011-01-01

    Background: Increased venous contrast in ischemic stroke using susceptibility-weighted imaging has been widely reported, although few reports have compared areas of increased venous contrast with areas of perfusion change. Purpose: To compare venous contrast on phase-sensitive MR images (PSI) with perfusion change on flow-sensitive alternating inversion recovery (FAIR) images, and to discuss the clinical use of PSI in ischemic stroke. Material and Methods: Thirty patients with clinically suspected acute infarction of the middle cerebral artery (MCA) territory within 7 days of onset were evaluated. Phase-sensitive imaging (PSI), flow-sensitive alternating inversion recovery (FAIR), diffusion-weighted imaging (DWI) and magnetic resonance angiography (MRA) were obtained using a 3 Tesla scanner. Two neuroradiologists independently reviewed the MR images, as well as the PSI, DWI, and FAIR images. They were blinded to the clinical data and to each other's findings. The abnormal area of each image was ultimately identified after both neuroradiologists reached consensus. We analyzed areas of increased venous contrast on PSI, perfusion changes on FAIR images, and signal changes on DWI for each case. Results: Venous contrast was increased on PSI and hypoperfusion was evident on FAIR images in 22 of the 30 patients (73%). The distribution of the increased venous contrast was the same as that of the hypoperfused areas on FAIR images in 16 of these 22. The extent of these lesions was larger than that of lesions visualized on DWI in 18 of the 22 patients. Hypointense signals reflecting hemorrhage, no increased venous contrast on PSI, and hyperperfusion on FAIR images were found in six of the remaining eight patients (20%). Findings on PSI were normal and hypoperfusion areas were absent on FAIR images in two patients (7%). Conclusion: Increased venous contrast on PSI might serve as an index of misery perfusion and provide useful information

  1. Two-phase flow characteristics analysis code: MINCS

    International Nuclear Information System (INIS)

    Watanabe, Tadashi; Hirano, Masashi; Akimoto, Masayuki; Tanabe, Fumiya; Kohsaka, Atsuo.

    1992-03-01

    Two-phase flow characteristics analysis code: MINCS (Modularized and INtegrated Code System) has been developed to provide a computational tool for analyzing two-phase flow phenomena in one-dimensional ducts. In MINCS, nine types of two-phase flow models-from a basic two-fluid nonequilibrium (2V2T) model to a simple homogeneous equilibrium (1V1T) model-can be used under the same numerical solution method. The numerical technique is based on the implicit finite difference method to enhance the numerical stability. The code structure is highly modularized, so that new constitutive relations and correlations can be easily implemented into the code and hence evaluated. A flow pattern can be fixed regardless of flow conditions, and state equations or steam tables can be selected. It is, therefore, easy to calculate physical or numerical benchmark problems. (author)

  2. Thermohydrodynamic analysis of cryogenic liquid turbulent flow fluid film bearings

    Science.gov (United States)

    Andres, Luis San

    1993-01-01

    A thermohydrodynamic analysis is presented and a computer code developed for prediction of the static and dynamic force response of hydrostatic journal bearings (HJB's), annular seals or damper bearing seals, and fixed arc pad bearings for cryogenic liquid applications. The study includes the most important flow characteristics found in cryogenic fluid film bearings such as flow turbulence, fluid inertia, liquid compressibility and thermal effects. The analysis and computational model devised allow the determination of the flow field in cryogenic fluid film bearings along with the dynamic force coefficients for rotor-bearing stability analysis.

  3. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    Full Text Available This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the 4 sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
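
    Of the four techniques compared above, Sobol's method is the most general but also the least familiar to implement from scratch. As a hedged illustration (a generic pick-freeze Monte Carlo estimator on the standard Ishigami test function, not the SAC-SMA/SNOW-17 application of the paper), first-order and total indices can be obtained as follows.

    ```python
    import numpy as np

    def ishigami(x, a=7.0, b=0.1):
        # Standard sensitivity-analysis test function, inputs uniform on [-pi, pi].
        return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

    def sobol_indices(model, d, n=20000, seed=None):
        """Pick-freeze Monte Carlo estimates of first-order and total-effect
        Sobol indices (Saltelli/Jansen estimators), inputs uniform on [-pi, pi]."""
        rng = np.random.default_rng(seed)
        A = rng.uniform(-np.pi, np.pi, (n, d))
        B = rng.uniform(-np.pi, np.pi, (n, d))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))
        S1, ST = np.empty(d), np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]            # replace column i of A by column i of B
            fABi = model(ABi)
            S1[i] = np.mean(fB * (fABi - fA)) / var        # first-order index
            ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total-effect index
        return S1, ST

    S1, ST = sobol_indices(ishigami, d=3, seed=2)
    print("first-order:", np.round(S1, 2))   # analytic values are about 0.31, 0.44, 0.00
    print("total      :", np.round(ST, 2))   # analytic values are about 0.56, 0.44, 0.24
    ```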

  4. Combination of material flow analysis and substance flow analysis: a powerful approach for decision support in waste management.

    Science.gov (United States)

    Stanisavljevic, Nemanja; Brunner, Paul H

    2014-08-01

    The novelty of this paper is the demonstration of the effectiveness of combining material flow analysis (MFA) with substance flow analysis (SFA) for decision making in waste management. Both MFA and SFA are based on the mass balance principle. While MFA alone has been applied often for analysing material flows quantitatively and hence to determine the capacities of waste treatment processes, SFA is more demanding but instrumental in evaluating the performance of a waste management system regarding the goals "resource conservation" and "environmental protection". SFA focuses on the transformations of wastes during waste treatment: valuable as well as hazardous substances and their transformations are followed through the entire waste management system. A substance-based approach is required because the economic and environmental properties of the products of waste management - recycling goods, residues and emissions - are primarily determined by the content of specific precious or harmful substances. To support the case that MFA and SFA should be combined, a case study of waste management scenarios is presented. For three scenarios, total material flows are quantified by MFA, and the mass flows of six indicator substances (C, N, Cl, Cd, Pb, Hg) are determined by SFA. The combined results are compared to the status quo in view of fulfilling the goals of waste management. They clearly point out specific differences between the chosen scenarios, demonstrating potentials for improvement and the value of the combination of MFA/SFA for decision making in waste management. © The Author(s) 2014.
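
    The MFA/SFA combination hinges on a simple idea: goods flows (tonnes of waste) scale the substance flows (kilograms of an indicator element) through substance concentrations and process transfer coefficients. The following sketch illustrates that bookkeeping for one hypothetical treatment process and one indicator substance; every number is an illustrative placeholder, not data from the study.

    ```python
    # Minimal MFA/SFA bookkeeping for one incineration-type process and one
    # indicator substance (here cadmium).  All figures are hypothetical.

    waste_input_t = 100_000.0          # goods flow into the process, t/yr (MFA level)
    cd_concentration = 10e-6           # kg Cd per kg of waste (SFA level)
    cd_input = waste_input_t * 1000.0 * cd_concentration   # kg Cd/yr entering

    # Transfer coefficients: fraction of the substance leaving in each output good.
    transfer = {"bottom_ash": 0.88, "fly_ash": 0.11, "flue_gas": 0.01}
    assert abs(sum(transfer.values()) - 1.0) < 1e-9        # mass balance must close

    for output, tc in transfer.items():
        print(f"{output:10s}: {cd_input * tc:8.1f} kg Cd/yr")
    ```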

  5. Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)

    1993-12-01

    This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10⁻² to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), and gas infrared (NDIR) and FTIR methods are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional pre-mixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and to study the effects of diffusive transport coupling on reaction behavior in flames. Modeling uses well defined and validated mechanisms for the CO/H₂/oxidant systems.

  6. Comparison of global sensitivity analysis methods – Application to fuel behavior modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, Timo, E-mail: timo.ikonen@vtt.fi

    2016-02-15

    Highlights: • Several global sensitivity analysis methods are compared. • The methods’ applicability to nuclear fuel performance simulations is assessed. • The implications of large input uncertainties and complex models are discussed. • Alternative strategies to perform sensitivity analyses are proposed. - Abstract: Fuel performance codes have two characteristics that make their sensitivity analysis challenging: large uncertainties in input parameters and complex, non-linear and non-additive structure of the models. The complex structure of the code leads to interactions between inputs that show as cross terms in the sensitivity analysis. Due to the large uncertainties of the inputs these interactions are significant, sometimes even dominating the sensitivity analysis. For the same reason, standard linearization techniques do not usually perform well in the analysis of fuel performance codes. More sophisticated methods are typically needed in the analysis. To this end, we compare the performance of several sensitivity analysis methods in the analysis of a steady state FRAPCON simulation. The comparison of importance rankings obtained with the various methods shows that even the simplest methods can be sufficient for the analysis of fuel maximum temperature. However, the analysis of the gap conductance requires more powerful methods that take into account the interactions of the inputs. In some cases, moment-independent methods are needed. We also investigate the computational cost of the various methods and present recommendations as to which methods to use in the analysis.
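
    The point about linearization techniques can be made concrete with a small experiment: standardized regression coefficients (the simplest linearization-based measure) applied to a deliberately non-additive toy function. The sketch below is generic and has nothing to do with FRAPCON; it only shows how an interaction-dominated model defeats a linear sensitivity measure.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 5000
    x1, x2, x3 = rng.normal(size=(3, n))

    # Toy non-additive model: the x1*x2 interaction carries most of the variance,
    # so a purely linear (SRC) ranking will understate x1 and x2.
    y = 0.5 * x3 + 2.0 * x1 * x2 + rng.normal(0, 0.1, n)

    X = np.column_stack([np.ones(n), x1, x2, x3])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    src = beta[1:] * X[:, 1:].std(axis=0) / y.std()   # standardised regression coefficients

    r2 = 1.0 - np.var(y - X @ beta) / np.var(y)
    print("SRC^2    :", np.round(src**2, 3))   # near zero for x1 and x2
    print("model R^2:", round(r2, 3))          # low R^2 flags that SRCs are unreliable here
    ```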

  7. Direct comparison of flow-FISH and qPCR as diagnostic tests for telomere length measurement in humans.

    Directory of Open Access Journals (Sweden)

    Fernanda Gutierrez-Rodrigues

    Full Text Available Telomere length measurement is an essential test for the diagnosis of telomeropathies, which are caused by excessive telomere erosion. Commonly used methods are terminal restriction fragment (TRF) analysis by Southern blot, fluorescence in situ hybridization coupled with flow cytometry (flow-FISH), and quantitative PCR (qPCR). Although these methods have been used in the clinic, they have not been comprehensively compared. Here, we directly compared the performance of flow-FISH and qPCR to measure leukocytes' telomere length of healthy individuals and patients evaluated for telomeropathies, using TRF as standard. TRF and flow-FISH showed good agreement and correlation in the analysis of healthy subjects (R² = 0.60; p < 0.0001) and patients (R² = 0.51; p < 0.0001). In contrast, the comparison between TRF and qPCR yielded modest correlation for the analysis of samples of healthy individuals (R² = 0.35; p < 0.0001) and low correlation for patients (R² = 0.20; p = 0.001); Bland-Altman analysis showed poor agreement between the two methods for both patients and controls. Quantitative PCR and flow-FISH modestly correlated in the analysis of healthy individuals (R² = 0.33; p < 0.0001) and did not correlate in the comparison of patients' samples (R² = 0.1, p = 0.08). Intra-assay coefficient of variation (CV) was similar for flow-FISH (10.8 ± 7.1%) and qPCR (9.5 ± 7.4%; p = 0.35), but the inter-assay CV was lower for flow-FISH (9.6 ± 7.6% vs. 16 ± 19.5%; p = 0.02). Bland-Altman analysis indicated that flow-FISH was more precise and reproducible than qPCR. Flow-FISH and qPCR were sensitive (both 100%) and specific (93% and 89%, respectively) to distinguish very short telomeres. However, qPCR sensitivity (40%) and specificity (63%) to detect telomeres below the tenth percentile were lower compared to flow-FISH (80% sensitivity and 85% specificity). In the clinical setting, flow-FISH was more accurate, reproducible, sensitive, and specific in the measurement of human
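
    The comparison above rests on correlation, coefficients of variation and Bland-Altman agreement statistics. A generic NumPy sketch of those calculations on synthetic paired measurements (not the study's telomere data) is given below.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    true_len = rng.uniform(4.0, 12.0, 80)                  # hypothetical telomere lengths, kb
    method_a = true_len + rng.normal(0.0, 0.3, 80)         # precise method (flow-FISH-like)
    method_b = 0.9 * true_len + rng.normal(0.0, 1.0, 80)   # noisier, biased method (qPCR-like)

    # Pearson correlation between the two methods.
    r = np.corrcoef(method_a, method_b)[0, 1]

    # Bland-Altman statistics: mean difference (bias) and 95% limits of agreement.
    diff = method_a - method_b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)

    print(f"r = {r:.2f}, bias = {bias:.2f} kb, limits of agreement = +/-{loa:.2f} kb")
    ```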

  8. The cash-flow analysis of the firm

    OpenAIRE

    Mariana Man

    2001-01-01

    The analysis of the economic and financial indicators of the firm covers the profit and loss account analysis and the balance sheet analysis. The cash flow from operating activities represents the amount of cash obtained by a firm from selling goods and services after deducting the costs of raw materials, materials, and processing operations.

  9. Development of the GO-FLOW reliability analysis methodology for nuclear reactor system

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi; Kobayashi, Michiyuki

    1994-01-01

    Probabilistic Safety Assessment (PSA) is important in the safety analysis of technological systems and processes, such as nuclear plants, chemical and petroleum facilities, and aerospace systems. Event trees and fault trees are the basic analytical tools that have been most frequently used for PSAs. Several system analysis methods can be used in addition to, or in support of, the event- and fault-tree analysis. The need for more advanced methods of system reliability analysis has grown with the increased complexity of engineered systems. The Ship Research Institute has been developing a new reliability analysis methodology, GO-FLOW, which is a success-oriented system analysis technique and is capable of evaluating a large system with complex operational sequences. The research has been supported by the special research fund for Nuclear Technology, Science and Technology Agency, from 1989 to 1994. This paper describes the concept of Probabilistic Safety Assessment (PSA), an overview of various system analysis techniques, an overview of the GO-FLOW methodology, the GO-FLOW analysis support system, the procedure for treating a phased mission problem, a function for common cause failure analysis, a function for uncertainty analysis, a function for common cause failure analysis with uncertainty, and a system for printing out the results of GO-FLOW analysis in the form of figures or tables. These functions are explained by analyzing sample systems, such as a PWR AFWS and a BWR ECCS. In the appendices, the structure of the GO-FLOW analysis programs and the meaning of the main variables defined in the GO-FLOW programs are described. The GO-FLOW methodology is a valuable and useful tool for system reliability analysis, and has a wide range of applications. With the development of the total system of the GO-FLOW, this methodology has become a powerful tool in a living PSA. (author) 54 refs

  10. Probabilistic sensitivity analysis of system availability using Gaussian processes

    International Nuclear Information System (INIS)

    Daneshkhah, Alireza; Bedford, Tim

    2013-01-01

    The availability of a system under a given failure/repair process is a function of time which can be determined through a set of integral equations and usually calculated numerically. We focus here on the issue of carrying out sensitivity analysis of availability to determine the influence of the input parameters. The main purpose is to study the sensitivity of the system availability with respect to the changes in the main parameters. In the simplest case that the failure repair process is (continuous time/discrete state) Markovian, explicit formulae are well known. Unfortunately, in more general cases availability is often a complicated function of the parameters without closed form solution. Thus, the computation of sensitivity measures would be time-consuming or even infeasible. In this paper, we show how Sobol and other related sensitivity measures can be cheaply computed to measure how changes in the model inputs (failure/repair times) influence the outputs (availability measure). We use a Bayesian framework, called the Bayesian analysis of computer code output (BACCO) which is based on using the Gaussian process as an emulator (i.e., an approximation) of complex models/functions. This approach allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than other methods. The emulator-based sensitivity measure is used to examine the influence of the failure and repair densities' parameters on the system availability. We discuss how to apply the methods practically in the reliability context, considering in particular the selection of parameters and prior distributions and how we can ensure these may be considered independent—one of the key assumptions of the Sobol approach. The method is illustrated on several examples, and we discuss the further implications of the technique for reliability and maintenance analysis
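
    The emulator idea can be sketched compactly: fit a Gaussian process to a small number of "expensive" model runs, then do all further sensitivity sampling on the cheap emulator. The example below uses scikit-learn's GaussianProcessRegressor and a plain pick-freeze Monte Carlo on the emulator; the BACCO framework proper derives such measures analytically from the Gaussian process, so treat this only as an illustration of the workflow with an invented three-input stand-in model.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(3)

    def expensive_model(x):
        # Stand-in for a costly availability calculation; 3 inputs on [0, 1].
        return np.exp(-2.0 * x[:, 0]) + 0.5 * np.sin(2.0 * np.pi * x[:, 1]) + 0.1 * x[:, 2]

    # Small design of "expensive" runs, then a Gaussian-process emulator fit.
    X_train = rng.uniform(0, 1, (60, 3))
    y_train = expensive_model(X_train)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
    gp.fit(X_train, y_train)

    # Pick-freeze Monte Carlo on the emulator only: no further expensive runs.
    n = 10000
    A, B = rng.uniform(0, 1, (n, 3)), rng.uniform(0, 1, (n, 3))
    fA, fB = gp.predict(A), gp.predict(B)
    var = np.var(np.concatenate([fA, fB]))
    for i in range(3):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S1 = np.mean(fB * (gp.predict(ABi) - fA)) / var
        print(f"input {i}: estimated first-order index ~ {S1:.2f}")
    ```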

  11. Using sparse polynomial chaos expansions for the global sensitivity analysis of groundwater lifetime expectancy in a multi-layered hydrogeological model

    International Nuclear Information System (INIS)

    Deman, G.; Konakli, K.; Sudret, B.; Kerrou, J.; Perrochet, P.; Benabderrahmane, H.

    2016-01-01

    The study makes use of polynomial chaos expansions to compute Sobol' indices within the frame of a global sensitivity analysis of hydro-dispersive parameters in a simplified vertical cross-section of a segment of the subsurface of the Paris Basin. Applying conservative ranges, the uncertainty in 78 input variables is propagated upon the mean lifetime expectancy of water molecules departing from a specific location within a highly confining layer situated in the middle of the model domain. Lifetime expectancy is a hydrogeological performance measure pertinent to safety analysis with respect to subsurface contaminants, such as radionuclides. The sensitivity analysis indicates that the variability in the mean lifetime expectancy can be sufficiently explained by the uncertainty in the petrofacies, i.e. the sets of porosity and hydraulic conductivity, of only a few layers of the model. The obtained results provide guidance regarding the uncertainty modeling in future investigations employing detailed numerical models of the subsurface of the Paris Basin. Moreover, the study demonstrates the high efficiency of sparse polynomial chaos expansions in computing Sobol' indices for high-dimensional models. - Highlights: • Global sensitivity analysis of a 2D 15-layer groundwater flow model is conducted. • A high-dimensional random input comprising 78 parameters is considered. • The variability in the mean lifetime expectancy for the central layer is examined. • Sparse polynomial chaos expansions are used to compute Sobol' sensitivity indices. • The petrofacies of a few layers can sufficiently explain the response variance.
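
    The key computational trick in the record above is that once a polynomial chaos expansion is available, Sobol' indices follow directly from the squared coefficients: no further model runs are needed. A deliberately tiny, dense (neither sparse nor 78-dimensional) two-input illustration with orthonormal Legendre polynomials is sketched below; the toy model is invented.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as L
    from itertools import product

    def legendre_1d(n, x):
        # Orthonormal Legendre polynomial of degree n for the uniform density on [-1, 1].
        c = np.zeros(n + 1)
        c[n] = 1.0
        return np.sqrt(2 * n + 1) * L.legval(x, c)

    def toy_model(x):
        # Smooth two-input test function; inputs uniform on [-1, 1].
        return x[:, 0] + 0.5 * x[:, 0] * x[:, 1] + 0.25 * x[:, 1] ** 2

    rng = np.random.default_rng(5)
    X = rng.uniform(-1, 1, (300, 2))
    y = toy_model(X)

    # Total-degree-3 basis of products of orthonormal Legendre polynomials.
    alphas = [a for a in product(range(4), repeat=2) if sum(a) <= 3]
    Phi = np.column_stack([legendre_1d(a[0], X[:, 0]) * legendre_1d(a[1], X[:, 1])
                           for a in alphas])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    # Sobol' indices follow from the squared PCE coefficients.
    var = sum(c**2 for a, c in zip(alphas, coef) if a != (0, 0))
    for i in range(2):
        num = sum(c**2 for a, c in zip(alphas, coef)
                  if a[i] > 0 and all(a[j] == 0 for j in range(2) if j != i))
        print(f"first-order S{i + 1} = {num / var:.3f}")
    ```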

  12. Air-segmented continuous-flow analysis for molybdenum in various geochemical samples

    International Nuclear Information System (INIS)

    Harita, Y.; Sugiyama, M.; Hori, T.

    2003-01-01

    An air-segmented continuous-flow method has been developed for the determination of molybdenum at ultra-trace levels using the catalytic effect of molybdate during the oxidation of L-ascorbic acid by hydrogen peroxide. Incorporation of an on-line ion exchange column improved the tolerance limit for various ions. The detection limits with and without the column were 64 pmol L⁻¹ and 17 pmol L⁻¹, and the reproducibilities at 10 nmol L⁻¹ were 2.1% and 0.2%, respectively. The proposed method was applied to the determination of molybdenum in seawater and lake water as well as in rock and sediment samples. To our knowledge, this method has the highest sensitivity reported in the literature, and it is also convenient for routine analysis of molybdenum in various natural samples. (author)

  13. Analysis of the cross flow in a radial inflow turbine scroll

    Science.gov (United States)

    Hamed, A.; Abdallah, S.; Tabakoff, W.

    1977-01-01

    Equations of motion were derived, and a computational procedure is presented, for determining the nonviscous flow characteristics in the cross-sectional planes of a curved channel due to continuous mass discharge or mass addition. An analysis was applied to the radial inflow turbine scroll to study the effects of scroll geometry and the through flow velocity profile on the flow behavior. The computed flow velocity component in the scroll cross-sectional plane, together with the through flow velocity profile which can be determined in a separate analysis, provide a complete description of the three dimensional flow in the scroll.

  14. GO-FLOW methodology. Basic concept and integrated analysis framework for its applications

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi

    2010-01-01

    GO-FLOW methodology is a success oriented system analysis technique, and is capable of evaluating a large system with complex operational sequences. Recently an integrated analysis framework of the GO-FLOW has been developed for the safety evaluation of elevator systems by the Ministry of Land, Infrastructure, Transport and Tourism, Japanese Government. This paper describes (a) an Overview of the GO-FLOW methodology, (b) Procedure of treating a phased mission problem, (c) Common cause failure analysis, (d) Uncertainty analysis, and (e) Integrated analysis framework. The GO-FLOW methodology is a valuable and useful tool for system reliability analysis and has a wide range of applications. (author)

  15. Structure and sensitivity analysis of individual-based predator–prey models

    International Nuclear Information System (INIS)

    Imron, Muhammad Ali; Gergs, Andre; Berger, Uta

    2012-01-01

    The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A relatively cheap computational cost, referred to as the Morris method, was chosen to assess the relative effects of all parameters on the model’s outputs and to gain insights into predator–prey systems. Structure and results of the sensitivity analysis of the Sumatran tiger model – the Panthera Population Persistence (PPP) and the Notonecta foraging model (NFM) – were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model as well as attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both of the models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in tiger and backswimmer model. ► The two individual-based models (IBM) differ in space formulations. ► In both models foraging distance is among the sensitive parameters. ► Morris method is applicable for the sensitivity analysis even of complex IBMs.
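
    Since the record names the Morris screening method explicitly, a compact sketch may help; the version below is a simplified radial (one-at-a-time) variant rather than the full trajectory design, and the individual-based model is replaced by an invented three-parameter stand-in function.

    ```python
    import numpy as np

    def morris_screening(model, d, r=50, delta=0.1, seed=None):
        """Simplified radial variant of the Morris elementary-effects method:
        r random base points, each input perturbed once by delta.
        Returns mu* (mean absolute effect) and sigma (spread) per input."""
        rng = np.random.default_rng(seed)
        ee = np.empty((r, d))
        for k in range(r):
            x0 = rng.uniform(0, 1 - delta, d)
            f0 = model(x0)
            for i in range(d):
                x1 = x0.copy()
                x1[i] += delta
                ee[k, i] = (model(x1) - f0) / delta
        return np.abs(ee).mean(axis=0), ee.std(axis=0)

    # Invented stand-in for an individual-based model's aggregate output.
    def toy_ibm_output(p):
        growth, hunt_radius, noise_par = p
        return 2.0 * growth + np.sin(4.0 * hunt_radius) + 0.05 * noise_par

    mu_star, sigma = morris_screening(toy_ibm_output, d=3, seed=9)
    print("mu*  :", np.round(mu_star, 2))   # large mu* -> influential parameter
    print("sigma:", np.round(sigma, 2))     # large sigma -> nonlinear or interacting
    ```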

  16. Evolution of Crust- and Core-Dominated Lava Flows Using Scaling Analysis

    Science.gov (United States)

    Castruccio, A.; Rust, A.; Sparks, R. S.

    2010-12-01

    We investigated the front evolution of simple lava flows on a slope using scaling arguments. For the retarding force acting against gravity, we analyzed three different cases: a flow controlled by a Newtonian viscosity, a flow controlled by the yield strength of a diffusively growing crust, and a flow controlled by its core yield strength. These models were tested using previously published data on front evolution and volume discharge for 10 lava flow eruptions from 6 different volcanoes. Our analysis suggests that for basaltic eruptions with high effusion rates and low crystal content (Hawaiian eruptions), the data are best fit with a Newtonian viscosity. For basaltic eruptions with lower effusion rates (Etna eruptions) or long-duration andesitic eruptions (Lonquimay eruption, Chile), the flow is controlled by the yield strength of a growing crust. Finally, for highly crystalline lavas (Colima, Santiaguito) the flow is controlled by its core yield strength. The order of magnitude of the viscosities from our analysis is in the same range as previous studies using field measurements on the same lavas. The yield strength values for the growing crust and the core of the flow are similar, with an order of magnitude of 10^5 Pa. This number is similar to yield strength values found in lava domes by different authors. The consistency of yield strengths at ~10^5 Pa arises because larger stresses cause fracturing of highly crystalline magma, which drastically reduces its effective strength. Furthermore, we used a 2-D analysis of a Bingham fluid flow on a slope to conclude that, for lower yield strength values, the flow is controlled mainly by its plastic viscosity and the lava can be effectively modelled as Newtonian. Our analysis provides a simple tool to evaluate the main controlling forces in the evolution of a lava flow, as well as the magnitude of its rheological properties, for eruptions of different compositions and conditions and may be useful to predict the evolution of

  17. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
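
    The simplest of the smoothing procedures listed above, LOESS, already illustrates why nonparametric smoothing can beat linear or rank regression for non-monotonic responses. The sketch below uses statsmodels' lowess on synthetic data (not the WIPP assessment outputs) and reports the fraction of output variance captured by each smoothed main effect.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(11)
    n = 1000
    x1 = rng.uniform(-1, 1, n)
    x2 = rng.uniform(-1, 1, n)
    # Nonlinear, non-monotonic dependence on x1; weak linear dependence on x2.
    y = np.sin(np.pi * x1) ** 2 + 0.2 * x2 + rng.normal(0, 0.1, n)

    for name, x in (("x1", x1), ("x2", x2)):
        smooth = lowess(y, x, frac=0.3, return_sorted=False)   # LOESS smooth of y on x
        # Fraction of output variance explained by the smoothed main effect.
        print(f"{name}: main-effect variance share ~ {np.var(smooth) / np.var(y):.2f}")
    ```

    Here a rank-regression measure would score x1 near zero despite it driving most of the variance, which is exactly the failure mode the smoothing-based procedures are meant to avoid.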

  18. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    Science.gov (United States)

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  19. Large scale applicability of a Fully Adaptive Non-Intrusive Spectral Projection technique: Sensitivity and uncertainty analysis of a transient

    International Nuclear Information System (INIS)

    Perkó, Zoltán; Lathouwers, Danny; Kloosterman, Jan Leen; Hagen, Tim van der

    2014-01-01

    Highlights: • Grid and basis adaptive Polynomial Chaos techniques are presented for S and U analysis. • Dimensionality reduction and incremental polynomial order reduce computational costs. • An unprotected loss of flow transient is investigated in a Gas Cooled Fast Reactor. • S and U analysis is performed with MC and adaptive PC methods, for 42 input parameters. • PC accurately estimates means, variances, PDFs, sensitivities and uncertainties. - Abstract: Since the early years of reactor physics the most prominent sensitivity and uncertainty (S and U) analysis methods in the nuclear community have been adjoint based techniques. While these are very effective for pure neutronics problems due to the linearity of the transport equation, they become complicated when coupled non-linear systems are involved. With the continuous increase in computational power such complicated multi-physics problems are becoming progressively tractable, hence affordable and easily applicable S and U analysis tools also have to be developed in parallel. For reactor physics problems for which adjoint methods are prohibitive Polynomial Chaos (PC) techniques offer an attractive alternative to traditional random sampling based approaches. At TU Delft such PC methods have been studied for a number of years and this paper presents a large scale application of our Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm for performing the sensitivity and uncertainty analysis of a Gas Cooled Fast Reactor (GFR) Unprotected Loss Of Flow (ULOF) transient. The transient was simulated using the Cathare 2 code system and a fully detailed model of the GFR2400 reactor design that was investigated in the European FP7 GoFastR project. Several sources of uncertainty were taken into account amounting to an unusually high number of stochastic input parameters (42) and numerous output quantities were investigated. The results show consistently good performance of the applied adaptive PC

  20. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    Directory of Open Access Journals (Sweden)

    Harry R. Millwater

    2006-01-01

    Full Text Available A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or surface damage) can be considered. The derivation is developed for Monte Carlo sampling such that the existing failure samples are used and the sensitivities are obtained with minimal additional computational time. Variance estimates and confidence bounds of the sensitivity estimates are developed. The methodology is demonstrated and verified using a multizone probabilistic fatigue analysis of a gas turbine compressor disk analysis considering stress scatter, crack growth propagation scatter, and initial crack size as random variables.
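
    The central computational claim, that sensitivities with respect to distribution parameters can be extracted from the existing Monte Carlo failure samples at negligible extra cost, is the hallmark of score-function (likelihood-ratio) estimators. The sketch below shows that idea on an invented two-variable limit state; it is not the paper's multizone fatigue model, and the derivative formula shown is only the normal-mean case.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    n = 200_000

    # Hypothetical limit state: "fracture" when the log initial crack size
    # exceeds a random resistance threshold.  Both distributions are placeholders.
    mu_a, sigma_a = -2.0, 0.4
    log_crack = rng.normal(mu_a, sigma_a, n)
    threshold = rng.normal(-1.0, 0.3, n)
    fail = (log_crack > threshold).astype(float)   # failure indicator per sample

    p_f = fail.mean()

    # Score-function estimator: dP_f/dmu = E[ 1{fail} * (x - mu) / sigma^2 ],
    # evaluated on the same samples, so no additional model runs are required.
    score_mu = (log_crack - mu_a) / sigma_a**2
    dpf_dmu = np.mean(fail * score_mu)
    se = np.std(fail * score_mu, ddof=1) / np.sqrt(n)   # sampling standard error

    print(f"P_f = {p_f:.4f},  dP_f/dmu = {dpf_dmu:.3f} +/- {1.96 * se:.3f}")
    ```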

  1. Application of sensitivity analysis for optimized piping support design

    International Nuclear Information System (INIS)

    Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.

    1993-01-01

    The objective of this study was to see if recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems which use non-linear supports and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM and a sensitivity analysis was carried out. Then optimization of the design of the piping system supports was investigated, selecting the support location and the yield load of the non-linear supports (bi-linear model) as the main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated that sensitivity factors are useful in the planning stage of support design. (author)

  2. Least squares shadowing sensitivity analysis of a modified Kuramoto–Sivashinsky equation

    International Nuclear Information System (INIS)

    Blonigan, Patrick J.; Wang, Qiqi

    2014-01-01

    Highlights: •Modifying the Kuramoto–Sivashinsky equation and changing its boundary conditions make it an ergodic dynamical system. •The modified Kuramoto–Sivashinsky equation exhibits distinct dynamics for three different ranges of system parameters. •Least squares shadowing sensitivity analysis computes accurate gradients for a wide range of system parameters. - Abstract: Computational methods for sensitivity analysis are invaluable tools for scientists and engineers investigating a wide range of physical phenomena. However, many of these methods fail when applied to chaotic systems, such as the Kuramoto–Sivashinsky (K–S) equation, which models a number of different chaotic systems found in nature. The following paper discusses the application of a new sensitivity analysis method developed by the authors to a modified K–S equation. We find that least squares shadowing sensitivity analysis computes accurate gradients for solutions corresponding to a wide range of system parameters

  3. Sensitivity analysis explains quasi-one-dimensional current transport in two-dimensional materials

    DEFF Research Database (Denmark)

    Boll, Mads; Lotz, Mikkel Rønne; Hansen, Ole

    2014-01-01

    We demonstrate that the quasi-one-dimensional (1D) current transport, experimentally observed in graphene as measured by a collinear four-point probe in two electrode configurations A and B, can be interpreted using the sensitivity functions of the two electrode configurations (configurations A and B represent different pairs of electrodes chosen for current sources and potential measurements). The measured sheet resistance in a four-point probe measurement is averaged over an area determined by the sensitivity function. For a two-dimensional conductor, the sensitivity functions for electrode configurations A and B are different. But when the current is forced to flow through a percolation network, e.g., graphene with high density of extended defects, the two sensitivity functions become identical. This is equivalent to a four-point measurement on a line resistor, hence quasi-1D transport...

  4. Sensitivity technologies for large scale simulation

    International Nuclear Information System (INIS)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
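
    The adjoint idea summarized above is easiest to see on a steady linear system: one extra (adjoint) solve gives the derivative of a scalar quantity of interest with respect to a parameter, and a finite-difference check confirms it. The discretization and parameter below are generic placeholders, not the gas-dynamics or contaminant-transport cases of the report.

    ```python
    import numpy as np

    # Steady 1-D diffusion on n interior nodes: (k * L) u = b, with L the standard
    # second-difference matrix and k a scalar conductivity parameter.
    n = 50
    L_mat = ((np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
              - np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2)
    b = np.ones(n)                       # uniform source term
    c = np.zeros(n)
    c[n // 2] = 1.0                      # quantity of interest: mid-domain solution value

    def solve(k):
        return np.linalg.solve(k * L_mat, b)

    k = 2.0
    u = solve(k)
    J = c @ u

    # Adjoint solve A^T lambda = c, then dJ/dk = -lambda^T (dA/dk) u = -lambda^T L u.
    lam = np.linalg.solve((k * L_mat).T, c)
    dJ_dk_adjoint = -lam @ (L_mat @ u)

    # Finite-difference check of the adjoint gradient.
    eps = 1e-6
    dJ_dk_fd = (c @ solve(k + eps) - J) / eps
    print(dJ_dk_adjoint, dJ_dk_fd)       # the two estimates should agree closely
    ```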

  5. Sensitivity studies of unsaturated groundwater flow modeling for groundwater travel time calculations at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Altman, S.J.; Ho, C.K.; Arnold, B.W.; McKenna, S.A.

    1995-01-01

    Unsaturated flow has been modeled through four cross-sections at Yucca Mountain, Nevada, for the purpose of determining groundwater particle travel times from the potential repository to the water table. This work will be combined with the results of flow modeling in the saturated zone for the purpose of evaluating the suitability of the potential repository under the criteria of 10CFR960. One criterion states, in part, that the groundwater travel time (GWTT) from the repository to the accessible environment must exceed 1,000 years along the fastest path of likely and significant radionuclide travel. Sensitivity analyses have been conducted for one geostatistical realization of one cross-section for the purpose of (1) evaluating the importance of hydrological parameters having some uncertainty and (2) examining conceptual models of flow by altering the numerical implementation of the conceptual model (dual permeability (DK) and equivalent continuum (ECM) models). Results of comparisons of the ECM and DK model are also presented in Ho et al

  6. Phase sensitive spectral domain interferometry for label free biomolecular interaction analysis and biosensing applications

    Science.gov (United States)

    Chirvi, Sajal

    Biomolecular interaction analysis (BIA) plays vital role in wide variety of fields, which include biomedical research, pharmaceutical industry, medical diagnostics, and biotechnology industry. Study and quantification of interactions between natural biomolecules (proteins, enzymes, DNA) and artificially synthesized molecules (drugs) is routinely done using various labeled and label-free BIA techniques. Labeled BIA (Chemiluminescence, Fluorescence, Radioactive) techniques suffer from steric hindrance of labels on interaction site, difficulty of attaching labels to molecules, higher cost and time of assay development. Label free techniques with real time detection capabilities have demonstrated advantages over traditional labeled techniques. The gold standard for label free BIA is surface Plasmon resonance (SPR) that detects and quantifies the changes in refractive index of the ligand-analyte complex molecule with high sensitivity. Although SPR is a highly sensitive BIA technique, it requires custom-made sensor chips and is not well suited for highly multiplexed BIA required in high throughput applications. Moreover implementation of SPR on various biosensing platforms is limited. In this research work spectral domain phase sensitive interferometry (SD-PSI) has been developed for label-free BIA and biosensing applications to address limitations of SPR and other label free techniques. One distinct advantage of SD-PSI compared to other label-free techniques is that it does not require use of custom fabricated biosensor substrates. Laboratory grade, off-the-shelf glass or plastic substrates of suitable thickness with proper surface functionalization are used as biosensor chips. SD-PSI is tested on four separate BIA and biosensing platforms, which include multi-well plate, flow cell, fiber probe with integrated optics and fiber tip biosensor. Sensitivity of 33 ng/ml for anti-IgG is achieved using multi-well platform. Principle of coherence multiplexing for multi

  7. Analysis of flow induced valve operation and pressure wave propagation for single and two-phase flow conditions

    International Nuclear Information System (INIS)

    Nagel, H.

    1986-01-01

    The flow-induced valve operation is calculated for single and two-phase flow conditions by the fluid dynamic computer code DYVRO, and the results are compared to experimental data. The analyses show that the operational behaviour of the valves depends not only on the condition of the induced flow; the pipe flow can also cause feedback as a result of the induced pressure waves. For the calculation of pressure wave propagation in pipes in which the operation of flow-induced valves has a considerable influence, it is therefore necessary to perform a coupled analysis of the pressure wave propagation and the operational behaviour of the valves. The analyses of the fast transient transfer from steam to two-phase flow show good agreement with experimental data. Hence even these very high loads on pipes resulting from such fluid dynamic transients can be calculated realistically. (orig.)

  8. Analysis of bubbly flow using particle image velocimetry

    Energy Technology Data Exchange (ETDEWEB)

    Todd, D.R.; Ortiz-Villafuerte, J.; Schmidl, W.D.; Hassan, Y.A. [Texas A and M University, Nuclear Engineering Dept., College Station, TX (United States); Sanchez-Silva, F. [ESIME, INP (Mexico)

    2001-07-01

    The local phasic velocities can be determined in two-phase flows if the phases can be separated during analysis. The continuous liquid velocity field can be captured using standard Particle Image Velocimetry (PIV) techniques in two-phase flows. PIV is now a well-established, standard flow measurement technique, which provides instantaneous velocity fields in a two-dimensional plane of finite thickness. PIV can be extended to three dimensions within the plane with special considerations. A three-dimensional shadow PIV (SPIV) measurement apparatus can be used to capture the dispersed phase flow parameters such as velocity and interfacial area. The SPIV images contain only the bubble images, and can be easily analyzed and the results used to separate the dispersed phase from the continuous phase in PIV data. An experimental system that combines the traditional PIV technique with SPIV will be described and sample data will be analyzed to demonstrate an advanced turbulence measurement method in a two-phase bubbly flow system. Also, a qualitative error analysis method that allows users to reduce the number of erroneous vectors obtained from the PIV measurements will be discussed. (authors)
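
    The core PIV step described here, extracting a displacement from two interrogation windows by cross-correlation, can be sketched in a few lines. The example below is an integer-pixel, FFT-based toy (no sub-pixel fit, no shadow-PIV phase separation) on a synthetic image pair, so it should be read as an illustration of the principle rather than the authors' measurement system.

    ```python
    import numpy as np

    def piv_displacement(win_a, win_b):
        """Mean particle displacement (dy, dx) between two interrogation windows,
        estimated by FFT cross-correlation with integer-pixel accuracy."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap indices above N/2 back to negative displacements.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

    # Synthetic test: a random "seeded" image shifted by (3, -2) pixels.
    rng = np.random.default_rng(17)
    frame1 = rng.random((32, 32))
    frame2 = np.roll(frame1, (3, -2), axis=(0, 1))
    print(piv_displacement(frame1, frame2))   # expected output: (3, -2)
    ```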

  9. Analysis of bubbly flow using particle image velocimetry

    International Nuclear Information System (INIS)

    Todd, D.R.; Ortiz-Villafuerte, J.; Schmidl, W.D.; Hassan, Y.A.; Sanchez-Silva, F.

    2001-01-01

    The local phasic velocities can be determined in two-phase flows if the phases can be separated during analysis. The continuous liquid velocity field can be captured using standard Particle Image Velocimetry (PIV) techniques in two-phase flows. PIV is now a well-established, standard flow measurement technique, which provides instantaneous velocity fields in a two-dimensional plane of finite thickness. PIV can be extended to three dimensions within the plane with special considerations. A three-dimensional shadow PIV (SPIV) measurement apparatus can be used to capture the dispersed phase flow parameters such as velocity and interfacial area. The SPIV images contain only the bubble images, and can be easily analyzed and the results used to separate the dispersed phase from the continuous phase in PIV data. An experimental system that combines the traditional PIV technique with SPIV will be described and sample data will be analyzed to demonstrate an advanced turbulence measurement method in a two-phase bubbly flow system. Also, a qualitative error analysis method that allows users to reduce the number of erroneous vectors obtained from the PIV measurements will be discussed. (authors)

  10. Sensitivity analysis of the nuclear data for MYRRHA reactor modelling

    International Nuclear Information System (INIS)

    Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan

    2014-01-01

    A global sensitivity analysis of the effective neutron multiplication factor k_eff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of the contributions of individual evaluations to the k_eff sensitivity made it possible to establish a priority list of nuclides for which the uncertainties on nuclear data must be improved. A detailed sensitivity analysis has been performed for two nuclides from this list, 56Fe and 238Pu. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on k_eff, the reaction cross-sections and multiplicities in one evaluation were substituted by the corresponding data from other evaluations. (authors)

  11. Drifting while stepping in place in old adults: Association of self-motion perception with reference frame reliance and ground optic flow sensitivity.

    Science.gov (United States)

    Agathos, Catherine P; Bernardin, Delphine; Baranton, Konogan; Assaiante, Christine; Isableu, Brice

    2017-04-07

    Optic flow provides visual self-motion information and is shown to modulate gait and provoke postural reactions. We have previously reported an increased reliance on the visual, as opposed to the somatosensory-based egocentric, frame of reference (FoR) for spatial orientation with age. In this study, we evaluated FoR reliance for self-motion perception with respect to the ground surface. We examined how the effects of ground optic flow direction on posture may be enhanced by intermittent podal contact with the ground, by reliance on the visual FoR, and by aging. Young, middle-aged and old adults stood quietly (QS) or stepped in place (SIP) for 30 s under static stimulation, approaching and receding optic flow on the ground, and a control condition. We calculated center of pressure (COP) translation, and optic flow sensitivity was defined as the ratio of COP translation velocity to absolute optic flow velocity: the visual self-motion quotient (VSQ). COP translation was more influenced by receding flow during QS and by approaching flow during SIP. In addition, old adults drifted forward while SIP without any imposed visual stimulation. Approaching flow limited this natural drift and receding flow enhanced it, as indicated by the VSQ. The VSQ appears to be a motor index of reliance on the visual FoR during SIP and is associated with greater reliance on the visual and reduced reliance on the egocentric FoR. Exploitation of the egocentric FoR for self-motion perception with respect to the ground surface is compromised by age and associated with greater sensitivity to optic flow. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. OpenFlow Deployment and Concept Analysis

    Directory of Open Access Journals (Sweden)

    Tomas Hegr

    2013-01-01

    Full Text Available Terms such as SDN and OpenFlow (OF) are often used in the research and development of data networks. This paper deals with the analysis of the current state of OpenFlow protocol deployment options, as it is the only real representative protocol that enables the implementation of Software Defined Networking outside the academic world. An insight into the current state of the OpenFlow specification development at various levels is introduced. The possible limitations associated with this concept in conjunction with the latest version (1.3) of the specification published by ONF are also presented. In the conclusion, a demonstrative security application addressing the lack of IPv6 support in real network devices is presented, since most of today's switches and controllers support only OF v1.0.

  13. Understanding consumption-related sucralose emissions - A conceptual approach combining substance-flow analysis with sampling analysis

    Energy Technology Data Exchange (ETDEWEB)

    Neset, Tina-Simone Schmid, E-mail: tina.schmid.neset@liu.se [Department of Water and Environmental Studies, Linkoeping University, SE-58183 Linkoeping (Sweden); Singer, Heinz; Longree, Philipp; Bader, Hans-Peter; Scheidegger, Ruth; Wittmer, Anita; Andersson, Jafet Clas Martin [Eawag, Swiss Federal Institute of Aquatic Science and Technology, Ueberlandstrasse 133, CH-8600 Duebendorf (Switzerland)

    2010-07-15

    This paper explores the potential of combining substance-flow modelling with water and wastewater sampling to trace consumption-related substances emitted through the urban wastewater. The method is exemplified on sucralose. Sucralose is a chemical sweetener that is 600 times sweeter than sucrose and has been on the European market since 2004. As a food additive, sucralose has recently increased in usage in a number of foods, such as soft drinks, dairy products, candy and several dietary products. In a field campaign, sucralose concentrations were measured in the inflow and outflow of the local wastewater treatment plant in Linkoeping, Sweden, as well as upstream and downstream of the receiving stream and in Lake Roxen. This allows the loads emitted from the city to be estimated. A method consisting of solid-phase extraction followed by liquid chromatography and high resolution mass spectrometry was used to quantify the sucralose in the collected surface and wastewater samples. To identify and quantify the sucralose sources, a consumption analysis of households including small business enterprises was conducted as well as an estimation of the emissions from the local food industry. The application of a simple model including uncertainty and sensitivity analysis indicates that at present not one large source but rather several small sources contribute to the load coming from households, small business enterprises and industry. This is in contrast to the consumption pattern seen two years earlier, which was dominated by one product. The inflow to the wastewater treatment plant decreased significantly from other measurements made two years earlier. The study shows that the combination of substance-flow modelling with the analysis of the loads to the receiving waters helps us to understand consumption-related emissions.
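
    At its core, the load estimate described above is a mass balance: measured concentrations multiplied by the corresponding water flows, compared between the plant inflow, the plant outflow and the receiving water. The sketch below illustrates only that arithmetic step; all concentrations and flows in it are invented placeholders, not values from the study.

    ```python
    # Hypothetical mass-balance sketch of a consumption-related substance load,
    # in the spirit of the substance-flow approach described above.
    # All numbers are illustrative, not measurements from the cited study.

    def daily_load_g(conc_ug_per_l: float, flow_m3_per_day: float) -> float:
        """Load in g/day from a concentration in ug/L and a flow in m3/day."""
        # ug/L * m3/day = mg/day; divide by 1000 to convert to g/day.
        return conc_ug_per_l * flow_m3_per_day / 1000.0

    wwtp_inflow  = daily_load_g(conc_ug_per_l=3.0, flow_m3_per_day=50_000)   # into the plant
    wwtp_outflow = daily_load_g(conc_ug_per_l=2.8, flow_m3_per_day=50_000)   # out of the plant
    upstream     = daily_load_g(conc_ug_per_l=0.1, flow_m3_per_day=400_000)  # receiving stream, upstream

    removal = 1.0 - wwtp_outflow / wwtp_inflow
    print(f"inflow load  {wwtp_inflow:6.1f} g/day")
    print(f"outflow load {wwtp_outflow:6.1f} g/day")
    print(f"apparent removal in treatment: {removal:.1%}")
    print(f"background load upstream: {upstream:.1f} g/day")
    ```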

  14. Understanding consumption-related sucralose emissions - A conceptual approach combining substance-flow analysis with sampling analysis

    International Nuclear Information System (INIS)

    Neset, Tina-Simone Schmid; Singer, Heinz; Longree, Philipp; Bader, Hans-Peter; Scheidegger, Ruth; Wittmer, Anita; Andersson, Jafet Clas Martin

    2010-01-01

    This paper explores the potential of combining substance-flow modelling with water and wastewater sampling to trace consumption-related substances emitted through the urban wastewater. The method is exemplified on sucralose. Sucralose is a chemical sweetener that is 600 times sweeter than sucrose and has been on the European market since 2004. As a food additive, sucralose has recently increased in usage in a number of foods, such as soft drinks, dairy products, candy and several dietary products. In a field campaign, sucralose concentrations were measured in the inflow and outflow of the local wastewater treatment plant in Linkoeping, Sweden, as well as upstream and downstream of the receiving stream and in Lake Roxen. This allows the loads emitted from the city to be estimated. A method consisting of solid-phase extraction followed by liquid chromatography and high resolution mass spectrometry was used to quantify the sucralose in the collected surface and wastewater samples. To identify and quantify the sucralose sources, a consumption analysis of households including small business enterprises was conducted as well as an estimation of the emissions from the local food industry. The application of a simple model including uncertainty and sensitivity analysis indicates that at present not one large source but rather several small sources contribute to the load coming from households, small business enterprises and industry. This is in contrast to the consumption pattern seen two years earlier, which was dominated by one product. The inflow to the wastewater treatment plant decreased significantly from other measurements made two years earlier. The study shows that the combination of substance-flow modelling with the analysis of the loads to the receiving waters helps us to understand consumption-related emissions.

  15. An introduction to sensitivity analysis for unobserved confounding in nonexperimental prevention research.

    Science.gov (United States)

    Liu, Weiwei; Kuramoto, S Janet; Stuart, Elizabeth A

    2013-12-01

    Despite the fact that randomization is the gold standard for estimating causal relationships, many questions in prevention science are often left to be answered through nonexperimental studies because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most nonexperimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example, we examine the sensitivity of the association between maternal suicide and offspring's risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall, the association between maternal suicide and offspring's hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscore sensitivity analysis techniques as an important tool for nonexperimental studies. The implementation of sensitivity analysis can help increase confidence in results from nonexperimental studies and better inform prevention researchers and policy makers regarding potential intervention targets.

  16. Fuel bundle impact velocities due to reverse flow

    International Nuclear Information System (INIS)

    Wahba, N.N.; Locke, K.E.

    1996-01-01

    If a break should occur in the inlet feeder or inlet header of a CANDU reactor, the rapid depressurization will cause the channel flow(s) to reverse. Depending on the gap between the upstream bundle and shield plug, the string of bundles will accelerate in the reverse direction and impact with the upstream shield plug. The reverse flow impact velocities have been calculated for various operating states for the Bruce NGS A reactors. The sensitivity to several analysis assumptions has been determined. (author)

  17. Sensitivity analysis of numerical solutions for environmental fluid problems

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Motoyama, Yasunori

    2003-01-01

    In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. Once a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the exact solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)

  18. A mixture theory model of fluid and solute transport in the microvasculature of normal and malignant tissues. II: Factor sensitivity analysis, calibration, and validation.

    Science.gov (United States)

    Schuff, M M; Gore, J P; Nauman, E A

    2013-12-01

    The treatment of cancerous tumors is dependent upon the delivery of therapeutics through the blood by means of the microcirculation. Differences in the vasculature of normal and malignant tissues have been recognized, but it is not fully understood how these differences affect transport and the applicability of existing mathematical models has been questioned at the microscale due to the complex rheology of blood and fluid exchange with the tissue. In addition to determining an appropriate set of governing equations it is necessary to specify appropriate model parameters based on physiological data. To this end, a two stage sensitivity analysis is described which makes it possible to determine the set of parameters most important to the model's calibration. In the first stage, the fluid flow equations are examined and a sensitivity analysis is used to evaluate the importance of 11 different model parameters. Of these, only four substantially influence the intravascular axial flow providing a tractable set that could be calibrated using red blood cell velocity data from the literature. The second stage also utilizes a sensitivity analysis to evaluate the importance of 14 model parameters on extravascular flux. Of these, six exhibit high sensitivity and are integrated into the model calibration using a response surface methodology and experimental intra- and extravascular accumulation data from the literature (Dreher et al. in J Natl Cancer Inst 98(5):335-344, 2006). The model exhibits good agreement with the experimental results for both the mean extravascular concentration and the penetration depth as a function of time for inert dextran over a wide range of molecular weights.

  19. Low-order modelling of shallow water equations for sensitivity analysis using proper orthogonal decomposition

    Science.gov (United States)

    Zokagoa, Jean-Marie; Soulaïmani, Azzeddine

    2012-06-01

    This article presents a reduced-order model (ROM) of the shallow water equations (SWEs) for use in sensitivity analyses and Monte-Carlo type applications. Since, in the real world, some of the physical parameters and initial conditions embedded in free-surface flow problems are difficult to calibrate accurately in practice, the results from numerical hydraulic models are almost always corrupted with uncertainties. The main objective of this work is to derive a ROM that ensures appreciable accuracy and a considerable acceleration in the calculations so that it can be used as a surrogate model for stochastic and sensitivity analyses in real free-surface flow problems. The ROM is derived using the proper orthogonal decomposition (POD) method coupled with Galerkin projections of the SWEs, which are discretised through a finite-volume method. The main difficulty of deriving an efficient ROM is the treatment of the nonlinearities involved in SWEs. Suitable approximations that provide rapid online computations of the nonlinear terms are proposed. The proposed ROM is applied to the simulation of hypothetical flood flows in the Bordeaux breakwater, a portion of the 'Rivière des Prairies' located near Laval (a suburb of Montreal, Quebec). A series of sensitivity analyses are performed by varying the Manning roughness coefficient and the inflow discharge. The results are satisfactorily compared to those obtained by the full-order finite volume model.
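
    As a rough illustration of the snapshot-POD step on which such a ROM is built (and only that step, not the finite-volume/Galerkin treatment of the SWE nonlinearities), the sketch below extracts a reduced basis from synthetic snapshots via the singular value decomposition and checks the reconstruction error; the snapshot data are made up.

    ```python
    # Minimal snapshot-POD sketch: build a reduced basis from solution snapshots
    # and project/reconstruct a state. Synthetic data; not the SWE/Galerkin model
    # of the cited work.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)            # spatial grid
    t = np.linspace(0.0, 1.0, 60)             # snapshot times
    # Synthetic "flow field": two travelling structures plus small noise.
    snapshots = np.array([np.sin(2*np.pi*(x - 0.5*tk)) + 0.3*np.cos(4*np.pi*(x + tk))
                          for tk in t]).T + 0.01*rng.standard_normal((200, 60))

    # POD modes = left singular vectors of the (mean-subtracted) snapshot matrix.
    mean_field = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

    # Keep enough modes to capture 99.9% of the snapshot "energy".
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1
    basis = U[:, :r]
    print(f"retained {r} POD modes out of {len(s)}")

    # Reduced coordinates of a snapshot and its reconstruction error.
    u = snapshots[:, 30:31]
    a = basis.T @ (u - mean_field)            # reduced-order coordinates
    u_rec = mean_field + basis @ a
    print("relative reconstruction error:",
          np.linalg.norm(u - u_rec) / np.linalg.norm(u))
    ```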

  20. The application of sensitivity analysis to models of large scale physiological systems

    Science.gov (United States)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported, as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.

  1. TEMAC, Top Event Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.

    1988-01-01

    1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement
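
    To give a concrete, if simplified, picture of the kind of computation involved, the sketch below evaluates a hypothetical top event given as minimal cut sets, probes the sensitivity to basic-event probabilities, and propagates lognormal uncertainties in a Monte Carlo fashion. The cut sets, probabilities and error factor are invented, and the min-cut upper bound used here is a simplification of TEMAC's general matrix approach.

    ```python
    # Illustrative top-event evaluation from minimal cut sets with Monte Carlo
    # uncertainty propagation. Cut sets and probabilities are made up; this is
    # a simplified stand-in for what TEMAC automates, not its algorithm.
    import numpy as np

    cut_sets = [("A", "B"), ("A", "C"), ("D",)]          # top = A*B + A*C + D
    point_probs = {"A": 1e-2, "B": 5e-3, "C": 2e-3, "D": 1e-4}

    def top_event(p):
        """Min-cut upper bound on the top-event probability (treats cut sets as independent)."""
        prod_not = 1.0
        for cs in cut_sets:
            prod_not *= 1.0 - np.prod([p[e] for e in cs])
        return 1.0 - prod_not

    base = top_event(point_probs)
    print(f"point estimate of top-event probability: {base:.3e}")

    # Simple importance/sensitivity probe: relative change of the top event
    # when one basic-event probability is doubled.
    for e in point_probs:
        bumped = dict(point_probs, **{e: 2.0 * point_probs[e]})
        print(f"doubling P({e}): top event x {top_event(bumped)/base:.2f}")

    # Monte Carlo uncertainty: lognormal uncertainty (error factor 3) on each event.
    rng = np.random.default_rng(1)
    ef, n = 3.0, 20_000
    sigma = np.log(ef) / 1.645
    samples = [top_event({e: p * rng.lognormal(0.0, sigma) for e, p in point_probs.items()})
               for _ in range(n)]
    print("top event 5%/50%/95%:", np.percentile(samples, [5, 50, 95]))
    ```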

  2. Sensitivity and uncertainty analysis applied to a repository in rock salt

    International Nuclear Information System (INIS)

    Polle, A.N.

    1996-12-01

    This document describes the sensitivity and uncertainty analysis with UNCSAM, as applied to a repository in rock salt for the EVEREST project. UNCSAM is a dedicated software package for sensitivity and uncertainty analysis, which was already used within the preceding PROSA project. The use of UNCSAM provides a flexible interface to EMOS ECN by substituting the sampled values in the various input files to be used by EMOS ECN; the model calculations for this repository were performed with the EMOS ECN code. Preceding the sensitivity and uncertainty analysis, a number of preparations have been carried out to provide EMOS ECN with the probabilistic input data. For post-processing the EMOS ECN results, the characteristic output signals were processed. For the sensitivity and uncertainty analysis with UNCSAM, the stochastic input, i.e. the sampled values, and the output of the various EMOS ECN runs have been analyzed. (orig.)

  3. Groundwater flow analysis on local scale. Setting boundary conditions for groundwater flow analysis on site scale model in step 1

    International Nuclear Information System (INIS)

    Ohyama, Takuya; Saegusa, Hiromitsu; Onoe, Hironori

    2005-05-01

    Japan Nuclear Cycle Development Institute has been conducting a wide range of geoscientific research in order to build a foundation for multidisciplinary studies of the deep geological environment as a basis of research and development for geological disposal of nuclear wastes. Ongoing geoscientific research programs include the Regional Hydrogeological Study (RHS) project and the Mizunami Underground Research Laboratory (MIU) project in the Tono region, Gifu Prefecture. The main goal of these projects is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment at several spatial scales. The RHS project is a local-scale study for understanding the groundwater flow system from the recharge area to the discharge area. The Surface-based Investigation Phase of the MIU project is a site-scale study for understanding the groundwater flow system immediately surrounding the MIU construction site. The MIU project is being conducted using a multiphase, iterative approach. In this study, hydrogeological modeling and groundwater flow analysis at the local scale were carried out in order to set boundary conditions for the site-scale model, based on the data obtained from surface-based investigations in Step 1 of the site-scale work of the MIU project. As a result of the study, the head distribution needed to set boundary conditions for groundwater flow analysis of the site-scale model was obtained. (author)

  4. Micronuclei frequency in circulating erythrocytes from rainbow trout (Oncorhynchus mykiss) subjected to radiation, an image analysis and flow cytometric study

    International Nuclear Information System (INIS)

    Schultz, N.; Norrgren, L.; Grawe, J.; Johannisson, A.; Medhage, O.

    1993-01-01

    Rainbow trout (Oncorhynchus mykiss) were exposed to a single X-ray dose of 4 Gy. The frequency of micronuclei in the peripheral erythrocytes was investigated at regular intervals up to 58 days after the exposure. A flow cytometric method and a semi-automatic image analysis method were used to estimate the micronuclei frequency. The results show that both methods can detect an increased frequency of micronuclei in peripheral erythrocytes from exposed fish. However, the semi-automatic image analysis method was the most stable and sensitive. (Author)

  5. A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.

    Science.gov (United States)

    Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P

    2018-04-01

    Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework to automate the GastroPlus graphical user interface with AutoIt and for execution of the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages using the Morris method to screen over 50 parameters for significant factors followed by quantitative assessment of variability using Sobol's sensitivity analysis. The 2-staged approach significantly reduced computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would have otherwise been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures as well as extending the framework to consider a whole-body PBPK model.
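
    A generic two-stage workflow of this kind, Morris screening followed by variance-based Sobol indices, can be sketched with the SALib Python package; the toy response function, parameter names, ranges and screening threshold below are placeholders for the PBPK model, which is not reproduced here (in newer SALib releases the Saltelli sampler has moved to SALib.sample.sobol, but the call shown still works).

    ```python
    # Two-stage global sensitivity sketch with SALib: Morris screening, then
    # Sobol indices on the surviving factors. The test function stands in for
    # the PBPK model; names, ranges and the 0.1 threshold are illustrative only.
    import numpy as np
    from SALib.sample.morris import sample as morris_sample
    from SALib.analyze.morris import analyze as morris_analyze
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    def model(X):
        # Toy nonlinear response with an interaction; x4 has no effect by design.
        x1, x2, x3, x4 = X.T
        return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3 ** 4 * np.sin(x1) + 0.0 * x4

    problem = {"num_vars": 4,
               "names": ["x1", "x2", "x3", "x4"],
               "bounds": [[-np.pi, np.pi]] * 4}

    # Stage 1: Morris elementary effects to screen out unimportant factors.
    X = morris_sample(problem, N=200, num_levels=4)
    Si = morris_analyze(problem, X, model(X), num_levels=4, print_to_console=False)
    keep = [n for n, mu in zip(problem["names"], Si["mu_star"]) if mu > 0.1]
    print("factors kept after Morris screening:", keep)

    # Stage 2: variance-based Sobol indices on the screened problem.
    problem2 = {"num_vars": len(keep), "names": keep,
                "bounds": [[-np.pi, np.pi]] * len(keep)}
    X2 = saltelli.sample(problem2, 1024)
    # Pad dropped factors with a fixed nominal value so the toy model still runs.
    X2_full = np.zeros((X2.shape[0], 4))
    X2_full[:, [problem["names"].index(n) for n in keep]] = X2
    S2 = sobol.analyze(problem2, model(X2_full), print_to_console=False)
    print("first-order:", dict(zip(keep, np.round(S2["S1"], 3))))
    print("total-order:", dict(zip(keep, np.round(S2["ST"], 3))))
    ```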

  6. Sensitivity analysis of dynamic characteristic of the fixture based on design variables

    International Nuclear Information System (INIS)

    Wang Dongsheng; Nong Shaoning; Zhang Sijian; Ren Wanfa

    2002-01-01

    The sensitivity of structural natural frequencies to structural design parameters is investigated. A typical fixture for vibration tests is designed. Using the I-DEAS finite element programs, the sensitivity of its natural frequencies to design parameters is analyzed by the Matrix Perturbation Method. The research results show that sensitivity analysis is a fast and effective dynamic re-analysis method for the dynamic design and parameter modification of complex structures such as fixtures

  7. Justification of investment projects of biogas systems by the sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Perebijnos Vasilij Ivanovich

    2015-06-01

    Full Text Available The methodical features of applying sensitivity analysis to the evaluation of biogas plant investment projects are shown in the article. Risk factors of these investment projects have been studied. A methodical basis for the use of sensitivity analysis and the calculation of elasticity coefficients has been worked out. Sensitivity analyses and elasticity coefficients have been calculated for three biogas plant projects, which differ in the direction of biogas transformation: use in a co-generation plant, application of biomethane as motor fuel, and use of the resulting carbon dioxide as a marketable product. The factors that most strongly affect project efficiency have been revealed.
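
    The elasticity coefficient used in such appraisals is the ratio of the relative change in a project indicator (for example, net present value) to the relative change in an input factor. A minimal sketch with invented cash flows, not the article's biogas data:

    ```python
    # Sensitivity of an investment project's NPV via elasticity coefficients.
    # Cash flows, discount rate and the +10% factor shock are invented.
    def npv(rate, investment, annual_revenue, annual_cost, years):
        return -investment + sum((annual_revenue - annual_cost) / (1 + rate) ** t
                                 for t in range(1, years + 1))

    base = dict(rate=0.10, investment=500_000.0,
                annual_revenue=140_000.0, annual_cost=45_000.0, years=12)
    npv0 = npv(**base)
    print(f"base-case NPV: {npv0:,.0f}")

    # Elasticity = (% change in NPV) / (% change in factor), factor varied by +10%.
    for factor in ("investment", "annual_revenue", "annual_cost", "rate"):
        shocked = dict(base, **{factor: base[factor] * 1.10})
        elasticity = ((npv(**shocked) - npv0) / abs(npv0)) / 0.10
        print(f"{factor:15s} elasticity of NPV: {elasticity:+.2f}")
    ```

    The larger the magnitude of the elasticity, the more strongly the project's efficiency responds to that factor, which is how the influential risk factors are ranked.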

  8. Unraveling the intricate dynamics of planktonic Arctic marine food webs. A sensitivity analysis of a well-documented food web model

    Science.gov (United States)

    Saint-Béat, Blanche; Maps, Frédéric; Babin, Marcel

    2018-01-01

    The extreme and variable environment shapes the functioning of Arctic ecosystems and the life cycles of its species. This delicate balance is now threatened by the unprecedented pace and magnitude of global climate change and anthropogenic pressure. Understanding the long-term consequences of these changes remains an elusive, yet pressing, goal. Our work was specifically aimed at identifying which biological processes impact Arctic planktonic ecosystem functioning, and how. Ecological Network Analysis (ENA) indices reveal emergent ecosystem properties that are not accessible through simple in situ observation. These indices are based on the architecture of carbon flows within food webs. But, despite the recent increase in in situ measurements from Arctic seas, many flow values remain unknown. Linear inverse modeling (LIM) allows missing flow values to be estimated from existing flow observations and, subsequent reconstruction of ecosystem food webs. Through a sensitivity analysis on a LIM model of the Amundsen Gulf in the Canadian Arctic, we were able to determine which processes affected the emergent properties of the planktonic ecosystem. The analysis highlighted the importance of an accurate knowledge of the various processes controlling bacterial production (e.g. bacterial growth efficiency and viral lysis). More importantly, a change in the fate of the microzooplankton within the food web can be monitored through the trophic level of mesozooplankton. It can be used as a "canary in the coal mine" signal, a forewarner of larger ecosystem change.

  9. The analysis of repository-heat-driven hydrothermal flow at Yucca Mountain

    International Nuclear Information System (INIS)

    Buscheck, T.A.; Nitao, J.J.

    1993-01-01

    To safely and permanently store high-level nuclear waste, the potential Yucca Mountain repository site must mitigate the release and transport of radionuclides for tens of thousands of years. In the failure scenario of greatest concern, water would contact the waste package (WP), accelerate its failure rate, and eventually transport radionuclides to the water table. In a concept called the ''extended-dry repository,'' decay heat arising from radioactive waste extends the time before liquid water can contact a WP. Recent modeling and theoretical advances in nonisothermal, multiphase fracture-matrix flow have demonstrated (1) the critical importance of capillary pressure disequilibrium between fracture and matrix flow, and (2) that radioactive decay heat plays a dominant role in the ability of the engineered and natural barriers to contain and isolate radionuclides. Our analyses indicate that the thermo-hydrological performance of both the unsaturated zone (UZ) and saturated zone (SZ) will be dominated by repository-heat-driven hydrothermal flow for tens of thousands of years. For thermal loads resulting in extended-dry repository conditions, UZ performance is primarily sensitive to the thermal properties and thermal loading conditions and much less sensitive to the highly spatially and temporally variable ambient hydrologic properties and conditions. The magnitude of repository-heat-driven buoyancy flow in the SZ is far more dependent on the total mass of emplaced spent nuclear fuel (SNF) than on the details of SNF emplacement, such as the Areal Power Density [(APD) expressed in kW/acre] or SNF age

  10. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    Science.gov (United States)

    Esposito, Gaetano

    Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have significant level of uncertainty due to limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach of reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37--38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and

  11. *Corresponding Author Sensitivity Analysis of a Physiochemical ...

    African Journals Online (AJOL)

    Michael Horsfall

    The numerical method of sensitivity or the principle of parsimony ... analysis is a widely applied numerical method often being used in the .... Chemical Engineering Journal 128(2-3), 85-93. Amod S ... coupled 3-PG and soil organic matter.

  12. Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology

    Science.gov (United States)

    Ratto, M.; Young, P. C.; Romanowicz, R.; Pappenberger, F.; Saltelli, A.; Pagano, A.

    2007-05-01

    In this paper, we discuss a joint approach to calibration and uncertainty estimation for hydrologic systems that combines a top-down, data-based mechanistic (DBM) modelling methodology and a bottom-up, reductionist modelling methodology. The combined approach is applied to the modelling of the River Hodder catchment in North-West England. The top-down DBM model provides a well identified, statistically sound yet physically meaningful description of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. These characteristics are defined inductively from the data without prior assumptions about the model structure, other than it is within the generic class of nonlinear differential-delay equations. The bottom-up modelling is developed using the TOPMODEL, whose structure is assumed a priori and is evaluated by global sensitivity analysis (GSA) in order to specify the most sensitive and important parameters. The subsequent exercises in calibration and validation, performed with Generalized Likelihood Uncertainty Estimation (GLUE), are carried out in the light of the GSA and DBM analyses. This allows for the pre-calibration of the priors used for GLUE, in order to eliminate dynamical features of the TOPMODEL that have little effect on the model output and would be rejected at the structure identification phase of the DBM modelling analysis. In this way, the elements of meaningful subjectivity in the GLUE approach, which allow the modeler to interact in the modelling process by constraining the model to have a specific form prior to calibration, are combined with other more objective, data-based benchmarks for the final uncertainty estimation. GSA plays a major role in building a bridge between the hypothetico-deductive (bottom-up) and inductive (top-down) approaches and helps to improve the

  13. Sensitivity and Nonlinearity of Thermoacoustic Oscillations

    Science.gov (United States)

    Juniper, Matthew P.; Sujith, R. I.

    2018-01-01

    Nine decades of rocket engine and gas turbine development have shown that thermoacoustic oscillations are difficult to predict but can usually be eliminated with relatively small ad hoc design changes. These changes can, however, be ruinously expensive to devise. This review explains why linear and nonlinear thermoacoustic behavior is so sensitive to parameters such as operating point, fuel composition, and injector geometry. It shows how nonperiodic behavior arises in experiments and simulations and discusses how fluctuations in thermoacoustic systems with turbulent reacting flow, which are usually filtered or averaged out as noise, can reveal useful information. Finally, it proposes tools to exploit this sensitivity in the future: adjoint-based sensitivity analysis to optimize passive control designs and complex systems theory to warn of impending thermoacoustic oscillations and to identify the most sensitive elements of a thermoacoustic system.

  14. Sensitivity analysis of LOFT L2-5 test calculations

    International Nuclear Information System (INIS)

    Prosek, Andrej

    2014-01-01

    The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates a large break loss of coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods-Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power etc. The FFTBM-SM was used to assess the influence of the input parameters on the calculated results. The only difference between FFTBM-SM and the original FFTBM is that in the FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so-called edge is the difference between the first and last data point of one period of the signal) in calculating the average amplitude. It is very important to eliminate unphysical contributions to the average amplitude, which is used as a figure of merit for the influence of an input parameter on the output parameters. The idea is to use the reference calculation as the 'experimental signal', the 'sensitivity run' as the 'calculated signal', and the average amplitude as a figure of merit for sensitivity instead of for code accuracy. The larger the average amplitude, the larger the influence of the varied input parameter. The results show that with FFTBM-SM the analyst can get a good picture of the contribution of the parameter variation to the results. They show when the input parameters are influential and how large this influence is. FFTBM-SM could also be used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be
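
    The figure of merit described here can be written as AA = Σ|F(Δ)| / Σ|F(ref)|, where Δ is the difference between the sensitivity-run and reference-run signals and F is the discrete Fourier transform of the mirrored (symmetrized) signal. A rough sketch under those assumptions, on synthetic signals rather than the LOFT L2-5 or BEMUSE data:

    ```python
    # Sketch of an FFTBM-style "average amplitude" figure of merit with signal
    # mirroring. Signals are synthetic and purely illustrative.
    import numpy as np

    def average_amplitude(reference, run):
        """AA = sum|FFT(run - reference)| / sum|FFT(reference)|, on mirrored signals."""
        def mirror(s):
            # Symmetrize the signal to suppress the edge effect (first vs last point).
            return np.concatenate([s, s[::-1]])
        diff = mirror(run - reference)
        ref = mirror(reference)
        return np.sum(np.abs(np.fft.rfft(diff))) / np.sum(np.abs(np.fft.rfft(ref)))

    t = np.linspace(0.0, 50.0, 1000)
    reference = 7.0 * np.exp(-t / 20.0) + 0.5 * np.sin(0.8 * t)     # "reference calculation"
    run_small = reference * 1.02                                     # weakly influential input
    run_large = 7.0 * np.exp(-t / 12.0) + 0.8 * np.sin(0.6 * t)      # strongly influential input

    print("AA, small variation :", round(average_amplitude(reference, run_small), 3))
    print("AA, large variation :", round(average_amplitude(reference, run_large), 3))
    # The larger the average amplitude, the larger the influence of the varied input.
    ```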

  15. Flow injection analysis in inductively coupled plasma spectrometry

    International Nuclear Information System (INIS)

    Rosias, Maria F.G.G.

    1995-10-01

    The main features of flow injection analysis (FIA) as a contribution to inductively coupled plasma (ICP) spectrometry are described. A systematic review of research using combined FIA-ICP and the benefits of this association is presented. Flow systems were proposed to perform on-line ICP solution management for multielemental determination by atomic emission spectrometry (ICP-AES) or mass spectrometry. The inclusion of on-line ion exchangers in flow systems for matrix separation and/or analyte preconcentration is presented. Together with those applications, the advent of new instruments with facilities for multielement detection on flow injection signals is described. (author). 75 refs., 19 figs

  16. Basic models in transitory analysis in biphasic flows

    International Nuclear Information System (INIS)

    Gonzalez S, J.M.

    1992-02-01

    The most studied, and possibly the most complex, two-phase flow is that formed by gas-liquid mixtures. These flows are frequently found inside systems and equipment related to the chemical and petroleum industries and to electric energy generation; within the latter, particularly in the nuclear and geothermal areas, they have motivated the most detailed and complete analyses of two-phase flow behavior. The present report analyzes, within the nuclear reactor area, the emergence of some abnormal operation situations related exclusively to two-phase gas-liquid flow. (Author)

  17. Transient flow analysis of integrated valve opening process

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Xinming; Qin, Benke; Bo, Hanliang, E-mail: bohl@tsinghua.edu.cn; Xu, Xingxing

    2017-03-15

    Highlights: • The control rod hydraulic driving system (CRHDS) is a new type of built-in control rod drive technology and the integrated valve (IV) is the key control component. • The transient flow experiment induced by the IV is conducted and the test results are analyzed to obtain its working mechanism. • The theoretical model of the IV opening process is established and applied to obtain the variation of the transient flow characteristic parameters. - Abstract: The control rod hydraulic driving system (CRHDS) is a new type of built-in control rod drive technology and the IV is the key control component. The working principle of the integrated valve (IV) is analyzed and the IV hydraulic experiment is conducted. There is a transient flow phenomenon in the valve opening process. The theoretical model of the IV opening process is established from the loop system control equations and boundary conditions. The valve opening boundary condition equation is established based on the IV three-dimensional flow field analysis results and the dynamic analysis of the valve core movement. The model calculation results are in good agreement with the experimental results. On this basis, the model is used to analyze the transient flow under high temperature conditions. The peak pressure head is consistent with the one under room temperature and the pressure fluctuation period is longer than the one under room temperature. Furthermore, the variation of the pressure transients with the fluid and loop structure parameters is analyzed. The peak pressure increases with the flow rate and the peak pressure decreases with the increase of the valve opening time. The pressure fluctuation period increases with the loop pipe length and the fluctuation amplitude remains largely unchanged under different equilibrium pressure conditions. The research results lay the basis for the vibration reduction analysis of the CRHDS.

  18. Importance measures in global sensitivity analysis of nonlinear models

    International Nuclear Information System (INIS)

    Homma, Toshimitsu; Saltelli, Andrea

    1996-01-01

    The present paper deals with a new method of global sensitivity analysis of nonlinear models. This is based on a measure of importance to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. More emphasis is given to the developments of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol's treatment of the measure of importance is the most general, his formalism is employed throughout this paper, where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is flexible, accurate and informative, and which can be applied at a reasonable computational cost
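
    The first-order and total-effect indices discussed above can be estimated with standard pick-and-freeze Monte Carlo formulas; the sketch below uses the Ishigami test function and the later Saltelli/Jansen estimators, neither of which is taken from the paper itself.

    ```python
    # Monte Carlo estimation of Sobol' first-order and total-effect indices with
    # pick-and-freeze (Saltelli/Jansen) estimators. The Ishigami function is a
    # stand-in model, not one from the cited paper.
    import numpy as np

    def ishigami(X, a=7.0, b=0.1):
        x1, x2, x3 = X.T
        return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

    rng = np.random.default_rng(42)
    N, d = 50_000, 3
    A = rng.uniform(-np.pi, np.pi, size=(N, d))
    B = rng.uniform(-np.pi, np.pi, size=(N, d))
    fA, fB = ishigami(A), ishigami(B)
    V = np.var(np.concatenate([fA, fB]))          # total output variance

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # A with column i taken from B
        fABi = ishigami(ABi)
        S_first = np.mean(fB * (fABi - fA)) / V           # Saltelli (2010) estimator
        S_total = 0.5 * np.mean((fA - fABi) ** 2) / V     # Jansen estimator
        print(f"x{i+1}: first-order ~ {S_first:5.2f}, total effect ~ {S_total:5.2f}")
    # Analytical values for comparison: S1~0.31, S2~0.44, S3~0.00;
    # ST1~0.56, ST2~0.44, ST3~0.24 (total effects include the interaction terms).
    ```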

  19. Improving left ventricular segmentation in four-dimensional flow MRI using intramodality image registration for cardiac blood flow analysis.

    Science.gov (United States)

    Gupta, Vikas; Bustamante, Mariana; Fredriksson, Alexandru; Carlhäll, Carl-Johan; Ebbers, Tino

    2018-01-01

    Assessment of blood flow in the left ventricle using four-dimensional flow MRI requires accurate left ventricle segmentation that is often hampered by the low contrast between blood and the myocardium. The purpose of this work is to improve left-ventricular segmentation in four-dimensional flow MRI for reliable blood flow analysis. The left ventricle segmentations are first obtained using morphological cine-MRI with better in-plane resolution and contrast, and then aligned to four-dimensional flow MRI data. This alignment is, however, not trivial due to inter-slice misalignment errors caused by patient motion and respiratory drift during breath-hold based cine-MRI acquisition. A robust image registration based framework is proposed to mitigate such errors automatically. Data from 20 subjects, including healthy volunteers and patients, was used to evaluate its geometric accuracy and impact on blood flow analysis. High spatial correspondence was observed between manually and automatically aligned segmentations, and the improvements in alignment compared to uncorrected segmentations were significant (P < 0.05). Our results demonstrate the efficacy of the proposed approach in improving left-ventricular segmentation in four-dimensional flow MRI, and its potential for reliable blood flow analysis. Magn Reson Med 79:554-560, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
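
    Once the simulated inputs and outputs are rank-transformed, the PRCC and SRRC calculations are straightforward; a minimal sketch on synthetic data follows, where the "model" merely stands in for IMM outputs such as QTL or EVAC.

    ```python
    # Partial Rank Correlation Coefficients (PRCC) and Standardized Rank
    # Regression Coefficients (SRRC) on synthetic data; the monotone nonlinear
    # response below is only a placeholder for IMM outputs.
    import numpy as np
    from scipy.stats import rankdata

    rng = np.random.default_rng(7)
    n = 2_000
    X = rng.uniform(size=(n, 3))                           # three uncertain inputs
    y = np.exp(2.0 * X[:, 0]) + 5.0 * X[:, 1] + 0.1 * rng.standard_normal(n)  # x3 irrelevant

    Rx = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
    Ry = rankdata(y)

    def residuals(target, predictors):
        """Residuals of a least-squares fit of target on predictors (plus intercept)."""
        Z = np.column_stack([np.ones(len(target)), predictors])
        beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
        return target - Z @ beta

    # PRCC: correlation between the parts of rank(X_i) and rank(Y) not explained
    # by the other ranked inputs.
    for j in range(X.shape[1]):
        others = np.delete(Rx, j, axis=1)
        prcc = np.corrcoef(residuals(Rx[:, j], others), residuals(Ry, others))[0, 1]
        print(f"PRCC x{j+1}: {prcc:+.2f}")

    # SRRC: coefficients of a regression of rank(Y) on all ranked inputs,
    # scaled by the input/output rank standard deviations.
    Z = np.column_stack([np.ones(n), Rx])
    beta, *_ = np.linalg.lstsq(Z, Ry, rcond=None)
    srrc = beta[1:] * Rx.std(axis=0) / Ry.std()
    print("SRRC:", np.round(srrc, 2))
    ```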

  1. Adjoint sensitivity analysis of high frequency structures with Matlab

    CERN Document Server

    Bakr, Mohamed; Demir, Veysel

    2017-01-01

    This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.

  2. System reliability assessment via sensitivity analysis in the Markov chain scheme

    International Nuclear Information System (INIS)

    Gandini, A.

    1988-01-01

    Methods for reliability sensitivity analysis in the Markov chain scheme are presented, together with a new formulation which makes use of Generalized Perturbation Theory (GPT) methods. As is well known, sensitivity methods are fundamental in system risk analysis, since they allow important components to be identified, so as to assist the analyst in finding weaknesses in design and operation and in suggesting optimal modifications for system upgrade. The relationship between the GPT sensitivity expression and the Birnbaum importance is also given [fr]

  3. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs

  4. Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis

    DEFF Research Database (Denmark)

    Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen

    2017-01-01

    Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol's), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult (...) in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers to focus their attention on the most important design parameters when exploring (...)

  5. Introduction of thermal-hydraulic analysis code and system analysis code for HTGR

    International Nuclear Information System (INIS)

    Tanaka, Mitsuhiro; Izaki, Makoto; Koike, Hiroyuki; Tokumitsu, Masashi

    1984-01-01

    Kawasaki Heavy Industries Ltd. has advanced the development and systematization of analysis codes, aiming at lining up analysis codes for heat-transfer flow and control characteristics, with HTGR plants as the main object. In order to model the flow when shock waves propagate into heating tubes, SALE-3D, which can analyze a complex system, was developed; it is therefore reported in this paper. Concerning the analysis code for control characteristics, the method of sensitivity analysis in a topological space, including an example of application, is reported. The flow analysis code SALE-3D is one for analyzing the flow of a compressible viscous fluid in a three-dimensional system over the velocity range from the incompressibility limit to supersonic velocity. The fundamental equations and fundamental algorithm of SALE-3D, the calculation of cell volume, the plotting of perspective drawings and the analysis of the three-dimensional behavior of shock waves propagating in heating tubes after a rupture accident are described. The method of sensitivity analysis was added to the analysis code for control characteristics in a topological space, and blow-down phenomena were analyzed by its application. (Kako, I.)

  6. Sensitivity analysis of reactive ecological dynamics.

    Science.gov (United States)

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
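
    For a linearized system dx/dt = Ax, reactivity is the largest eigenvalue of the symmetric part (A + Aᵀ)/2 and the amplification envelope is the matrix 2-norm of e^{At}; the sketch below computes both, plus the optimal perturbation, for a small invented matrix rather than the predator-prey or food web models of the paper.

    ```python
    # Reactivity and amplification envelope of a linear system dx/dt = A x.
    # The 2x2 "community matrix" is invented for illustration.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-1.0, 4.0],
                  [ 0.0, -2.0]])          # stable (eigenvalues -1, -2) but non-normal

    # Reactivity: maximum instantaneous growth rate of ||x||, i.e. the largest
    # eigenvalue of the symmetric part of A. Positive => the system is reactive.
    reactivity = np.max(np.linalg.eigvalsh(0.5 * (A + A.T)))
    print(f"reactivity: {reactivity:.3f}")

    # Amplification envelope rho(t) = ||exp(A t)||_2: the worst-case amplification
    # of a unit perturbation at time t over all perturbation directions.
    times = np.linspace(0.0, 3.0, 61)
    rho = np.array([np.linalg.norm(expm(A * t), 2) for t in times])
    t_max = times[np.argmax(rho)]
    print(f"peak amplification {rho.max():.2f} at t = {t_max:.2f}")

    # The optimal (most amplified) initial perturbation at t_max is the leading
    # right singular vector of exp(A t_max).
    _, _, Vt = np.linalg.svd(expm(A * t_max))
    print("optimal perturbation direction:", np.round(Vt[0], 3))
    ```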

  7. Gas-water two-phase flow characterization with Electrical Resistance Tomography and Multivariate Multiscale Entropy analysis.

    Science.gov (United States)

    Tan, Chao; Zhao, Jia; Dong, Feng

    2015-03-01

    Flow behavior characterization is important to understand gas-liquid two-phase flow mechanics and further establish its description model. Electrical Resistance Tomography (ERT) provides information regarding flow conditions in the different directions where the sensing electrodes are implemented. We extracted the multivariate sample entropy (MSampEn) by treating ERT data as a multivariate time series. The dynamic experimental results indicate that the MSampEn is sensitive to complexity changes of flow patterns including bubbly flow, stratified flow, plug flow and slug flow. MSampEn can characterize the flow behavior in different directions of two-phase flow, and reveal the transition between flow patterns when the flow velocity changes. The proposed method is effective for analyzing two-phase flow pattern transitions by incorporating information from different scales and different spatial directions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
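
    To give a flavour of the entropy measure, the sketch below implements ordinary univariate sample entropy, the building block that MSampEn generalizes to multivariate, multiscale data; it is not the authors' implementation, and the embedding dimension and tolerance used are conventional defaults.

    ```python
    # Univariate sample entropy, the building block generalized by the
    # multivariate sample entropy (MSampEn) used above. Synthetic signals only.
    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """SampEn(m, r) with Chebyshev distance and tolerance r = r_factor * std(x)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()
        n = len(x)

        def count_matches(m):
            # Embed the series into overlapping templates of length m and count
            # pairs of templates closer than r (self-matches excluded by slicing).
            templates = np.array([x[i:i + m] for i in range(n - m)])
            count = 0
            for i in range(len(templates)):
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(dist < r)
            return count

        B, A = count_matches(m), count_matches(m + 1)
        return np.inf if A == 0 or B == 0 else -np.log(A / B)

    rng = np.random.default_rng(0)
    t = np.arange(2000)
    regular = np.sin(0.1 * t)                        # low complexity
    irregular = rng.standard_normal(2000)            # high complexity
    print("SampEn regular  :", round(sample_entropy(regular), 3))
    print("SampEn irregular:", round(sample_entropy(irregular), 3))
    ```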

  8. A Lateral Flow Protein Microarray for Rapid and Sensitive Antibody Assays

    Directory of Open Access Journals (Sweden)

    Helene Andersson-Svahn

    2011-11-01

    Full Text Available Protein microarrays are useful tools for highly multiplexed determination of the presence or levels of clinically relevant biomarkers in human tissues and biofluids. However, such tools have thus far been restricted to laboratory environments. Here, we present a novel 384-plexed, easy to use lateral flow protein microarray device capable of sensitive (<30 ng/mL) determination of antigen-specific antibodies in ten minutes of total assay time. Results were developed with gold nanobeads and could be recorded by a cell-phone camera or table top scanner. Excellent accuracy with an area under curve (AUC) of 98% was achieved in comparison with an established glass microarray assay for 26 antigen-specific antibodies. We propose that the presented framework could find use in convenient and cost-efficient quality control of antibody production, as well as in providing a platform for multiplexed affinity-based assays in low-resource or mobile settings.

  9. First status report on regional ground-water flow modeling for the Paradox Basin, Utah

    International Nuclear Information System (INIS)

    Andrews, R.W.

    1984-05-01

    Regional ground-water flow within the principal hydrogeologic units of the Paradox Basin is evaluated by developing a conceptual model of the flow regime in the shallow aquifers and the deep-basin brine aquifers and testing these models using a three-dimensional, finite-difference flow code. Semiquantitative sensitivity analysis (a limited parametric study) is conducted to define the system response to changes in hydrologic properties or boundary conditions. A direct method for sensitivity analysis using an adjoint form of the flow equation is applied to the conceptualized flow regime in the Leadville limestone aquifer. All steps leading to the final results and conclusions are incorporated in this report. The available data utilized in this study is summarized. The specific conceptual models, defining the areal and vertical averaging of lithologic units, aquifer properties, fluid properties, and hydrologic boundary conditions, are described in detail. Two models were evaluated in this study: a regional model encompassing the hydrogeologic units above and below the Paradox Formation/Hermosa Group and a refined scale model which incorporated only the post Paradox strata. The results are delineated by the simulated potentiometric surfaces and tables summarizing areal and vertical boundary fluxes, Darcy velocities at specific points, and ground-water travel paths. Results from the adjoint sensitivity analysis include importance functions and sensitivity coefficients, using heads or the average Darcy velocities to represent system response. The reported work is the first stage of an ongoing evaluation of the Gibson Dome area within the Paradox Basin as a potential repository for high-level radioactive wastes

  10. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions and obtain result probability distributions. The methods are applicable to low-level radioactive waste disposal system performance assessment.

  11. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    NARCIS (Netherlands)

    R.A. Zuidwijk (Rob)

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an

  12. Sensitivity Analysis Based on Markovian Integration by Parts Formula

    Directory of Open Access Journals (Sweden)

    Yongsheng Hang

    2017-10-01

    Full Text Available Sensitivity analysis is widely applied in financial risk management and engineering; it describes the variations brought about by changes in parameters. Since the integration by parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivities and show closed-form expressions for two commonly used continuous-time Markovian models. By comparison, we conclude that our approach outperforms the existing technique for computing sensitivities of Markovian models.

  13. Chemosensitivity of human small cell carcinoma of the lung detected by flow cytometric DNA analysis of drug-induced cell cycle perturbations in vitro

    DEFF Research Database (Denmark)

    Engelholm, S A; Spang-Thomsen, M; Vindeløv, L L

    1986-01-01

    A method based on detection of drug-induced cell cycle perturbation by flow cytometric DNA analysis has previously been described in Ehrlich ascites tumors as a way to estimate chemosensitivity. The method is extended to test human small-cell carcinoma of the lung. Three tumors with different...... sensitivities to melphalan in nude mice were used. Tumors were disaggregated by a combined mechanical and enzymatic method and thereafter incubated with different doses of melphalan. After incubation the cells were plated in vitro on agar, and drug-induced cell cycle changes were monitored by flow...

  14. Parameter uncertainty effects on variance-based sensitivity analysis

    International Nuclear Information System (INIS)

    Yu, W.; Harris, T.J.

    2009-01-01

    In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be used.
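
    The grouping of inputs into regressive variables and model parameters can be illustrated with a generic pick-freeze Monte Carlo estimator of grouped first-order variance contributions; the toy model, the uniform input distributions and the sample size below are assumptions, not the authors' formulation.

      import numpy as np

      rng = np.random.default_rng(1)

      def f(x):
          """Toy model: two 'regressive' variables x0, x1 and two 'parameters' x2, x3."""
          return x[:, 0] + 2.0 * x[:, 1] + x[:, 2] * x[:, 3] + 0.5 * x[:, 2]

      def grouped_first_order_index(model, group, n=200_000, k=4):
          """Pick-freeze estimate of the first-order variance index of a group of inputs."""
          a = rng.uniform(size=(n, k))   # assumed independent U(0,1) inputs
          b = rng.uniform(size=(n, k))
          ab = a.copy()
          ab[:, group] = b[:, group]     # 'freeze' the group columns at B's values
          ya, yb, yab = model(a), model(b), model(ab)
          var_y = np.var(np.concatenate([ya, yb]))
          return np.mean(yb * (yab - ya)) / var_y

      s_regressive = grouped_first_order_index(f, group=[0, 1])
      s_parameters = grouped_first_order_index(f, group=[2, 3])
      print(f"S_regressive ~ {s_regressive:.3f},  S_parameters ~ {s_parameters:.3f}")

    For this toy model the two grouped indices come out near 0.79 and 0.21 and sum to one, since the two groups do not interact.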

  15. Sensitivity analysis for decision-making using the MORE method-A Pareto approach

    International Nuclear Information System (INIS)

    Ravalico, Jakin K.; Maier, Holger R.; Dandy, Graeme C.

    2009-01-01

    Integrated Assessment Modelling (IAM) incorporates knowledge from different disciplines to provide an overarching assessment of the impact of different management decisions. The complex nature of these models, which often include non-linearities and feedback loops, requires special attention for sensitivity analysis. This is especially true when the models are used to form the basis of management decisions, where it is important to assess how sensitive the decisions being made are to changes in model parameters. This research proposes an extension to the Management Option Rank Equivalence (MORE) method of sensitivity analysis, a method developed specifically for use in IAM and decision-making. The extension proposes using a multi-objective Pareto optimal search to locate the minimum combined parameter changes that result in a change in the preferred management option. It is demonstrated through a case study of the Namoi River, where results show that the extension to MORE is able to provide sensitivity information for individual parameters that takes into account simultaneous variations in all parameters. Furthermore, the increased sensitivities to individual parameters that are discovered when joint parameter variation is taken into account show the importance of ensuring that any sensitivity analysis accounts for these changes.

  16. Global Qualitative Flow-Path Modeling for Local State Determination in Simulation and Analysis

    Science.gov (United States)

    Malin, Jane T. (Inventor); Fleming, Land D. (Inventor)

    1998-01-01

    For qualitative modeling and analysis, a general qualitative abstraction of power transmission variables (flow and effort) for elements of flow paths is discussed; it includes information on resistance, net flow, permissible directions of flow, and qualitative potential. Each type of component model has flow-related variables and an associated internal flow map, connected into an overall flow network of the system. For storage devices, the implicit power transfer to the environment is represented by "virtual" circuits that include an environmental junction. A heterogeneous aggregation method simplifies the path structure. A method determines global flow-path changes during dynamic simulation and analysis, and identifies corresponding local flow state changes that are effects of global configuration changes. Flow-path determination is triggered by any change in a flow-related device variable in a simulation or analysis. Components (path elements) that may be affected are identified, and flow-related attributes favoring flow in the two possible directions are collected for each of them. Next, flow-related attributes are determined for each affected path element, based on possibly conflicting indications of flow direction. Spurious qualitative ambiguities are minimized by using relative magnitudes and permissible directions of flow, and by favoring flow sources over effort sources when comparing flow tendencies. The results are output to local flow states of affected components.

  17. Typing Local Control and State Using Flow Analysis

    Science.gov (United States)

    Guha, Arjun; Saftoiu, Claudiu; Krishnamurthi, Shriram

    Programs written in scripting languages employ idioms that confound conventional type systems. In this paper, we highlight one important set of related idioms: the use of local control and state to reason informally about types. To address these idioms, we formalize run-time tags and their relationship to types, and use these to present a novel strategy to integrate typing with flow analysis in a modular way. We demonstrate that in our separation of typing and flow analysis, each component remains conventional, their composition is simple, but the result can handle these idioms better than either one alone.

  18. The iFlow modelling framework v2.4: a modular idealized process-based model for flow and transport in estuaries

    Science.gov (United States)

    Dijkstra, Yoeri M.; Brouwer, Ronald L.; Schuttelaars, Henk M.; Schramkowski, George P.

    2017-07-01

    The iFlow modelling framework is a width-averaged model for the systematic analysis of the water motion and sediment transport processes in estuaries and tidal rivers. The distinctive solution method, a mathematical perturbation method, used in the model allows for identification of the effect of individual physical processes on the water motion and sediment transport and study of the sensitivity of these processes to model parameters. This distinction between processes provides a unique tool for interpreting and explaining hydrodynamic interactions and sediment trapping. iFlow also includes a large number of options to configure the model geometry and multiple choices of turbulence and salinity models. Additionally, the model contains auxiliary components, including one that facilitates easy and fast sensitivity studies. iFlow has a modular structure, which makes it easy to include, exclude or change individual model components, called modules. Depending on the required functionality for the application at hand, modules can be selected to construct anything from very simple quasi-linear models to rather complex models involving multiple non-linear interactions. This way, the model complexity can be adjusted to the application. Once the modules containing the required functionality are selected, the underlying model structure automatically ensures modules are called in the correct order. The model inserts iteration loops over groups of modules that are mutually dependent. iFlow also ensures a smooth coupling of modules using analytical and numerical solution methods. This way the model combines the speed and accuracy of analytical solutions with the versatility of numerical solution methods. In this paper we present the modular structure, solution method and two examples of the use of iFlow. In the examples we present two case studies, of the Yangtze and Scheldt rivers, demonstrating how iFlow facilitates the analysis of model results, the understanding of the

  19. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Science.gov (United States)

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...

  20. Sensitivity Analysis of Centralized Dynamic Cell Selection

    DEFF Research Database (Denmark)

    Lopez, Victor Fernandez; Alvarez, Beatriz Soret; Pedersen, Klaus I.

    2016-01-01

    and a suboptimal optimization algorithm that nearly achieves the performance of the optimal Hungarian assignment. Moreover, an exhaustive sensitivity analysis with different network and traffic configurations is carried out in order to understand what conditions are more appropriate for the use of the proposed...

  1. Applications of advances in nonlinear sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Werbos, P J

    1982-01-01

    The following paper summarizes the major properties and applications of a collection of algorithms involving differentiation and optimization at minimum cost. The areas of application include the sensitivity analysis of models, new work in statistical or econometric estimation, optimization, artificial intelligence and neuron modelling.

  2. Substance Flow Analysis of Wastes Containing Polybrominated Diphenyl Ethers

    DEFF Research Database (Denmark)

    Vyzinkarova, Dana; Brunner, Paul H.

    2013-01-01

    The present article examines flows and stocks of Stockholm Convention regulated pollutants, commercial penta- and octabrominated diphenyl ether (cPentaBDE, cOctaBDE), on a city level. The goals are to (1) identify sources, pathways, and sinks of these compounds in the city of Vienna, (2) determine the fractions that reach final sinks, and (3) develop recommendations for waste management to ensure their minimum recycling and maximum transfer to appropriate final sinks. By means of substance flow analysis (SFA) and scenario analysis, it was found that the key flows of cPentaBDE stem from construction materials. Therefore, end-of-life (EOL) plastic materials used for construction must be separated and properly treated, for example, in a state-of-the-art municipal solid waste (MSW) incinerator. In the case of cOctaBDE, the main flows are waste electrical and electronic equipment (WEEE) and, possibly......

  3. Annular dispersed flow analysis model by Lagrangian method and liquid film cell method

    International Nuclear Information System (INIS)

    Matsuura, K.; Kuchinishi, M.; Kataoka, I.; Serizawa, A.

    2003-01-01

    A new annular dispersed flow analysis model was developed. In this model, both droplet behavior and liquid film behavior are analyzed simultaneously. Droplet behavior in turbulent flow was analyzed by the Lagrangian method with a refined stochastic model. Liquid film behavior, on the other hand, was simulated by the boundary condition of a moving rough wall and a liquid film cell model, which was used to estimate the liquid film flow rate. The height of the moving rough wall was estimated from a disturbance wave height correlation. In each liquid film cell, the liquid film flow rate was calculated by considering the droplet deposition and entrainment flow rates. The droplet deposition flow rate was calculated by the Lagrangian method and the entrainment flow rate by an entrainment correlation. For the verification of the moving rough wall model, turbulent flow analysis results under the annular flow condition were compared with experimental data, and the agreement was fairly good. Furthermore, annular dispersed flow experiments were analyzed in order to verify the droplet behavior model and the liquid film cell model. The experimental results for the radial distribution of droplet mass flux were compared with the analysis results. The agreement was good under low liquid flow rate conditions and poor under high liquid flow rate conditions, but by modifying the entrainment rate correlation the agreement became good even at high liquid flow rates. This means that the basic analysis method for droplet and liquid film behavior was sound. In future work, verification calculations should be carried out under different experimental conditions, and the entrainment rate correlation should also be corrected.

  4. A Calculus for Control Flow Analysis of Security Protocols

    DEFF Research Database (Denmark)

    Buchholtz, Mikael; Nielson, Hanne Riis; Nielson, Flemming

    2004-01-01

    The design of a process calculus for analysing security protocols is governed by three factors: how to express the security protocol in a precise and faithful manner, how to accommodate the variety of attack scenarios, and how to utilise the strengths (and limit the weaknesses) of the underlying...... analysis methodology. We pursue an analysis methodology based on control flow analysis in flow logic style, and we have previously shown its ability to analyse a variety of security protocols. This paper develops a calculus, LysaNS, that allows for much greater control and clarity in the description......

  5. Interfacial area transport in a confined Bubbly flow

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S.; Sun, X.; Ishii, M. [Purdue Univ., Lafayette, IN (United States). School of Nuclear Engineering; Lincoln, F. [Bettis Atomic Power Lab., West Mifflin, Bechtel Bettis, Inc., PA (United States)

    2001-07-01

    The interfacial area transport equation applicable to the bubbly flow is presented. The model is evaluated against the data acquired in an adiabatic air-water upward two-phase flow loop with a test section of 20 cm in width and 1 cm in gap. In general, a good agreement, within the measurement error of ±10%, is observed for a wide range in the bubbly flow regime. The sensitivity analysis on the individual particle interaction mechanisms demonstrates the active interactions between the bubbles and highlights the mechanisms playing the dominant role in interfacial area transport. (author)

  6. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners on conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Swirl flow analysis in a helical wire inserted tube using CFD code

    International Nuclear Information System (INIS)

    Park, Yusun; Chang, Soon Heung

    2010-01-01

    An analysis of the two-phase flow in a helical wire inserted tube using the commercial CFD code CFX11.0 was performed in the bubbly flow and annular flow regions. The analysis method was validated with the experimental results of Takeshima. After validation, bubbly and annular flows in a 10 mm inner diameter tube with varying pitch lengths and inserted wire diameters were simulated using the same analysis methods. The geometry range of p/D was 1-4 and e/D was 0.08-0.12. The results show that an inserted wire with a larger diameter increased swirl flow generation. Increasing swirl flow was also seen as the pitch length increased. Regarding pressure loss, smaller pitch lengths and inserted wires with larger diameters resulted in larger pressure loss. The average liquid film thickness increased as the pitch length and the diameter of the inserted wire increased in the annular flow region. In both the bubbly flow and annular flow regions, the effect of pitch length on swirl flow generation and pressure loss was more significant than that of the inserted wire diameter. Pitch length is therefore a more dominant factor than inserted wire diameter for the design of the swirl flow generator in small diameter tubes.

  8. Coupling Analysis of Low-Speed Multiphase Flow and High-Frequency Electromagnetic Field in a Complex Pipeline Structure

    Directory of Open Access Journals (Sweden)

    Xiaokai Huo

    2014-01-01

    Full Text Available Accurate estimation of water content in an oil-water mixture is a key technology in oil exploration and production. Based on the principles of the microwave transmission line (MTL), the logging probe is an important water content measuring apparatus. However, the effects of mixed fluid flow on the measurement of electromagnetic field parameters are rarely considered. This study presents the coupling model for low-speed multiphase flow and high-frequency electromagnetic field in a complex pipeline structure. We derived the S-parameter equations for the stratified oil/water flow model. The corresponding relationship between the S-parameters and water holdup is established. Evident coupling effects of the fluid flow and the electromagnetic field are confirmed by comparing the calculated S-parameters for both stratified and homogeneous flow patterns. In addition, a multiple-solution problem is analyzed for the inversion of dielectric constant from the S-parameters. The most sensitive phase angle range is determined to improve the detection of variation in the dielectric constant. Suggestions are proposed based on the influence of the oil/water layer on measurement sensitivity to optimize the geometric parameters of a device structure. The method proposed elucidates how accuracy and sensitivity can be improved in water holdup measurements under high water content conditions.

  9. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Science.gov (United States)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  10. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Geraci, Gianluca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vane, Zachary P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lacaze, Guilhem [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Oefelein, Joseph C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  11. An overview of the design and analysis of simulation experiments for sensitivity analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs

  12. Rotating permanent magnet excitation for blood flow measurement.

    Science.gov (United States)

    Nair, Sarath S; Vinodkumar, V; Sreedevi, V; Nagesh, D S

    2015-11-01

    A compact, portable and improved blood flow measurement system for an extracorporeal circuit, based on a rotating permanent-magnet excitation scheme, is described in this paper. The system consists of a set of permanent magnets rotating near blood or any conductive fluid to create a high-intensity alternating magnetic field in it, inducing a sinusoidally varying voltage across the column of fluid. The induced voltage signal is acquired, conditioned and processed to determine the flow rate. Performance analysis shows that a sensitivity of more than 250 mV/lpm can be obtained, which is more than five times higher than that of conventional flow measurement systems. The choice of a rotating permanent magnet instead of an electromagnetic core generates an alternating magnetic field of smooth sinusoidal shape, which in turn reduces switching and interference noise. This greatly reduces the complexity of the electronic circuitry required for processing the signal and makes the flow-measuring device far less costly, portable and lightweight. The signal remains steady even with changes in environmental conditions and has an accuracy of greater than 95%. This paper also describes the construction details of the prototype, the factors affecting sensitivity and a detailed performance analysis at various operating conditions.
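
    The measurement principle is Faraday induction across the moving conductive fluid. The sketch below estimates the raw induced EMF per unit flow rate from U = B * D * v under assumed values of field strength and tube diameter; it ignores the rotating-magnet geometry and all signal conditioning, so it does not reproduce the conditioned-system sensitivity quoted above.

      import math

      # Assumed values (not from the paper): peak field in the fluid and tube inner diameter.
      B_TESLA = 0.3
      D_METERS = 0.006

      def raw_emf_volts(flow_lpm: float) -> float:
          """Faraday-law estimate of the raw EMF across the fluid column, U = B * D * v."""
          q = flow_lpm / 1000.0 / 60.0                 # L/min -> m^3/s
          v = q / (math.pi * D_METERS ** 2 / 4.0)      # mean fluid velocity, m/s
          return B_TESLA * D_METERS * v

      # Raw (pre-amplification) sensitivity in mV per L/min under the assumed values.
      print(f"{raw_emf_volts(1.0) * 1e3:.2f} mV per L/min before amplification")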

  13. Sensitivity analysis of a greedy heuristic for knapsack problems

    NARCIS (Netherlands)

    Ghosh, D; Chakravarti, N; Sierksma, G

    2006-01-01

    In this paper, we carry out parametric analysis as well as a tolerance-limit-based sensitivity analysis of a greedy heuristic for two knapsack problems: the 0-1 knapsack problem and the subset sum problem. We carry out the parametric analysis based on all problem parameters. In the tolerance limit

  14. Visualization and Hierarchical Analysis of Flow in Discrete Fracture Network Models

    Science.gov (United States)

    Aldrich, G. A.; Gable, C. W.; Painter, S. L.; Makedonska, N.; Hamann, B.; Woodring, J.

    2013-12-01

    Flow and transport in low-permeability fractured rock occur primarily in interconnected fracture networks. Prediction and characterization of flow and transport in fractured rock have important implications for underground repositories for hazardous materials (e.g., nuclear and chemical waste), contaminant migration and remediation, groundwater resource management, and hydrocarbon extraction. We have developed methods to explicitly model flow in discrete fracture networks and track flow paths using passive particle tracking algorithms. Visualization and analysis of particle trajectories through the fracture network are important for understanding fracture connectivity, flow patterns, potential contaminant pathways and fast paths through the network. However, occlusion due to the large number of highly tessellated and intersecting fracture polygons precludes the effective use of traditional visualization methods. We also need quantitative analysis methods to characterize the trajectories of a large number of particle paths. We have solved these problems by defining a hierarchical flow network representing the topology of particle flow through the fracture network. This approach allows us to analyze the flow and the dynamics of the system as a whole. We are able to easily query the flow network, and use a paint-and-link style framework to filter the fracture geometry and particle traces based on the flow analytics. This allows us to greatly reduce occlusion while emphasizing salient features such as the principal transport pathways. Examples are shown that demonstrate the methodology and highlight how use of this new method allows quantitative analysis and characterization of flow and transport in a number of representative fracture networks.

  15. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    International Nuclear Information System (INIS)

    Brown, C.S.; Zhang, Hongbin

    2016-01-01

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform these analyses. A 2 × 2 fuel assembly model was developed and simulated by VERA-CS, and uncertainty quantification and sensitivity analysis were performed with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and the coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
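
    A minimal sketch of the correlation-based sensitivity measures mentioned above (Pearson, Spearman and partial correlation), applied to a synthetic input/output sample table rather than VERA-CS results; the toy response and sample size are assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      # Hypothetical sample table: 500 runs, 3 uncertain inputs and one figure of merit.
      n = 500
      x = rng.normal(size=(n, 3))
      y = 2.0 * x[:, 0] - 0.5 * x[:, 1] + 0.1 * rng.normal(size=n)   # toy response

      def partial_corr(x, y, j):
          """Correlation of input j with y after regressing both on the other inputs."""
          others = np.delete(x, j, axis=1)
          a = np.column_stack([others, np.ones(len(y))])
          res_x = x[:, j] - a @ np.linalg.lstsq(a, x[:, j], rcond=None)[0]
          res_y = y - a @ np.linalg.lstsq(a, y, rcond=None)[0]
          return stats.pearsonr(res_x, res_y)[0]

      for j in range(x.shape[1]):
          pearson = stats.pearsonr(x[:, j], y)[0]
          spearman = stats.spearmanr(x[:, j], y)[0]
          print(f"input {j}: Pearson={pearson:+.2f}  Spearman={spearman:+.2f}  "
                f"Partial={partial_corr(x, y, j):+.2f}")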

  16. Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    The code WIMS-AECL has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to the choice of a number of parameters, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied this sensitivity with respect to these parameters and recommend proper values, which are necessary for carrying out the lattice analysis of DUPIC fuel.

  17. Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    The code WIMS-AECL has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to the choice of a number of parameters, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied this sensitivity with respect to these parameters and recommend proper values, which are necessary for carrying out the lattice analysis of DUPIC fuel.

  18. Generalized tolerance sensitivity and DEA metric sensitivity

    OpenAIRE

    Neralić, Luka; E. Wendell, Richard

    2015-01-01

    This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  19. Generalized tolerance sensitivity and DEA metric sensitivity

    Directory of Open Access Journals (Sweden)

    Luka Neralić

    2015-03-01

    Full Text Available This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  20. Demonstration sensitivity analysis for RADTRAN III

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Reardon, P.C.

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose for combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability-consequence curves.

  1. Resource recovery from residual household waste: An application of exergy flow analysis and exergetic life cycle assessment.

    Science.gov (United States)

    Laner, David; Rechberger, Helmut; De Soete, Wouter; De Meester, Steven; Astrup, Thomas F

    2015-12-01

    Exergy is based on the Second Law of thermodynamics and can be used to express physical and chemical potential and provides a unified measure for resource accounting. In this study, exergy analysis was applied to four residual household waste management scenarios with focus on the achieved resource recovery efficiencies. The calculated exergy efficiencies were used to compare the scenarios and to evaluate the applicability of exergy-based measures for expressing resource quality and for optimizing resource recovery. Exergy efficiencies were determined based on two approaches: (i) exergy flow analysis of the waste treatment system under investigation and (ii) exergetic life cycle assessment (LCA) using the Cumulative Exergy Extraction from the Natural Environment (CEENE) as a method for resource accounting. Scenario efficiencies of around 17-27% were found based on the exergy flow analysis (higher efficiencies were associated with high levels of material recycling), while the scenario efficiencies based on the exergetic LCA lay in a narrow range around 14%. Metal recovery was beneficial in both types of analyses, but had more influence on the overall efficiency in the exergetic LCA approach, as avoided burdens associated with primary metal production were much more important than the exergy content of the recovered metals. On the other hand, plastic recovery was highly beneficial in the exergy flow analysis, but rather insignificant in exergetic LCA. The two approaches thereby offered different quantitative results as well as conclusions regarding material recovery. With respect to resource quality, the main challenge for the exergy flow analysis is the use of exergy content and exergy losses as a proxy for resource quality and resource losses, as exergy content is not per se correlated with the functionality of a material. In addition, the definition of appropriate waste system boundaries is critical for the exergy efficiencies derived from the flow analysis, as it

  2. Development of a detailed core flow analysis code for prismatic fuel reactors

    International Nuclear Information System (INIS)

    Bennett, R.G.

    1990-01-01

    The detailed analysis of the core flow distribution in prismatic fuel reactors is of interest for modular high-temperature gas-cooled reactor (MHTGR) design and safety analyses. Such analyses involve the steady-state flow of helium through highly cross-connected flow paths in and around the prismatic fuel elements. Several computer codes have been developed for this purpose. However, since they are proprietary codes, they are not generally available for independent MHTGR design confirmation. The previously developed codes do not consider the exchange or diversion of flow between individual bypass gaps in much detail. Such a capability could be important in the analysis of potential fuel block motion, such as occurred in the Fort St. Vrain reactor, or for the analysis of the conditions around a flow blockage or a misloaded fuel block. This work develops a computer code with fairly general-purpose capabilities for modeling the flow in regions of prismatic fuel cores. The code, called BYPASS, solves a finite-difference, control-volume formulation of the compressible, steady-state fluid flow in the highly cross-connected flow paths typical of the MHTGR.

  3. Sensitivity analysis of water consumption in an office building

    Science.gov (United States)

    Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan

    2018-02-01

    This article deals with a sensitivity analysis of real water consumption in an office building. During a long-term study under real conditions, a reduction of the pressure in the building's water connection was simulated. A sensitivity analysis of uneven water demand was conducted during working time at various provided pressures and at various time-step durations. Correlations between the maximal coefficients of water demand variation during working time and the provided pressure were suggested. The influence of the provided pressure in the water connection on the mean coefficients of water demand variation was pointed out, both for the working hours of all days together and separately for days with identical working hours.
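
    A minimal sketch of how a coefficient of water demand variation can be evaluated at several time-step durations; the one-second demand series is synthetic and the peak-to-mean definition of the coefficient is an assumption, not necessarily the authors' exact definition.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical 1-second demand record (litres per second) over an 8-hour working day.
      seconds = 8 * 3600
      demand = rng.gamma(shape=0.3, scale=0.05, size=seconds)

      def max_variation_coefficient(demand, step_s):
          """Peak-to-mean demand ratio when demand is aggregated over `step_s` seconds."""
          usable = len(demand) - len(demand) % step_s
          blocks = demand[:usable].reshape(-1, step_s).mean(axis=1)
          return blocks.max() / demand[:usable].mean()

      for step in (1, 10, 60, 300, 900):
          print(f"time step {step:4d} s: k_max = {max_variation_coefficient(demand, step):.2f}")

    As the aggregation interval grows, the peak-to-mean ratio falls, which is the kind of dependence on time-step duration examined in the record.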

  4. An Overview of the Design and Analysis of Simulation Experiments for Sensitivity Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2004-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys classic and modern designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs assume a

  5. Entropy feature extraction on flow pattern of gas/liquid two-phase flow based on cross-section measurement

    International Nuclear Information System (INIS)

    Han, J; Dong, F; Xu, Y Y

    2009-01-01

    This paper introduces the fundamentals of a cross-section measurement system based on Electrical Resistance Tomography (ERT). The measured data for four flow regimes of gas/liquid two-phase flow in a horizontal pipe are obtained by an ERT system. From the measured data, five entropies are extracted to analyze the experimental data according to the different flow regimes, and the analysis method is examined and compared from three different perspectives. The results indicate that the three entropy-based feature-extraction perspectives are sensitive to the flow pattern transition in gas/liquid two-phase flow. By analyzing the results of the three perspectives against changes in the gas/liquid two-phase flow parameters, the dynamic structure of the gas/liquid two-phase flow is obtained, and the perspectives also provide an efficient supplement for revealing the flow-pattern transition mechanism of gas/liquid two-phase flow. Comparison of the three feature-extraction methods shows that the appropriate entropy should be used for the identification and prediction of flow regimes.
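
    As one generic example of such an entropy feature (the record does not specify the five entropies in detail), the sketch below computes the Shannon entropy of the amplitude histogram of each measurement frame; the synthetic frame data and bin count are assumptions.

      import numpy as np

      rng = np.random.default_rng(4)

      def shannon_entropy(values, bins=32):
          """Shannon entropy (in bits) of the amplitude histogram of one measurement frame."""
          hist, _ = np.histogram(values, bins=bins)
          p = hist / hist.sum()
          p = p[p > 0]
          return float(-np.sum(p * np.log2(p)))

      # Hypothetical ERT cross-section frames: 500 frames of 104 boundary measurements each.
      frames = rng.normal(size=(500, 104))
      features = np.array([shannon_entropy(f) for f in frames])
      print(f"mean entropy feature over frames: {features.mean():.2f} bits")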

  6. Airfoil data sensitivity analysis for actuator disc simulations used in wind turbine applications

    International Nuclear Information System (INIS)

    Nilsson, Karl; Breton, Simon-Philippe; Ivanell, Stefan; Sørensen, Jens N

    2014-01-01

    To analyse the sensitivity of predicted wind farm performance characteristics to blade geometry and airfoil characteristics, large-eddy simulations using an actuator disc (ACD) method are performed for three different blade/airfoil configurations. The aim of the study is to determine how the mean characteristics of the wake flow, mean power production and thrust depend on the choice of airfoil data and blade geometry. In order to simulate realistic conditions, pre-generated turbulence and wind shear are imposed in the computational domain. Using three different turbulence intensities and varying the spacing between the turbines, the flow around 4-8 aligned turbines is simulated. The analysis is based on the normalized mean streamwise velocity, turbulence intensity, relative mean power production and thrust. From the computations it can be concluded that the actual airfoil characteristics and blade geometry are of importance only at very low inflow turbulence. At realistic turbulence conditions for an atmospheric boundary layer, the specific blade characteristics play a minor role in power performance and the resulting wake characteristics. The results therefore suggest that the choice of airfoil data in ACD simulations is not crucial if the intention of the simulations is to compute mean wake characteristics using a turbulent inflow.

  7. Sensitivity analysis in the WWTP modelling community – new opportunities and applications

    DEFF Research Database (Denmark)

    Sin, Gürkan; Ruano, M.V.; Neumann, Marc B.

    2010-01-01

    A mainstream viewpoint on sensitivity analysis in the wastewater modelling community is that it is a first-order differential analysis of outputs with respect to the parameters, typically obtained by perturbing one parameter at a time with a small factor. An alternative viewpoint on sensitivity...... design (BSM1 plant layout) using Standardized Regression Coefficients (SRC) and (ii) applying sensitivity analysis to help fine-tune a fuzzy controller for a BNPR plant using Morris Screening. The results obtained from each case study are then critically discussed in view of practical applications......

  8. Contribution to the sample mean plot for graphical and numerical sensitivity analysis

    International Nuclear Information System (INIS)

    Bolado-Lavin, R.; Castaings, W.; Tarantola, S.

    2009-01-01

    The contribution to the sample mean plot, originally proposed by Sinclair, is revived and further developed as a practical tool for global sensitivity analysis. The potential of this simple and versatile graphical tool is discussed. Beyond the qualitative assessment provided by this approach, a statistical test is proposed for sensitivity analysis. A case study that simulates the transport of radionuclides through the geosphere from an underground disposal vault containing nuclear waste is considered as a benchmark. The new approach is tested against a very efficient sensitivity analysis method based on state-dependent parameter meta-modelling.
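
    A minimal sketch of a contribution-to-the-sample-mean curve for a Monte Carlo sample: runs are sorted by one input and the cumulative share of the output sum is compared with the diagonal; the toy model and the scalar deviation summary are assumptions, not the statistical test proposed in the paper.

      import numpy as np

      rng = np.random.default_rng(5)

      def csm_curve(x_i, y):
          """Contribution-to-the-sample-mean curve for one input: sort the runs by the
          input, then return (fraction of runs, fraction of the total output sum)."""
          order = np.argsort(x_i)
          frac_runs = np.arange(1, len(y) + 1) / len(y)
          frac_sum = np.cumsum(y[order]) / y.sum()
          return frac_runs, frac_sum

      # Toy sample: y depends strongly on x0, weakly on x1 (all outputs positive).
      n = 2000
      x = rng.uniform(size=(n, 2))
      y = np.exp(3.0 * x[:, 0]) + 0.2 * x[:, 1]

      for j in range(2):
          f_runs, f_sum = csm_curve(x[:, j], y)
          # Maximum deviation from the diagonal as a crude scalar summary of influence.
          print(f"input {j}: max |CSM - diagonal| = {np.max(np.abs(f_sum - f_runs)):.3f}")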

  9. Analysis of the three dimensional flow in a turbine scroll

    Science.gov (United States)

    Hamed, A.; Baskharone, E.

    1979-01-01

    The present analysis describes the three-dimensional compressible inviscid flow in the scroll and the vaneless nozzle of a radial inflow turbine. The solution to this flow field, which is further complicated by the geometrical shape of the boundaries, is obtained using the finite element method. Symmetric and nonsymmetric scroll cross sectional geometries are investigated to determine their effect on the general flow field and on the exit flow conditions.

  10. An inter-laboratory comparison of PNH clone detection by high-sensitivity flow cytometry in a Russian cohort.

    Science.gov (United States)

    Sipol, Alexandra A; Babenko, Elena V; Borisov, Vyacheslav I; Naumova, Elena V; Boyakova, Elena V; Yakunin, Dimitry I; Glazanova, Tatyana V; Chubukina, Zhanna V; Pronkina, Natalya V; Popov, Alexander M; Saveliev, Leonid I; Lugovskaya, Svetlana A; Lisukov, Igor A; Kulagin, Alexander D; Illingworth, Andrea J

    2015-01-01

    Paroxysmal nocturnal hemoglobinuria (PNH) is an acquired clonal stem cell disorder characterized by partial or absolute deficiency of glycophosphatidyl-inositol (GPI) anchor-linked surface proteins on blood cells. A lack of precise diagnostic standards for flow cytometry has hampered useful comparisons of data between laboratories. We report data from the first study evaluating the reproducibility of high-sensitivity flow cytometry for PNH in Russia. PNH clone sizes were determined at diagnosis in PNH patients at a central laboratory and compared with follow-up measurements in six laboratories across the country. Analyses in each laboratory were performed according to recommendations from the International Clinical Cytometry Society (ICCS) and the more recent 'practical guidelines'. Follow-up measurements were compared with each other and with the values determined at diagnosis. PNH clone size measurements were determined in seven diagnosed PNH patients (five females, two males: mean age 37 years); five had a history of aplastic anemia and three (one with and two without aplastic anemia) had severe hemolytic PNH and elevated plasma lactate dehydrogenase. PNH clone sizes at diagnosis were low in patients with less severe clinical symptoms (0.41-9.7% of granulocytes) and high in patients with severe symptoms (58-99%). There were only minimal differences in the follow-up clone size measurement for each patient between the six laboratories, particularly in those with high values at diagnosis. The ICCS-recommended high-sensitivity flow cytometry protocol was effective for detecting major and minor PNH clones in Russian PNH patients, and showed high reproducibility between laboratories.

  11. Personalization of models with many model parameters : an efficient sensitivity analysis approach

    NARCIS (Netherlands)

    Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.

    2015-01-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of

  12. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial for understanding the behaviour of complex environmental models and the use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
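
    A minimal sketch of the local, variance-weighted first-order indices that a DELSA-style analysis evaluates at many points of the parameter space, using central finite differences; the toy model, the uniform priors and the step size are assumptions, not the hydrologic models of the study.

      import numpy as np

      rng = np.random.default_rng(6)

      def model(theta):
          """Toy model standing in for a hydrologic model (not from the paper)."""
          return theta[0] ** 2 + np.sin(3.0 * theta[1]) + 0.3 * theta[0] * theta[1]

      def local_first_order_indices(model, samples, prior_var, h=1e-4):
          """Local first-order indices at each sample point:
          S_Lj = (dy/dtheta_j)^2 * var_j / sum_k (dy/dtheta_k)^2 * var_k,
          with derivatives approximated by central finite differences."""
          out = np.empty_like(samples)
          for l, theta in enumerate(samples):
              grad = np.empty(len(theta))
              for j in range(len(theta)):
                  up, dn = theta.copy(), theta.copy()
                  up[j] += h
                  dn[j] -= h
                  grad[j] = (model(up) - model(dn)) / (2 * h)
              contrib = grad ** 2 * prior_var
              out[l] = contrib / contrib.sum()
          return out

      samples = rng.uniform(0.0, 1.0, size=(100, 2))       # assumed parameter space
      indices = local_first_order_indices(model, samples,
                                          prior_var=np.array([1.0 / 12, 1.0 / 12]))
      print("median local first-order index per parameter:", np.median(indices, axis=0))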

  13. Seismic analysis of steam generator and parameter sensitivity studies

    International Nuclear Information System (INIS)

    Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun

    2013-01-01

    Background: The steam generator (SG) serves as the primary means for removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG is created herein with the moisture separator assembly and the tube bundle assembly. The seismic analysis is performed together with the RCS piping and the reactor pressure vessel (RPV). Results: The seismic stress results for the SG are obtained. In addition, the sensitivity of the seismic analysis results to various parameters is studied, such as the effect of another SG, the supports, the anti-vibration bars (AVBs), and so on. Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, and these aspects should be a focus in the research and design of SGs for future new types of NPPs. (authors)

  14. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method (ESQM v5.2)

    Science.gov (United States)

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-12-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant stem density, height, and, to a lesser degree, diameter. Wave dissipation is mostly dependent on the variation in plant stem density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance to optimize efforts and reduce exploration of parameter space for future observational and modeling work.

  15. An Application of Monte-Carlo-Based Sensitivity Analysis on the Overlap in Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    S. Razmyan

    2012-01-01

    Full Text Available Discriminant analysis (DA) is used to estimate a discriminant function by minimizing group misclassifications in order to predict the group membership of newly sampled data. A major source of misclassification in DA is the overlapping of groups. The uncertainty in the input variables and model parameters needs to be properly characterized in decision making. This study combines DEA-DA with a sensitivity analysis approach to assess the influence of banks’ variables on the overall variance in overlap in a DA, in order to determine which variables are most significant. A Monte-Carlo-based sensitivity analysis is considered for computing the set of first-order sensitivity indices of the variables to estimate the contribution of each uncertain variable. The results show that the uncertainties in the loans granted and different deposit variables are more significant than uncertainties in other banks’ variables in decision making.

  16. Sensitive Diagnostics for Chemically Reacting Flows

    KAUST Repository

    Farooq, Aamir

    2015-01-01

    This talk will feature the latest diagnostic developments for sensitive detection of gas temperature and important combustion species. Advanced optical strategies, such as intrapulse chirping, wavelength modulation, and cavity ringdown, are employed.

  17. Sensitive Diagnostics for Chemically Reacting Flows

    KAUST Repository

    Farooq, Aamir

    2015-11-02

    This talk will feature the latest diagnostic developments for sensitive detection of gas temperature and important combustion species. Advanced optical strategies, such as intrapulse chirping, wavelength modulation, and cavity ringdown, are employed.

  18. Steady state likelihood ratio sensitivity analysis for stiff kinetic Monte Carlo simulations.

    Science.gov (United States)

    Núñez, M; Vlachos, D G

    2015-01-28

    Kinetic Monte Carlo simulation is an integral tool in the study of complex physical phenomena present in applications ranging from heterogeneous catalysis to biological systems to crystal growth and atmospheric sciences. Sensitivity analysis is useful for identifying important parameters and rate-determining steps, but the finite-difference application of sensitivity analysis is computationally demanding. Techniques based on the likelihood ratio method reduce the computational cost of sensitivity analysis by obtaining all gradient information in a single run. However, we show that disparity in the time scales of microscopic events, which is ubiquitous in real systems, introduces drastic statistical noise into derivative estimates for parameters affecting the fast events. In this work, the steady-state likelihood ratio sensitivity analysis is extended to singularly perturbed systems by invoking partial equilibration for fast reactions, that is, by working on the fast and slow manifolds of the chemistry. Derivatives on each time scale are computed independently and combined into the desired sensitivity coefficients to considerably reduce the noise in derivative estimates for stiff systems. The approach is demonstrated in an analytically solvable linear system.
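
    A minimal single-reaction sketch of the likelihood ratio (score function) idea: the derivative of an expected value with respect to a rate constant is estimated from the same sampled waiting times used to estimate the expectation itself. The exponential waiting-time model and the rate value are assumptions, and the stiff-system extension described above is not shown.

      import numpy as np

      rng = np.random.default_rng(7)

      k = 2.0                                   # assumed rate constant of one first-order event
      n = 200_000
      t = rng.exponential(1.0 / k, size=n)      # sampled waiting times

      # Score function of the exponential density f(t; k) = k * exp(-k * t):
      #   d log f / dk = 1/k - t
      score = 1.0 / k - t

      # Likelihood-ratio estimate of d E[T] / dk, from the same runs used to estimate E[T].
      lr_estimate = np.mean(t * score)
      print(f"LR estimate : {lr_estimate:+.4f}")
      print(f"exact value : {-1.0 / k**2:+.4f}   (d/dk of E[T] = 1/k)")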

  19. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.

  20. Inline chemical process analysis in micro-plants based on thermoelectric flow and impedimetric sensors

    International Nuclear Information System (INIS)

    Jacobs, T; Kutzner, C; Hauptmann, P; Kropp, M; Lang, W; Brokmann, G; Steinke, A; Kienle, A

    2010-01-01

    In micro-plants, as used in chemical micro-process engineering, integrated inline analytics is regarded as an important factor for the development and optimization of chemical processes. Up to now, there has been a lack of sensitive, robust and low-priced micro-sensors for monitoring mixing and chemical conversion in micro-fluidic channels. In this paper a novel sensor system combining an impedimetric sensor and a novel pressure-stable thermoelectric flow sensor for monitoring chemical reactions in micro-plants is presented. The CMOS-technology-based impedimetric sensor mainly consists of two capacitively coupled interdigital electrodes on a silicon chip. The thermoelectric flow sensor consists of a heater between two thermopiles on a perforated membrane. Both pulsed and constant current feeds of the heater were analyzed. Both sensors enable the analysis of chemical conversion by means of changes in the thermal and electrical properties of the liquid. The homogeneously catalyzed synthesis of n-butyl acetate was studied as a chemical model system. Experimental results revealed that, in an overpressure regime, relative changes of less than 1% in the thermal and electrical properties can be detected. Furthermore, the transition from one to two liquid phases, accompanied by the change in slug flow conditions, could be reproducibly detected.