WorldWideScience

Sample records for parametric sensitivity analysis

  1. Supercritical extraction of oleaginous: parametric sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Santos M.M.

    2000-01-01

    The economy has become global and competitive, so vegetable oil extraction industries must minimise production costs while generating products that meet ever more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as the solvent; however, hexane is toxic and highly flammable, so the search for substitutes in oleaginous extraction processes has intensified in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are needed to understand the phenomena taking place in such a process. In this work, a diffusive model of a semi-continuous (batch for the solids, continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a two-level factorial design. The model parameters were perturbed and their main effects analysed, so that strategies for high-performance operation can be proposed.
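
A two-level factorial screen of the kind described above can be sketched in a few lines. The surrogate yield function and the parameter names and levels below are invented for illustration; the paper's actual model is a diffusive extraction model that is not reproduced here.

```python
from itertools import product

# Hypothetical surrogate for extraction yield (NOT the paper's diffusive model):
# a linear response in diffusivity D, solubility sol and flow rate, plus one
# D*sol interaction so the factorial design has something non-additive to see.
def yield_model(D, sol, flow):
    return 10.0 * D + 5.0 * sol + 1.0 * flow + 2.0 * D * sol

# Low/high levels for each parameter (coded -1 / +1 around a nominal point).
levels = {"D": (0.8, 1.2), "sol": (0.5, 1.5), "flow": (0.9, 1.1)}
names = list(levels)

# Full 2^3 factorial design: evaluate the model at every corner of the cube.
runs = []
for signs in product((-1, +1), repeat=len(names)):
    vals = {n: levels[n][0] if s < 0 else levels[n][1] for n, s in zip(names, signs)}
    runs.append((signs, yield_model(**vals)))

# Main effect of a factor = mean(y at its high level) - mean(y at its low level).
effects = {}
for i, n in enumerate(names):
    hi = [y for s, y in runs if s[i] > 0]
    lo = [y for s, y in runs if s[i] < 0]
    effects[n] = sum(hi) / len(hi) - sum(lo) / len(lo)

print(effects)
```

Ranking the absolute main effects then identifies the parameters that dominate the response, which is the screening step the abstract describes.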

  2. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently, only the continuous kernel approach had been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables, even though discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of the sensitivity indices is also presented, together with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture show that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderately and most influential discrete parameters to the model output. - Highlights: • We study discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of the sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  3. Regional and parametric sensitivity analysis of Sobol' indices

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2015-01-01

    Nowadays, Monte Carlo estimators for variance-based sensitivity analysis have gained considerable popularity in many research fields. These estimators are usually based on n+2 sample matrices designed for computing both the main and total effect indices, where n is the input dimension. The aim of this paper is to use such n+2 sample matrices to investigate how the main and total effect indices change when the uncertainty of the model inputs is reduced. For this purpose, the regional main and total effect functions are defined for measuring the changes in the main and total effect indices when the distribution range of one input is reduced, and the parametric main and total effect functions are introduced to quantify the residual main and total effect indices due to the reduced variance of one input. Monte Carlo estimators are derived for all the developed sensitivity concepts based on the n+2 sample matrices originally used for computing the main and total effect indices, so no extra computational cost is introduced. The Ishigami function, a nonlinear model and a planar ten-bar structure are used to illustrate the developed sensitivity concepts and to demonstrate the efficiency and accuracy of the derived Monte Carlo estimators. - Highlights: • The regional main and total effect functions are developed. • The parametric main and total effect functions are introduced. • The proposed sensitivity functions are all generalizations of Sobol' indices. • Monte Carlo estimators are derived for the four sensitivity functions. • The computational cost of the estimators is the same as that of the Sobol' indices.
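
The n+2-matrix scheme the abstract refers to can be illustrated on the Ishigami function, which the paper also uses as a test case. The sketch below computes only the plain main and total effect indices with the standard Saltelli/Jansen estimators, not the paper's regional and parametric extensions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 20000  # input dimension and base sample size

def ishigami(X, a=7.0, b=0.1):
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

# The n + 2 sample matrices: A, B, and the n matrices AB_i (A with column i from B).
A = rng.uniform(-np.pi, np.pi, (N, n))
B = rng.uniform(-np.pi, np.pi, (N, n))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

S, ST = [], []
for i in range(n):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    fABi = ishigami(ABi)
    S.append(np.mean(fB * (fABi - fA)) / var)         # main effect (Saltelli 2010 form)
    ST.append(0.5 * np.mean((fA - fABi) ** 2) / var)  # total effect (Jansen form)

print(np.round(S, 3), np.round(ST, 3))
```

For the Ishigami function the analytic values are S ≈ (0.314, 0.442, 0) and ST ≈ (0.558, 0.442, 0.244), so the estimates can be checked directly.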

  4. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    NARCIS (Netherlands)

    R.A. Zuidwijk (Rob)

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an optimal solution are investigated, and the optimal solution is studied on a so-called critical range of the initial data, in which certain properties such as the optimal basis in linear programming are ...

  5. Analytic central path, sensitivity analysis and parametric linear programming

    NARCIS (Netherlands)

    A.G. Holder; J.F. Sturm; S. Zhang (Shuzhong)

    1998-01-01

    In this paper we consider properties of the central path and the analytic center of the optimal face in the context of parametric linear programming. We first show that if the right-hand side vector of a standard linear program is perturbed, then the analytic center of the optimal face ...

  6. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis, in order to evaluate the sensitivity of the WAVEWATCH III model to each parameter, determine how many of these parameters warrant further discussion, and rank the parameters by significance. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, by comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan; the goal being to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in the wave simulations did improve the accuracy of the WAVEWATCH III model relative to default runs, based on the field observations at the two buoys.
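
The calibration loop described above, searching parameter space to minimize model-versus-buoy error, can be sketched with a shrinking-radius random search. Everything here is invented for illustration: the toy "wave model", its parameter names, and the synthetic observations stand in for WAVEWATCH III and the buoy data, and the search is only a crude stand-in for the ARS method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a wave model: "wave height" as a function of wind speed u
# with two tunable source-term parameters (purely illustrative).
def wave_model(u, c_in, c_ds):
    return c_in * u ** 1.5 / (1.0 + c_ds * u)

u_obs = np.linspace(5, 25, 30)
h_obs = wave_model(u_obs, 0.09, 0.02) + rng.normal(0, 0.05, u_obs.size)  # synthetic "buoy" data

def rmse(params):
    return np.sqrt(np.mean((wave_model(u_obs, *params) - h_obs) ** 2))

best = np.array([0.05, 0.05])          # "default" parameter values
best_err = rmse(best)
radius = np.array([0.05, 0.05])
for it in range(2000):
    cand = best + rng.normal(0.0, radius)
    if (cand > 0).all():
        err = rmse(cand)
        if err < best_err:             # keep only improving candidates
            best, best_err = cand, err
    if it % 500 == 499:
        radius *= 0.5                  # tighten the search as it converges

print("calibrated:", best, "RMSE:", best_err)
```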

  7. Parametric Sensitivity Analysis of Oscillatory Delay Systems with an Application to Gene Regulation.

    Science.gov (United States)

    Ingalls, Brian; Mincheva, Maya; Roussel, Marc R

    2017-07-01

    A parametric sensitivity analysis for periodic solutions of delay-differential equations is developed. Because phase shifts cause the sensitivity coefficients of a periodic orbit to diverge, we focus on sensitivities of the extrema, from which amplitude sensitivities are computed, and of the period. Delay-differential equations are often used to model gene expression networks. In these models, the parametric sensitivities of a particular genotype define the local geometry of the evolutionary landscape. Thus, sensitivities can be used to investigate directions of gradual evolutionary change. An oscillatory protein synthesis model whose properties are modulated by RNA interference is used as an example. This model consists of a set of coupled delay-differential equations involving three delays. Sensitivity analyses are carried out at several operating points. Comments on the evolutionary implications of the results are offered.
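
The paper develops analytic sensitivities for periodic orbits of delay-differential equations; as a much cruder sketch of the same quantity of interest, one can estimate the sensitivity of the oscillation period to the delay by finite differences on a classic linear delay equation, x'(t) = -a x(t - tau). This toy model and the Euler-with-history integrator are assumptions of the sketch, not the authors' gene-expression model or method. At a*tau = pi/2 the equation oscillates with period 4*tau.

```python
import numpy as np

def simulate_period(a, tau, dt=0.001, t_end=100.0):
    """Euler integration of x'(t) = -a * x(t - tau) with constant history x = 1
    for t <= 0, followed by a period estimate from upward zero crossings."""
    d = int(round(tau / dt))
    n = int(t_end / dt)
    x = np.ones(n + d)
    for i in range(d, n + d - 1):
        x[i + 1] = x[i] + dt * (-a * x[i - d])
    t = (np.arange(n + d) - d) * dt
    keep = t > 20.0                    # discard the initial transient
    xs, ts = x[keep], t[keep]
    up = np.where((xs[:-1] < 0) & (xs[1:] >= 0))[0]
    return np.mean(np.diff(ts[up]))

a = np.pi / 2                          # a * tau = pi/2 at tau = 1: period 4
T0 = simulate_period(a, 1.0)
T1 = simulate_period(a, 1.1)
dT_dtau = (T1 - T0) / 0.1              # finite-difference period sensitivity
print(T0, dT_dtau)
```

The analytic sensitivities of the paper avoid both the finite-difference noise and the cost of repeated simulation that this sketch incurs.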

  8. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    OpenAIRE

    Zuidwijk, Rob

    2005-01-01

    textabstractSensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an optimal solution are investigated, and the optimal solution is studied on a so-called critical range of the initial data, in which certain properties such as the optimal basis in linear programming are ...

  9. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Science.gov (United States)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close-packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that the ranking of influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  10. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Directory of Open Access Journals (Sweden)

    Jinchao Feng

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close-packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that the ranking of influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  11. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    Science.gov (United States)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision-support tool for policy options concerning the optimal allocation of water resources, while allowing better reflection on the valuation of water by the agricultural sector in particular. To this end, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across the area under changing public policy and climatic conditions, taking into account the competition for collective resources. To identify the model input parameters that most influence the results, a parametric sensitivity analysis was performed with the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, of the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence they are: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters have sensitivity indices ranging between 0.22 and 1.28. These results indicate high uncertainty in these parameters, which can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
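
A One-Factor-At-A-Time screen of the kind used above is simple to sketch. The gross-margin function below is invented; its parameter names merely echo the abstract, and the relative sensitivity index (fractional output change per fractional input change) is one common, but not the only, choice.

```python
# Minimal One-Factor-At-A-Time (OAT) screen on a hypothetical gross-margin
# function. Coefficients and names are illustrative only.
def gross_margin(p):
    return (120.0 * p["yield_response"] + 40.0 * p["daily_gain"]
            + 15.0 * p["water_supply"] + 2.0 * p["precipitation"])

nominal = {"yield_response": 1.0, "daily_gain": 1.0,
           "water_supply": 1.0, "precipitation": 1.0}
y0 = gross_margin(nominal)

# Relative sensitivity index: (dY/Y) / (dp/p), here for a +10% perturbation,
# varying one factor at a time while all others stay at their nominal values.
indices = {}
for name in nominal:
    p = dict(nominal)
    p[name] *= 1.10
    indices[name] = ((gross_margin(p) - y0) / y0) / 0.10

ranking = sorted(indices, key=indices.get, reverse=True)
print(indices, ranking)
```

The ranking step mirrors the abstract's ordering of parameters by sensitivity index.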

  12. Performances of non-parametric statistics in sensitivity analysis and parameter ranking

    International Nuclear Information System (INIS)

    Saltelli, A.

    1987-01-01

    Twelve parametric and non-parametric sensitivity analysis techniques are compared in the case of non-linear model responses. The test models used are taken from the long-term risk analysis for the disposal of high-level radioactive waste in a geological formation. They describe the transport of radionuclides through a set of engineered and natural barriers from the repository to the biosphere and to man. The output data from these models are the dose rates affecting the maximum exposed individual of a critical group at a given point in time. All the techniques are applied to the output from the same Monte Carlo simulations, where a modified version of the Latin Hypercube method is used for the sample selection. Hypothesis testing is systematically applied to quantify the degree of confidence in the results given by the various sensitivity estimators. The estimators are ranked according to their robustness and stability on the basis of two test cases. The conclusions are that no estimator can be considered the best from all points of view, and that more than one estimator should be used in sensitivity analysis.
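
The parametric/non-parametric contrast the abstract draws can be shown with one estimator of each kind: the (parametric) Pearson correlation and the (non-parametric) Spearman rank correlation, computed from the same Monte Carlo sample. The synthetic model below, monotonic but strongly non-linear in its first input, is an invented stand-in for the repository transport models; it is chosen so the rank statistic detects what the linear statistic understates.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2000
x1 = rng.uniform(0, 1, N)
x2 = rng.uniform(0, 1, N)

# Monotonic but strongly non-linear in x1: rank correlation should be near 1
# for x1, while the plain linear correlation understates it.
y = np.exp(6.0 * x1) + 0.5 * x2

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def spearman(a, b):
    # Spearman = Pearson correlation of the ranks (no ties for continuous data).
    return pearson(np.argsort(np.argsort(a)), np.argsort(np.argsort(b)))

print("Pearson  x1:", pearson(x1, y))
print("Spearman x1:", spearman(x1, y))
```

This is exactly the kind of discrepancy on non-linear responses that motivates comparing many estimators, as the paper does with twelve of them.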

  13. Parametric sensitivity analysis for stochastic molecular systems using information theoretic metrics

    Energy Technology Data Exchange (ETDEWEB)

    Tsourtis, Anastasios, E-mail: tsourtis@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, Crete (Greece); Pantazis, Yannis, E-mail: pantazis@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States); Harmandaris, Vagelis, E-mail: harman@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, and Institute of Applied and Computational Mathematics (IACM), Foundation for Research and Technology Hellas (FORTH), GR-70013 Heraklion, Crete (Greece)

    2015-07-07

    In this paper, we present a parametric sensitivity analysis (SA) methodology for continuous time and continuous space Markov processes represented by stochastic differential equations. In particular, we focus on stochastic molecular dynamics as described by the Langevin equation. The utilized SA method is based on the computation of the information-theoretic (and thermodynamic) quantity of relative entropy rate (RER) and the associated Fisher information matrix (FIM) between path distributions, and it is an extension of the work proposed by Y. Pantazis and M. A. Katsoulakis [J. Chem. Phys. 138, 054115 (2013)]. A major advantage of the pathwise SA method is that both RER and pathwise FIM depend only on averages of the force field; therefore, they are tractable and computable as ergodic averages from a single run of the molecular dynamics simulation, both in equilibrium and in non-equilibrium steady state regimes. We validate the performance of the extended SA method on two different molecular stochastic systems, a standard Lennard-Jones fluid and an all-atom methane liquid, and compare the obtained parameter sensitivities with the sensitivities of three popular and well-studied observables, namely, the radial distribution function, the mean squared displacement, and the pressure. Results show that the RER-based sensitivities are highly correlated with the observable-based sensitivities.

  14. Parametric sensitivity analysis for biochemical reaction networks based on pathwise information theory.

    Science.gov (United States)

    Pantazis, Yannis; Katsoulakis, Markos A; Vlachos, Dionisios G

    2013-10-22

    Stochastic modeling and simulation provide powerful predictive methods for the intrinsic understanding of fundamental mechanisms in complex biochemical networks. Typically, such mathematical models involve networks of coupled jump stochastic processes with a large number of parameters that need to be suitably calibrated against experimental data. In this direction, the parameter sensitivity analysis of reaction networks is an essential mathematical and computational tool, yielding information regarding the robustness and the identifiability of model parameters. However, existing sensitivity analysis approaches such as variants of the finite difference method can have an overwhelming computational cost in models with a high-dimensional parameter space. We develop a sensitivity analysis methodology suitable for complex stochastic reaction networks with a large number of parameters. The proposed approach is based on Information Theory methods and relies on the quantification of information loss due to parameter perturbations between time-series distributions. For this reason, we need to work on path-space, i.e., the set consisting of all stochastic trajectories, hence the proposed approach is referred to as "pathwise". The pathwise sensitivity analysis method is realized by employing the rigorously derived Relative Entropy Rate, which is directly computable from the propensity functions. A key aspect of the method is that an associated pathwise Fisher Information Matrix (FIM) is defined, which in turn constitutes a gradient-free approach to quantifying parameter sensitivities. The structure of the FIM turns out to be block-diagonal, revealing hidden parameter dependencies and sensitivities in reaction networks. As a gradient-free method, the proposed sensitivity analysis provides a significant advantage when dealing with complex stochastic systems with a large number of parameters. In addition, knowledge of the structure of the FIM can allow one to efficiently address ...

  15. Parametric sensitivity analysis for the helium dimers on a model potential

    Directory of Open Access Journals (Sweden)

    Nelson Henrique Teixeira Lemes

    2012-01-01

    This work presents a sensitivity analysis of the potential parameters for the heteronuclear helium dimers HeNe, HeAr, HeKr and HeXe. The number of bound states these rare-gas dimers can support, for different angular momenta, is presented and discussed. The variable phase method, together with Levinson's theorem, is used to explore the quantum scattering process at very low collision energy using the Tang and Toennies potential. These diatomic dimers can support a bound state even for relative angular momentum equal to five, as in HeXe. Vibrationally excited states, with zero angular momentum, are also possible for HeKr and HeXe. Results from the sensitivity analysis give acceptable orders of magnitude for the potential parameters.
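
The variable phase method plus Levinson's theorem pipeline described above can be sketched for the s-wave case: integrate the phase equation at a very small wavenumber and read off the number of bound states from delta(infinity)/pi. To keep the sketch self-contained it uses reduced units (2m/hbar^2 = 1) and a model square-well potential, not the paper's Tang and Toennies potential; the well depth and radius are chosen so that exactly one bound state exists.

```python
import math

# Variable phase equation for s-wave scattering (reduced units):
#     delta'(r) = -(1/k) * U(r) * sin^2(k*r + delta(r)),  delta(0) = 0.
# As k -> 0, Levinson's theorem gives delta(infinity) = n_b * pi, where n_b is
# the number of s-wave bound states the potential supports.
U0, a_well = 4.0, 1.0   # sqrt(U0)*a in (pi/2, 3*pi/2) => exactly one bound state

def U(r):
    return -U0 if r < a_well else 0.0   # attractive square well

def phase_shift(k, r_max=5.0, h=1e-4):
    f = lambda r, d: -(1.0 / k) * U(r) * math.sin(k * r + d) ** 2
    delta, r = 0.0, 0.0
    while r < r_max:                    # classic RK4 integration
        k1 = f(r, delta)
        k2 = f(r + h / 2, delta + h * k1 / 2)
        k3 = f(r + h / 2, delta + h * k2 / 2)
        k4 = f(r + h, delta + h * k3)
        delta += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        r += h
    return delta

delta0 = phase_shift(k=0.01)            # near-zero collision energy
n_bound = round(delta0 / math.pi)
print(delta0, n_bound)
```

Repeating this count while perturbing the potential parameters is the essence of the sensitivity analysis the abstract describes.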

  16. Parametric sensitivity analysis for techno-economic parameters in Indian power sector

    International Nuclear Information System (INIS)

    Mallah, Subhash; Bansal, N.K.

    2011-01-01

    Sensitivity analysis is a technique that evaluates the model response to changes in input assumptions. Owing to uncertain prices of primary fuels in the world market, government regulations for sustainability and various other technical parameters, there is a need to analyze the techno-economic parameters that play an important role in policy formulation. This paper examines the variations in the technical and economic parameters that most affect the energy policy of India. The MARKAL energy simulation model has been used to analyze the uncertainty in all techno-economic parameters, with the input ranges adopted from previous studies. The results show that at a lower discount rate coal is the least preferred technology, with a corresponding reduction in carbon emissions. With increased gas and nuclear fuel prices, these technologies disappear from the energy-mix allocations.

  17. Parametric sensitivity analysis of a SOLRGT system with the indirect upgrading of low/mid-temperature solar heat

    International Nuclear Information System (INIS)

    Li, Yuan Yuan; Zhang, Na; Cai, Rui Xian

    2012-01-01

    Highlights: ► A solar-assisted methane chemically recuperated gas turbine cycle has been proposed. ► A parametric sensitivity analysis of the SOLRGT system has been carried out. ► The concept of indirect upgrading of solar heat proves to be feasible. -- Abstract: The development of novel solar–fossil fuel hybrid systems is important for the efficient utilization of low-temperature solar heat. A solar-assisted methane chemically recuperated gas turbine (SOLRGT) system was proposed by Zhang and co-workers, which integrates solar heat into a high-efficiency power system. The low-temperature solar heat is first converted into vapor latent heat provided to a reformer, and then indirectly upgraded to high-grade chemical energy of the generated syngas by the reforming reaction. In this paper, based on the above-mentioned cycle, a parametric analysis is performed using the ASPEN PLUS code to further evaluate the effect of key thermodynamic parameters on the SOLRGT performance. The results show that the solar collector temperature, steam/air mass ratio, turbine inlet pressure, and turbine inlet temperature have significant effects on the system efficiency, solar-to-electricity efficiency, fossil fuel saving ratio, specific CO2 emission and so on. The solar collector temperature is varied between 140 and 240 °C, and the maximum net solar-to-electricity efficiency and system efficiency for a given turbine inlet condition (turbine inlet temperature of 1308 °C and pressure ratio of 15) are 30.2% and 52.9%, respectively. The fossil fuel saving ratio can reach up to 21.8%, and the reduction of specific CO2 emission is also 21.8% compared to the reference system. The system performance is promising for an optimum pressure ratio at a given turbine inlet temperature.

  18. Efficient thin-film stack characterization using parametric sensitivity analysis for spectroscopic ellipsometry in semiconductor device fabrication

    Energy Technology Data Exchange (ETDEWEB)

    Likhachev, D.V., E-mail: dmitriy.likhachev@globalfoundries.com

    2015-08-31

    During semiconductor device fabrication, control of the layer thicknesses is an important task for in-line metrology since the correct thickness values are essential for proper device performance. At the present time, ellipsometry is widely used for routine process monitoring and process improvement as well as characterization of various materials in the modern nanoelectronic manufacturing. The wide recognition of this technique is based on its non-invasive, non-intrusive and non-destructive nature, high measurement precision, accuracy and speed, and versatility to characterize practically all types of materials used in modern semiconductor industry (dielectrics, semiconductors, metals, polymers, etc.). However, it requires the use of one of the multi-parameter non-linear optimization methods due to its indirect nature. This fact creates a big challenge for analysis of multilayered structures since the number of simultaneously determined model parameters, for instance, thin film thicknesses and model variables related to film optical properties, should be restricted due to parameter cross-correlations. In this paper, we use parametric sensitivity analysis to evaluate the importance of various model parameters and to suggest their optimal search ranges. In this work, the method is applied practically for analysis of a few structures with up to five-layered film stack. It demonstrates an evidence-based improvement in accuracy of multilayered thin-film thickness measurements which suggests that the proposed approach can be useful for industrial applications. - Highlights: • An improved method for multilayered thin-film stack characterization is proposed. • The screening-type technique based on so-called “elementary effects” was employed. • The model parameters were ranked according to relative importance for model output. • The method is tested using two examples of complex thin-film stack characterization. • The approach can be useful in many practical
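
The screening technique the highlights name, "elementary effects" (the Morris method), can be sketched generically: sample random base points, take one-at-a-time steps of size delta in each parameter, and summarize each parameter by the mean absolute elementary effect (mu*) and its spread (sigma). The four-parameter merit function below is invented for illustration and is not an ellipsometric film-stack model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 4-parameter merit function (invented): parameter 0 dominates,
# parameter 3 is nearly inert, parameters 1 and 2 are non-linear.
def model(x):
    return 5.0 * x[0] + 2.0 * x[1] ** 2 + np.sin(np.pi * x[2]) + 0.05 * x[3]

dim, n_base, delta = 4, 50, 0.2
EE = [[] for _ in range(dim)]
for _ in range(n_base):
    x = rng.uniform(0, 1 - delta, dim)      # random base point in the unit box
    for i in range(dim):                    # one-at-a-time step in each input
        x_step = x.copy()
        x_step[i] += delta
        EE[i].append((model(x_step) - model(x)) / delta)

mu_star = [np.mean(np.abs(e)) for e in EE]  # overall importance
sigma = [np.std(e) for e in EE]             # non-linearity / interaction signal
print(np.round(mu_star, 2), np.round(sigma, 2))
```

Parameters with small mu* are candidates for fixing, which shrinks the search ranges of the non-linear fit, in the spirit of the abstract.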

  19. Efficient thin-film stack characterization using parametric sensitivity analysis for spectroscopic ellipsometry in semiconductor device fabrication

    International Nuclear Information System (INIS)

    Likhachev, D.V.

    2015-01-01

    During semiconductor device fabrication, control of the layer thicknesses is an important task for in-line metrology since the correct thickness values are essential for proper device performance. At the present time, ellipsometry is widely used for routine process monitoring and process improvement as well as characterization of various materials in the modern nanoelectronic manufacturing. The wide recognition of this technique is based on its non-invasive, non-intrusive and non-destructive nature, high measurement precision, accuracy and speed, and versatility to characterize practically all types of materials used in modern semiconductor industry (dielectrics, semiconductors, metals, polymers, etc.). However, it requires the use of one of the multi-parameter non-linear optimization methods due to its indirect nature. This fact creates a big challenge for analysis of multilayered structures since the number of simultaneously determined model parameters, for instance, thin film thicknesses and model variables related to film optical properties, should be restricted due to parameter cross-correlations. In this paper, we use parametric sensitivity analysis to evaluate the importance of various model parameters and to suggest their optimal search ranges. In this work, the method is applied practically for analysis of a few structures with up to five-layered film stack. It demonstrates an evidence-based improvement in accuracy of multilayered thin-film thickness measurements which suggests that the proposed approach can be useful for industrial applications. - Highlights: • An improved method for multilayered thin-film stack characterization is proposed. • The screening-type technique based on so-called “elementary effects” was employed. • The model parameters were ranked according to relative importance for model output. • The method is tested using two examples of complex thin-film stack characterization. • The approach can be useful in many practical

  20. Planar Parametrization in Isogeometric Analysis

    DEFF Research Database (Denmark)

    Gravesen, Jens; Evgrafov, Anton; Nguyen, Dang-Manh

    2012-01-01

    Before isogeometric analysis can be applied to solving a partial differential equation posed over some physical domain, one needs to construct a valid parametrization of the geometry. The accuracy of the analysis is affected by the quality of the parametrization. The challenge of computing and maintaining a valid geometry parametrization is particularly relevant in applications of isogeometric analysis to shape optimization, where the geometry varies from one optimization iteration to another. We propose a general framework for handling the geometry parametrization in isogeometric analysis and shape optimization … are suitable for our framework. The non-linear methods we consider are based on solving a constrained optimization problem numerically, and are divided into two classes, geometry-oriented methods and analysis-oriented methods. Their performance is illustrated through a few numerical examples.
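
The notion of a "valid" parametrization used above is usually checked via the sign of the Jacobian determinant: the map must be orientation-preserving (det J > 0) everywhere on the patch. The sketch below applies this check to the simplest case, a bilinear (degree-1) patch sampled on a grid; isogeometric applications use higher-degree splines, but the criterion is the same.

```python
import numpy as np

# Bilinear parametrization of a quad from its four corner control points:
#   F(u, v) = (1-u)(1-v) P00 + u(1-v) P10 + (1-u)v P01 + uv P11
# Validity check: det(Jacobian) > 0 on a sample grid over the unit square.
def jacobian_dets(P00, P10, P01, P11, m=20):
    u, v = np.meshgrid(np.linspace(0, 1, m), np.linspace(0, 1, m))
    P00, P10, P01, P11 = map(np.asarray, (P00, P10, P01, P11))
    Fu = (-(1 - v))[..., None] * P00 + (1 - v)[..., None] * P10 \
         + (-v)[..., None] * P01 + v[..., None] * P11          # dF/du
    Fv = (-(1 - u))[..., None] * P00 + (-u)[..., None] * P10 \
         + (1 - u)[..., None] * P01 + u[..., None] * P11       # dF/dv
    return Fu[..., 0] * Fv[..., 1] - Fu[..., 1] * Fv[..., 0]

convex = jacobian_dets((0, 0), (1, 0), (0, 1), (1, 1))   # unit square: valid
folded = jacobian_dets((0, 0), (1, 0), (1, 1), (0, 1))   # "bowtie" corner order
print((convex > 0).all(), (folded > 0).all())
```

Sampling can miss sign changes between grid points; the methods discussed in the paper enforce validity more robustly, e.g. as constraints in an optimization problem.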

  1. Sensitivity of Technical Efficiency Estimates to Estimation Methods: An Empirical Comparison of Parametric and Non-Parametric Approaches

    OpenAIRE

    de-Graft Acquah, Henry

    2014-01-01

    This paper highlights the sensitivity of technical efficiency estimates to the estimation approach, using empirical data. Firm-specific technical efficiency and mean technical efficiency are estimated using the non-parametric Data Envelopment Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey's test suggest ...

  2. Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.

    Science.gov (United States)

    Gul, R; Bernhard, S

    2015-11-01

    In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods, Sobol, FAST and a sparse-grid stochastic collocation technique based on the Smolyak algorithm, were applied to a lumped-parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters, as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location and temporal dependent sensitivities were also discussed, to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit little sensitivity on pressure and flow at all locations of the carotid bifurcation. Results on network-location and temporal variability revealed that most of the sensitivity falls in common time regions, i.e., early systole, peak systole and end systole. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. STATCAT, Statistical Analysis of Parametric and Non-Parametric Data

    International Nuclear Information System (INIS)

    David, Hugh

    1990-01-01

    1 - Description of program or function: A suite of 26 programs designed to facilitate the appropriate statistical analysis and data handling of parametric and non-parametric data, using classical and modern univariate and multivariate methods. 2 - Method of solution: Data is read entry by entry, using a choice of input formats, and the resultant data bank is checked for out-of-range, rare, extreme or missing data. The completed STATCAT data bank can be treated by a variety of descriptive and inferential statistical methods, and modified, using other standard programs as required.
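
The data-bank screening step described above, checking each entry for out-of-range or missing values before any statistics are run, can be sketched as follows. STATCAT itself is a suite of legacy programs; this Python fragment is only an illustration of the idea, with invented field names.

```python
# Minimal data-screening pass in the spirit of STATCAT's input checks:
# flag out-of-range and missing entries before any statistical analysis.
def screen(records, limits):
    """records: list of dicts; limits: {field: (low, high)}.
    Returns {field: [indices of offending records]}."""
    problems = {field: [] for field in limits}
    for idx, rec in enumerate(records):
        for field, (lo, hi) in limits.items():
            val = rec.get(field)
            if val is None or not (lo <= val <= hi):
                problems[field].append(idx)
    return problems

data = [{"age": 34, "score": 0.71},
        {"age": -2, "score": 0.55},   # age out of range
        {"age": 51}]                  # score missing
flags = screen(data, {"age": (0, 120), "score": (0.0, 1.0)})
print(flags)
```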

  4. Digital spectral analysis parametric, non-parametric and advanced methods

    CERN Document Server

    Castanié, Francis

    2013-01-01

    Digital Spectral Analysis provides a single source that offers complete coverage of the spectral analysis domain. This self-contained work includes details on advanced topics that are usually presented in scattered sources throughout the literature. The theoretical principles necessary for the understanding of spectral analysis are discussed in the first four chapters: fundamentals, digital signal processing, estimation in spectral analysis, and time-series models. An entire chapter is devoted to the non-parametric methods most widely used in industry. High-resolution methods a...
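
The parametric versus non-parametric distinction that organizes the book can be shown in a few lines: estimate the spectrum of the same signal once with the periodogram (non-parametric) and once with an AR(2) model fitted by the Yule-Walker equations (parametric). The simulated process and its parameters are chosen for illustration, with a spectral peak designed to land near f = 0.125 cycles/sample.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an AR(2) process whose poles put a spectral peak near f = 0.125.
r, theta = 0.95, np.pi / 4
a1, a2 = 2 * r * np.cos(theta), -r ** 2
N = 4096
x = np.zeros(N)
e = rng.normal(0, 1, N)
for t in range(2, N):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]

# Non-parametric estimate: the periodogram.
f = np.fft.rfftfreq(N)
pgram = np.abs(np.fft.rfft(x)) ** 2 / N

# Parametric estimate: AR(2) coefficients from the Yule-Walker equations.
c = np.array([x[:N - k] @ x[k:] / N for k in range(3)])  # sample autocovariances
A = np.array([[c[0], c[1]], [c[1], c[0]]])
a_hat = np.linalg.solve(A, c[1:])
w = 2 * np.pi * f
ar_spec = 1.0 / np.abs(1 - a_hat[0] * np.exp(-1j * w) - a_hat[1] * np.exp(-2j * w)) ** 2

print("periodogram peak:", f[np.argmax(pgram)], " AR peak:", f[np.argmax(ar_spec)])
```

The AR spectrum is smooth by construction, while the periodogram fluctuates around the true density; this contrast is the starting point for the book's high-resolution methods.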

  5. Parametric Methods for Order Tracking Analysis

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm

    2017-01-01

    Order tracking analysis is often used to find the critical speeds at which structural resonances are excited by a rotating machine. Typically, order tracking analysis is performed via non-parametric methods. In this report, however, we demonstrate some of the advantages of using a parametric method...

  6. Phase Sensitive Amplification using Parametric Processes in Optical Fibers

    DEFF Research Database (Denmark)

    Kang, Ning

    Phase sensitive amplification using the parametric processes in fiber has the potential of delivering high gain and broadband operation with ultralow noise. It is able to regenerate both amplitude and phase modulated signals, simultaneously, with the appropriate design. This thesis concerns... types. The regeneration capability of PSAs on phase encoded signals in an optical link has been optimized. A flat-top phase sensitive profile has been synthesized. It is able to provide simultaneous amplitude and phase noise squeezing, with enhanced phase noise margin compared to conventional designs... Further, phase sensitive parametric processes in a nano-engineered silicon waveguide have been measured experimentally for the first time. Numerical optimizations show that with reduced waveguide propagation loss and reduced carrier lifetime, a larger signal phase sensitive extinction ratio is achievable...

  7. Parametric systems analysis for ICF hybrid reactors

    International Nuclear Information System (INIS)

    Berwald, D.H.; Maniscalco, J.A.; Chapin, D.L.

    1981-01-01

    Parametric design and systems analysis for inertial confinement fusion-fission hybrids are presented. These results were generated as part of the Electric Power Research Institute (EPRI) sponsored Feasibility Assessment of Fusion-Fission Hybrids, using an Inertial Confinement Fusion (ICF) hybrid power plant design code developed in conjunction with the feasibility assessment. The SYMECON systems analysis code, developed by Westinghouse, was used to generate economic results for symbiotic electricity generation systems consisting of the hybrid and its client Light Water Reactors (LWRs). These results explore the entire fusion parameter space for uranium fast fission blanket hybrids, thorium fast fission blanket hybrids, and thorium suppressed fission blanket hybrids; the blanket types are discussed, and system sensitivities to design uncertainties are explored

  8. Sensitivity analysis

    Science.gov (United States)

    Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ... (Source: medlineplus.gov/ency/article/003741.htm)

  9. Parametric Sensitivity Tests- European PEM Fuel Cell Stack Test Procedures

    DEFF Research Database (Denmark)

    Araya, Samuel Simon; Andreasen, Søren Juhl; Kær, Søren Knudsen

    2014-01-01

    As fuel cells are increasingly commercialized for various applications, harmonized and industry-relevant test procedures are necessary to benchmark tests and to ensure comparability of stack performance results from different parties. This paper reports the results of parametric sensitivity tests performed based on test procedures proposed by a European project, Stack-Test. The sensitivity of a Nafion-based low temperature PEMFC stack’s performance to parametric changes was the main objective of the tests. Four crucial parameters for fuel cell operation were chosen: relative humidity, temperature, pressure, and stoichiometry at varying current density. Furthermore, procedures for polarization curve recording were also tested, both in ascending and descending current directions.

  10. Parametric analysis of ATM solar array.

    Science.gov (United States)

    Singh, B. K.; Adkisson, W. B.

    1973-01-01

    The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
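
The fourth program's role, evaluating polynomial fits of solar cell characteristics versus temperature, can be sketched in a few lines. The temperature/current numbers below are made up for illustration and are not SKYLAB test data:

```python
import numpy as np

# Hypothetical solar-cell characteristic vs. temperature (illustrative numbers only)
temps_c = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # cell temperature, deg C
current = np.array([2.95, 3.00, 3.06, 3.11, 3.17])   # e.g. short-circuit current, A

coeffs = np.polyfit(temps_c, current, deg=2)  # quadratic fit, highest power first
model = np.poly1d(coeffs)                     # callable polynomial for parametric curves
```

Generating a parametric curve is then just evaluating `model` over the temperature range of interest.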

  11. Parametric statistical change point analysis

    CERN Document Server

    Chen, Jie

    2000-01-01

    This work is an in-depth study of the change point problem from a general point of view and a further examination of change point analysis of the most commonly used statistical models. Change point problems are encountered in such disciplines as economics, finance, medicine, psychology, signal processing, and geology, to mention only several. The exposition is clear and systematic, with a great deal of introductory material included. Different models are presented in each chapter, including gamma and exponential models, rarely examined thus far in the literature. Other models covered in detail are the multivariate normal, univariate normal, regression, and discrete models. Extensive examples throughout the text emphasize key concepts, and different methodologies are used, namely the likelihood ratio criterion, and the Bayesian and information criterion approaches. A comprehensive bibliography and two indices complete the study
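
As a concrete instance of the likelihood ratio criterion mentioned above, a single mean-shift change point in normal data with a common known variance can be located by maximizing the between-segment contrast over all split points. This is a generic textbook sketch, not code from the book:

```python
import numpy as np

def mean_shift_changepoint(x):
    """Scan all split points k and return the one maximizing a statistic
    proportional to the log likelihood ratio for a shift in mean
    (normal observations, known common variance)."""
    n = len(x)
    best_k, best_stat = None, -np.inf
    for k in range(1, n):
        m1, m2 = x[:k].mean(), x[k:].mean()
        stat = k * (n - k) / n * (m1 - m2) ** 2   # scaled squared mean contrast
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat
```

In practice the maximized statistic is compared against a threshold (from its asymptotic or permutation distribution) before declaring a change point.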

  12. Parametric systems analysis for tandem mirror hybrids

    International Nuclear Information System (INIS)

    Lee, J.D.; Chapin, D.L.; Chi, J.W.H.

    1980-09-01

    Fusion-fission systems, consisting of fissile-producing fusion hybrids combining a tandem mirror fusion driver with various blanket types and net fissile-consuming LWRs, have been modeled and analyzed parametrically. Analysis to date indicates that hybrids can be competitive with mined uranium when the U3O8 cost is about $100/lb, adding less than 25% to the present-day cost of power from LWRs. Of the three blanket types considered, uranium fast fission (UFF), thorium fast fission (ThFF), and thorium fission suppressed (ThFS), the ThFS blanket has a modest economic advantage under most conditions and has higher support ratios and potential safety advantages under all conditions

  13. Parametric study and global sensitivity analysis for co-pyrolysis of rape straw and waste tire via variance-based decomposition.

    Science.gov (United States)

    Xu, Li; Jiang, Yong; Qiu, Rong

    2018-01-01

    In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R2-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at a 95% confidence interval; the F-test, lack-of-fit test and normal probability plots of the residuals implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. SPM analysis of parametric (R)-[11C]PK11195 binding images: plasma input versus reference tissue parametric methods.

    Science.gov (United States)

    Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald

    2007-05-01

    (R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).

  15. Sensitivity and parametric evaluations of significant aspects of burnup credit for PWR spent fuel packages

    International Nuclear Information System (INIS)

    DeHart, M.D.

    1996-05-01

    Spent fuel transportation and storage cask designs based on a burnup credit approach must consider issues that are not relevant in casks designed under a fresh-fuel loading assumption. For example, the spent fuel composition must be adequately characterized and the criticality analysis model can be complicated by the need to consider axial burnup variations. Parametric analyses are needed to characterize the importance of fuel assembly and fuel cycle parameters on spent fuel composition and reactivity. Numerical models must be evaluated to determine the sensitivity of criticality safety calculations to modeling assumptions. The purpose of this report is to describe analyses and evaluations performed in order to demonstrate the effect physical parameters and modeling assumptions have on the criticality analysis of spent fuel. The analyses in this report include determination and ranking of the most important actinides and fission products; study of the effect of various depletion scenarios on subsequent criticality calculations; establishment of trends in neutron multiplication as a function of fuel enrichment, burnup, and cooling time; and a parametric and modeling evaluation of three-dimensional effects (e.g., axially varying burnup and temperature/density effects) in a conceptual cask design. The sensitivity and parametric evaluations were performed with the consideration of two different burnup credit approaches: (1) only actinides in the fuel are considered in the criticality analysis, and (2) both actinides and fission products are considered. Calculations described in this report were performed using the criticality and depletion sequences available in the SCALE code system and the SCALE 27-group burnup library. Although the results described herein do not constitute a validation of SCALE for use in spent fuel analysis, independent validation efforts have been completed and are described in other reports.

  17. Parametric Analysis of Flexible Logic Control Model

    Directory of Open Access Journals (Sweden)

    Lihua Fu

    2013-01-01

    Full Text Available Based on a deep analysis of the essential relation between the two input variables of a normal two-dimensional fuzzy controller, we used the universal combinatorial operation model to describe the logic relationship and gave a flexible logic control method to realize effective control of complex systems. In practical control applications, how to determine the general correlation coefficient of the flexible logic control model is a problem for further study. First, the conventional universal combinatorial operation model has been limited to the interval [0,1]. Consequently, this paper studies a kind of universal combinatorial operation model based on the interval [a,b], and some important theorems are given and proved, which provide a foundation for the flexible logic control method. To deal reasonably with the complex relations of every factor in a complex system, a kind of universal combinatorial operation model with unequal weights is put forward. Then, this paper carries out the parametric analysis of the flexible logic control model. Some research results are given, which provide important guidance for determining the values of the general correlation coefficients in practical control applications.

  18. Stability analysis of fuzzy parametric uncertain systems.

    Science.gov (United States)

    Bhiwani, R J; Patre, B M

    2011-10-01

    In this paper, the determination of the stability margin, gain margin and phase margin of fuzzy parametric uncertain systems (FPUS) is dealt with. The stability analysis of uncertain linear systems with coefficients described by fuzzy functions is studied. A complexity-reduced technique for determining the stability margin for FPUS is proposed. The suggested method depends on the order of the characteristic polynomial. In order to find the stability margin of interval polynomials of order less than five, it is not always necessary to determine and check all four Kharitonov polynomials. It has been shown that, for determining the stability margin of FPUS of order five, four, and three, we require only 3, 2, and 1 Kharitonov polynomials, respectively. Only for sixth- and higher-order polynomials is a complete set of Kharitonov polynomials needed to determine the stability margin. Thus, for lower-order systems, the calculations are reduced to a large extent. This idea has been extended to determine the stability margin of fuzzy interval polynomials. It is also shown that the gain and phase margin of FPUS can be determined analytically without using graphical techniques. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
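
The Kharitonov test underlying these results can be sketched as follows: build the four vertex polynomials of an interval polynomial and check each for Hurwitz stability. This generic sketch always checks all four polynomials, rather than the reduced sets derived in the paper:

```python
import numpy as np

def kharitonov_polys(lo, hi):
    """Build the four Kharitonov polynomials of an interval polynomial.
    lo, hi: coefficient bounds in ascending order (c0 .. cn)."""
    patterns = [            # lower (0) / upper (1) bound, period-4 patterns
        (0, 0, 1, 1),
        (1, 1, 0, 0),
        (0, 1, 1, 0),
        (1, 0, 0, 1),
    ]
    return [np.array([hi[i] if p[i % 4] else lo[i] for i in range(len(lo))])
            for p in patterns]

def interval_poly_stable(lo, hi):
    """Kharitonov's theorem: the interval polynomial is Hurwitz-stable
    iff all four Kharitonov polynomials are Hurwitz-stable."""
    for c in kharitonov_polys(lo, hi):
        roots = np.roots(c[::-1])   # np.roots expects highest power first
        if np.max(roots.real) >= 0:
            return False
    return True
```

The interval bounds and nominal polynomial in the test below are illustrative, not taken from the paper.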

  19. Superconducting microwave cavity parametric converter transducer sensitive to 10⁻¹⁹ m harmonic motion

    International Nuclear Information System (INIS)

    Reece, C.E.

    1984-01-01

    Toward the development of a transducer suitable for the detection of high frequency gravitational effects, a superconducting microwave coupled-cavity parametric converter transducer has been analyzed, developed and tested. An analysis is presented of the intermodal parametric conversion which is produced by harmonic perturbation of the length of a 10 GHz TE011 mode cylindrical resonant cavity. The converter is examined as a transducer of displacement with harmonic frequency near the intermodal difference frequency. Transducer sensitivity dependence upon cavity tunings, couplings, and Q-factors is analyzed and experimentally tested with excellent agreement. The transducer consists of two identical coupled TE011 niobium cavities with one endwall driven into mechanical oscillation by an externally mounted piezoelectric ceramic. A displacement with effective amplitude (3.7 ± 1.3) × 10⁻¹⁹ m and frequency 1.13 MHz has been observed by detecting a 10 GHz conversion power of 10⁻²¹ W. This measurement was obtained with 0.12 mJ stored in a cavity resonance with an unloaded Q-factor of 6.7 × 10⁸ at 1.55 K. The applications of this device in the detection of high frequency gravitational effects are also discussed. Finally, the prospects for improvement of transducer sensitivity and the ultimate limitations are presented

  20. Semi-parametrical NAA method for paper analysis

    International Nuclear Information System (INIS)

    Medeiros, Ilca M.M.A.; Zamboni, Cibele B.; Cruz, Manuel T.F. da; Morel, Jose C.O.; Park, Song W.

    2007-01-01

    The semi-parametric Neutron Activation Analysis technique, using Au as flux monitor, was applied to determine element concentrations in commonly commercialized white paper, aiming to check the quality control of its production in the industrial process. (author)

  1. Sensitivity analysis of a greedy heuristic for knapsack problems

    NARCIS (Netherlands)

    Ghosh, D; Chakravarti, N; Sierksma, G

    2006-01-01

    In this paper, we carry out parametric analysis as well as a tolerance-limit based sensitivity analysis of a greedy heuristic for two knapsack problems: the 0-1 knapsack problem and the subset sum problem. We carry out the parametric analysis based on all problem parameters. In the tolerance limit ...

  2. Using non-parametric methods in econometric production analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    2012-01-01

    Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify a functional form of the production function of which the Cobb... parameter estimates, but also in biased measures which are derived from the parameters, such as elasticities. Therefore, we propose to use non-parametric econometric methods. First, these can be applied to verify the functional form used in parametric production analysis. Second, they can be directly used... by investigating the relationship between the elasticity of scale and the farm size. We use a balanced panel data set of 371 specialised crop farms for the years 2004-2007. A non-parametric specification test shows that neither the Cobb-Douglas function nor the Translog function are consistent with the "true...
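
A minimal example of the kind of non-parametric estimator involved is Nadaraya-Watson kernel regression. This is a generic sketch with a Gaussian kernel and a fixed bandwidth, not the specification test or panel estimator used in the paper:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Gaussian-kernel local average: estimate E[y | x] without
    assuming any parametric functional form."""
    d = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)                 # kernel weight of each training point
    return (w @ y_train) / w.sum(axis=1)      # weighted average per evaluation point
```

The bandwidth controls the bias-variance trade-off and is typically chosen by cross-validation.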

  3. Parametric inference for biological sequence analysis.

    Science.gov (United States)

    Pachter, Lior; Sturmfels, Bernd

    2004-11-16

    One of the major successes in computational biology has been the unification, by using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied to these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems that are associated with different statistical models. This article introduces the polytope propagation algorithm for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.
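
For a hidden Markov model, the sum-product algorithm the article refers to reduces to the classic forward recursion. The sketch below computes the total probability of an observation sequence; it is a generic textbook instance, not the polytope propagation variant:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Sum-product (forward) recursion for an HMM.
    pi: initial state distribution, A: state transition matrix,
    B[state, symbol]: emission probabilities, obs: observed symbol indices."""
    alpha = pi * B[:, obs[0]]            # joint prob. of state and first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # sum out previous state, emit next symbol
    return alpha.sum()                   # total probability of the sequence
```

Replacing (sum, product) by (max, product) gives Viterbi (MAP) decoding; polytope propagation swaps the semiring once more, to Newton polytopes, to track how the MAP solution depends on the parameters.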

  4. Parametric analysis of a magnetized cylindrical plasma

    International Nuclear Information System (INIS)

    Ahedo, Eduardo

    2009-01-01

    The relevant macroscopic model, the spatial structure, and the parametric regimes of a low-pressure plasma confined by a cylinder and an axial magnetic field are discussed for the small-Debye-length limit, making use of asymptotic techniques. The plasma response is fully characterized by three dimensionless parameters, related to the electron gyroradius and the electron and ion collision mean-free-paths. Three regimes arise: the unmagnetized regime, the main magnetized regime, and, for a low electron-collisionality plasma, an intermediate-magnetization regime. In the magnetized regimes, electron azimuthal inertia is shown to be a dominant phenomenon in part of the quasineutral plasma region and to set up before ion radial inertia. In the main magnetized regime, the plasma structure consists of a bulk diffusive region, a thin layer governed by electron inertia, a thinner sublayer controlled by ion inertia, and the non-neutral Debye sheath. The solution of the main inertial layer yields that the electron azimuthal energy near the wall is larger than the electron thermal energy, making electron resistivity effects non-negligible. The electron Boltzmann relation is satisfied only in the very vicinity of the Debye sheath edge. Ion collisionality effects are irrelevant in the magnetized regime. Simple scaling laws for plasma production and particle and energy fluxes to the wall are derived.

  5. Non-parametric analysis of production efficiency of poultry egg ...

    African Journals Online (AJOL)

    Non-parametric analysis of production efficiency of poultry egg farmers in Delta ... analysis of factors affecting the output of poultry farmers showed that stock ... should be put in place for farmers to learn the best farm practices carried out on the ...

  6. Parametric Resonance in the Early Universe - A Fitting Analysis

    CERN Document Server

    Figueroa, Daniel G.

    2017-02-01

    Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrise the dynamics' outcome scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasise the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequ...

  7. A generalized parametric response mapping method for analysis of multi-parametric imaging: A feasibility study with application to glioblastoma.

    Science.gov (United States)

    Lausch, Anthony; Yeung, Timothy Pok-Chi; Chen, Jeff; Law, Elton; Wang, Yong; Urbini, Benedetta; Donelli, Filippo; Manco, Luigi; Fainardi, Enrico; Lee, Ting-Yim; Wong, Eugene

    2017-11-01

    Analyses of the contrast-enhancing lesion (CEL) and a 1 cm shell of surrounding peri-tumoral tissue were performed. Prediction using tumor volume metrics was also investigated. Leave-one-out cross validation (LOOCV) was used in combination with permutation testing to assess preliminary predictive efficacy and estimate statistically robust P-values. The predictive endpoint was overall survival (OS) greater than or equal to the median OS of 18.2 months. Single-parameter PRM and multi-parametric response maps (MPRMs) were generated for each patient and used to predict OS via the LOOCV. Tumor volume metrics (P ≥ 0.071 ± 0.01) and single-parameter PRM analyses (P ≥ 0.170 ± 0.01) were not found to be predictive of OS within this study. MPRM analysis of the peri-tumoral region but not the CEL was found to be predictive of OS with a classification sensitivity, specificity and accuracy of 80%, 100%, and 89%, respectively (P = 0.001 ± 0.01). The feasibility of a generalized MPRM analysis framework was demonstrated with improved prediction of overall survival compared to the original single-parameter method when applied to a glioblastoma dataset. The proposed algorithm takes the spatial heterogeneity in multi-parametric response into consideration and enables visualization. MPRM analysis of peri-tumoral regions was shown to have predictive potential supporting further investigation of a larger glioblastoma dataset. © 2017 American Association of Physicists in Medicine.

  8. Using non-parametric methods in econometric production analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify the functional form of the production function. Most often, the Cobb... results, including measures that are of interest to applied economists, such as elasticities. Therefore, we propose to use nonparametric econometric methods. First, they can be applied to verify the functional form used in parametric estimations of production functions. Second, they can be directly used...

  9. ITER parametric analysis and operational performance

    International Nuclear Information System (INIS)

    Perkins, L.J.; Spears, W.R.; Galambos, J.D.

    1991-01-01

    The key components of the ITER Conceptual Design Activities (CDA) include determination of the optimum design, investigation of operation in various modes, recommendation of baseline performance specifications, studies of the sensitivity of the ITER design to uncertainties in physics, investigation of operational flexibility, assessment of alternative designs, and determination of implications for extrapolation to prospective DEMO reactors. These terms of reference are reported in this document. Refs, figs and tabs

  10. Containment parametric analysis for loss of coolant accident

    International Nuclear Information System (INIS)

    Fabjan, L.

    1985-01-01

    Full text: This paper presents a parametric analysis of the double containment response to LOCA using the CONTEMPT-LT/28 code. The influence of the active and passive heat sinks on the thermodynamic parameters in the containment after large and small LOCA was considered. (author)

  11. Parametric Sensitivity Tests—European Polymer Electrolyte Membrane Fuel Cell Stack Test Procedures

    DEFF Research Database (Denmark)

    Araya, Samuel Simon; Andreasen, Søren Juhl; Kær, Søren Knudsen

    2014-01-01

    As fuel cells are increasingly commercialized for various applications, harmonized and industry-relevant test procedures are necessary to benchmark tests and to ensure comparability of stack performance results from different parties. This paper reports the results of parametric sensitivity tests performed based on test procedures proposed by a European project, Stack-Test. The sensitivity of a Nafion-based low temperature PEMFC stack’s performance to parametric changes was the main objective of the tests. Four crucial parameters for fuel cell operation were chosen: relative humidity, temperature, pressure, and stoichiometry at varying current density. Furthermore, procedures for polarization curve recording were also tested, both in ascending and descending current directions.

  12. Low Parametric Sensitivity Realizations with relaxed L2-dynamic-range-scaling constraints

    OpenAIRE

    Hilaire , Thibault

    2009-01-01

    This paper presents a new dynamic-range scaling for the implementation of filters/controllers in state-space form. Relaxing the classical L2-scaling constraints by specific fixed-point considerations allows for a higher degree of freedom for the optimal L2-parametric sensitivity problem. However, overflows in the implementation are still prevented. The underlying constrained problem is converted into an unconstrained problem for which a solution can be provided. This leads to realizations whi...

  13. Multi-level approach for parametric roll analysis

    Science.gov (United States)

    Kim, Taeyoung; Kim, Yonghwan

    2011-03-01

    The present study considers a multi-level approach for the analysis of parametric roll phenomena. Three computation methods, GM variation, impulse response functions (IRF), and a Rankine panel method, are applied for the multi-level approach. The IRF and Rankine panel methods are based on a weakly nonlinear formulation which includes nonlinear Froude-Krylov and restoring forces. In the computed results of the parametric roll occurrence test in regular waves, the IRF and Rankine panel methods show similar tendencies. Although the GM variation approach predicts the occurrence of parametric roll at twice the roll natural frequency, its frequency criterion shows a little difference. Nonlinear roll motion in bichromatic waves is also considered in this study. To demonstrate the unstable roll motion in bichromatic waves, theoretical and numerical approaches are applied. The occurrence of parametric roll is theoretically examined by introducing the quasi-periodic Mathieu equation. Instability criteria are well predicted from the stability analysis in the theoretical approach. From the Fourier analysis, it has been verified that difference-frequency effects create the unstable roll motion. The occurrence of unstable roll motion in bichromatic waves was also observed in the experiment.
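
The twice-natural-frequency instability predicted by the GM variation (Mathieu) approach can be reproduced with a simple numerical experiment: integrate an undamped roll equation with a harmonically varying restoring term, at and away from twice the natural frequency. This is a generic sketch with made-up parameters, not the paper's ship model:

```python
import numpy as np

def simulate_roll(omega_n, h, omega_e, t_end=100.0, dt=0.01, phi0=0.01):
    """RK4 integration of phi'' + omega_n^2 * (1 + h*cos(omega_e*t)) * phi = 0,
    a Mathieu-type roll equation with GM (restoring) variation of amplitude h.
    Returns the maximum |phi| reached, used here as a growth indicator."""
    def f(t, y):
        phi, dphi = y
        return np.array([dphi, -omega_n**2 * (1.0 + h * np.cos(omega_e * t)) * phi])
    y = np.array([phi0, 0.0])
    t, max_amp = 0.0, abs(phi0)
    while t < t_end:
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        max_amp = max(max_amp, abs(y[0]))
    return max_amp
```

With excitation at twice the natural frequency the response grows by orders of magnitude (principal parametric resonance), while a detuned excitation stays bounded; adding roll damping would raise the instability threshold.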

  14. Parametric resonance in the early Universe—a fitting analysis

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa, Daniel G. [Theoretical Physics Department, CERN, Geneva (Switzerland); Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es [Instituto de Física Teórica IFT-UAM/CSIC, Universidad Autónoma de Madrid, Cantoblanco 28049, Madrid (Spain)

    2017-02-01

    Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well-studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. In order to improve on this situation in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.

  15. Parametric resonance in the early Universe—a fitting analysis

    International Nuclear Information System (INIS)

    Figueroa, Daniel G.; Torrentí, Francisco

    2017-01-01

    Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well-studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. In order to improve on this situation in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.

  16. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  17. Efficiency Analysis of German Electricity Distribution Utilities: Non-Parametric and Parametric Tests

    OpenAIRE

    von Hirschhausen, Christian R.; Cullmann, Astrid

    2005-01-01

    This paper applies non-parametric and parametric tests to assess the efficiency of electricity distribution companies in Germany. We address traditional issues in electricity sector benchmarking, such as the role of scale effects and optimal utility size, as well as new evidence specific to the situation in Germany. We use labour, capital, and peak load capacity as inputs, and units sold and the number of customers as output. The data cover 307 (out of 553) ...

  18. Assessing scenario and parametric uncertainties in risk analysis: a model uncertainty audit

    International Nuclear Information System (INIS)

    Tarantola, S.; Saltelli, A.; Draper, D.

    1999-01-01

    In the present study, a model audit is carried out on a computational model used for predicting maximum radiological doses to humans in the field of nuclear waste disposal. Global uncertainty and sensitivity analyses are employed to assess output uncertainty and to quantify the contribution of parametric and scenario uncertainties to the model output. These tools are of fundamental importance for risk analysis and decision-making purposes

  19. Yadage and Packtivity - analysis preservation using parametrized workflows

    Science.gov (United States)

    Cranmer, Kyle; Heinrich, Lukas

    2017-10-01

    Preserving data analyses produced by the collaborations at LHC in a parametrized fashion is crucial in order to maintain reproducibility and re-usability. We argue for a declarative description in terms of individual processing steps - “packtivities” - linked through a dynamic directed acyclic graph (DAG) and present an initial set of JSON schemas for such a description and an implementation - “yadage” - capable of executing workflows of analysis preserved via Linux containers.
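
The packtivity/DAG idea can be illustrated with a toy scheduler in which each processing step is a pure function of the outputs of the steps it needs; the step names and structure below are invented for illustration and do not reflect yadage's actual JSON schemas:

```python
# A toy "packtivity"-style workflow: each step is a pure function of the
# outputs of the steps it needs.  Names and structure are invented.
steps = {
    "generate": {"needs": [], "run": lambda deps: list(range(5))},
    "select":   {"needs": ["generate"],
                 "run": lambda deps: [x for x in deps["generate"] if x % 2 == 0]},
    "count":    {"needs": ["select"], "run": lambda deps: len(deps["select"])},
}

def execute(steps):
    """Walk the DAG depth-first, caching each step's output
    (memoized topological execution)."""
    done = {}
    def run(name):
        if name not in done:
            deps = {n: run(n) for n in steps[name]["needs"]}
            done[name] = steps[name]["run"](deps)
        return done[name]
    for name in steps:
        run(name)
    return done

print(execute(steps)["count"])  # 3
```

In the real system each "run" would launch a containerized job rather than a lambda, but the dependency-driven execution order is the same.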

  20. Validation and sensitivity tests on improved parametrizations of a land surface process model (LSPM) in the Po Valley

    International Nuclear Information System (INIS)

    Cassardo, C.; Carena, E.; Longhetto, A.

    1998-01-01

    The Land Surface Process Model (LSPM) has been improved with respect to the first version of 1994. The modifications involve the parametrizations of the radiation terms and of the turbulent heat fluxes. A parametrization of runoff has also been developed in order to close the hydrologic balance. This second version of LSPM has been validated against experimental data gathered at Mottarone (Verbania, Northern Italy) during a field experiment. The results of this validation show that the new version is able to apportion the energy into sensible and latent heat fluxes. LSPM has also been submitted to a series of sensitivity tests in order to investigate the hydrological part of the model. The physical quantities selected in these sensitivity experiments were the initial soil moisture content and the rainfall intensity. In each experiment, the model was forced with the observations carried out at the synoptic stations of San Pietro Capofiume (Po Valley, Italy). The observed characteristics of soil and vegetation (not involved in the sensitivity tests) were used as initial and boundary conditions. The results of the simulations show that LSPM can reproduce well the energy, heat and water budgets and their behaviour as the selected parameters vary. A careful analysis of the LSPM output also shows the importance of identifying the effective soil type

  1. Sensitivity enhancement of remotely coupled NMR detectors using wirelessly powered parametric amplification.

    Science.gov (United States)

    Qian, Chunqi; Murphy-Boesch, Joseph; Dodd, Stephen; Koretsky, Alan

    2012-09-01

    A completely wireless detection coil with an integrated parametric amplifier has been constructed to provide local amplification and transmission of MR signals. The sample coil is one element of a parametric amplifier using a zero-bias diode that mixes the weak MR signal with a strong pump signal obtained from an inductively coupled external loop. The NMR sample coil develops current gain via a reduction in the effective coil resistance. Higher gain can be obtained by adjusting the level of the pumping power closer to the oscillation threshold, but the gain is ultimately constrained by the bandwidth requirement of MRI experiments. A feasibility study shows that, on a NaCl/D₂O phantom, ²³Na signals with 20 dB of gain can be readily obtained with a concomitant bandwidth of 144 kHz. This gain is high enough that the integrated coil with parametric amplifier, coupled inductively to external loops, can provide sensitivity approaching that of a direct wire connection. Copyright © 2012 Wiley Periodicals, Inc.

  2. Fourier analysis of the parametric resonance in neutrino oscillations

    International Nuclear Information System (INIS)

    Koike, Masafumi; Ota, Toshihiko; Saito, Masako; Sato, Joe

    2009-01-01

    Parametric enhancement of the appearance probability of neutrino oscillation in inhomogeneous matter is studied. Fourier expansion of the matter density profile leads to a simple resonance condition and shows that each Fourier mode modifies the energy spectrum of the oscillation probability around the corresponding energy; below the MSW resonance energy, a large-scale variation modifies the spectrum at high energies while a small-scale one does so at low energies. In contrast to the simple parametric resonance, the enhancement of the oscillation probability is itself a slow oscillation, as demonstrated by a numerical analysis with a single Fourier mode of the matter density. We derive an analytic solution to the evolution equation at the resonance energy, including an expression for the frequency of the slow oscillation.

  3. Pluripotency gene network dynamics: System views from parametric analysis.

    Science.gov (United States)

    Akberdin, Ilya R; Omelyanchuk, Nadezda A; Fadeev, Stanislav I; Leskova, Natalya E; Oschepkova, Evgeniya A; Kazantsev, Fedor V; Matushkin, Yury G; Afonnikov, Dmitry A; Kolchanov, Nikolay A

    2018-01-01

    Multiple experimental data sets have demonstrated that the core gene network orchestrating self-renewal and differentiation of mouse embryonic stem cells involves the activity of the Oct4, Sox2 and Nanog genes through a number of positive feedback loops among them. However, recent studies indicated that the architecture of the core gene network should also incorporate negative Nanog autoregulation and might not include positive feedbacks from Nanog to Oct4 and Sox2. A thorough parametric analysis of the mathematical model based on this revisited core regulatory circuit identified substantial changes in model dynamics depending on the strength of Oct4 and Sox2 activation and the molecular complexity of Nanog autorepression. The analysis showed the existence of four dynamical domains with different numbers of stable and unstable steady states. We hypothesize that these domains can constitute checkpoints in the developmental progression from naïve to primed pluripotency and vice versa. During this transition, parametric conditions exist which generate oscillatory behavior of the system, explaining the heterogeneity in expression of pluripotency and differentiation factors in serum ESC cultures. Finally, simulations showed that the addition of positive feedbacks from Nanog to Oct4 and Sox2 mainly enlarges the parametric space of the naïve ESC state, in which pluripotency factors are strongly expressed while differentiation factors are repressed.
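
The kind of result described here, where the number of steady states changes as an activation strength is varied, can be reproduced on a deliberately simplified one-gene positive-feedback circuit (an illustrative toy, not the authors' Oct4/Sox2/Nanog model):

```python
def n_steady_states(a, b=0.05, n=2, K=1.0, xmax=4.0, steps=4000):
    """Count steady states of dx/dt = b + a*x^n/(K^n + x^n) - x (a toy
    auto-activating gene with basal rate b) via sign changes on a grid."""
    f = lambda x: b + a * x**n / (K**n + x**n) - x
    roots, prev = 0, f(0.0)
    for i in range(1, steps + 1):
        cur = f(i * xmax / steps)
        if prev * cur < 0:
            roots += 1
        prev = cur
    return roots

print(n_steady_states(a=2.0), n_steady_states(a=0.5))  # 3 1
```

With these illustrative constants, a = 2.0 yields three steady states (a bistable domain: two stable, one unstable), while a = 0.5 yields one; scanning a traces out the boundaries between dynamical domains, which is the essence of the parametric analysis above.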

  4. Classicalization times of parametrically amplified 'Schroedinger cat' states coupled to phase-sensitive reservoirs

    International Nuclear Information System (INIS)

    Dodonov, V.V.; Valverde, C.; Souza, L.S.; Baseia, B.

    2011-01-01

    The exact Wigner function of a parametrically excited quantum oscillator in a phase-sensitive amplifying/attenuating reservoir is found for initial even/odd coherent states. Studying the evolution of negativity of the Wigner function we show the difference between the 'initial positivization time' (IPT), which is inversely proportional to the square of the initial size of the superposition, and the 'final positivization time' (FPT), which does not depend on this size. Both these times can be made arbitrarily long in maximally squeezed high-temperature reservoirs. Besides, we find the conditions when some (small) squeezing can exist even after the Wigner function becomes totally positive. -- Highlights: → We study parametric excitation of a quantum oscillator in phase-sensitive baths. → Exact time-dependent Wigner function for initial even/odd coherent states is found. → The evolution of negativity of Wigner function is compared with the squeezing dynamics. → The difference between initial and final 'classicalization times' is emphasized. → Both these times can be arbitrarily long for rigged reservoirs at infinite temperature.

  5. An examination of the parametric properties of four noise sensitivity measures

    DEFF Research Database (Denmark)

    van Kamp, Irene; Ellermeier, Wolfgang; Lopez-Barrio, Isabel

    2006-01-01

    Noise sensitivity (NS) is a personality trait with a strong influence on reactions to noise. Studies of reaction should include a standard measure of NS that is founded on a theoretically justified definition of NS, and an examination of existing NS measures' parametric properties (internal consistency; stability; convergent and predictive validity, demographics and lifestyle). At each of six laboratory centres (Aalborg; London; Sydney; Dortmund; Madrid; Amsterdam), participants will complete four NS measures on each of two occasions. On one occasion, participants will complete a task while exposed to recorded aircraft noise ... A standard NS measure should demonstrate high reliability, and should predict responses to noise. Discussion is welcomed and will focus on validation strategies and optimizing the study design.

  6. TOLERANCE SENSITIVITY ANALYSIS: THIRTY YEARS LATER

    Directory of Open Access Journals (Sweden)

    Richard E. Wendell

    2010-12-01

    Tolerance sensitivity analysis was conceived in 1980 as a pragmatic approach to effectively characterize a parametric region over which objective function coefficients and right-hand-side terms in linear programming could vary simultaneously and independently while maintaining the same optimal basis. As originally proposed, the tolerance region corresponds to the maximum percentage by which coefficients or terms could vary from their estimated values. Over the last thirty years the original results have been extended in a number of ways and applied in a variety of applications. This paper is a critical review of tolerance sensitivity analysis, including extensions and applications.
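
The tolerance idea can be demonstrated on a toy linear program: find how far the objective coefficients may vary, simultaneously and independently, before the optimal vertex (basis) changes. Because each vertex's objective value is linear in the coefficients, it suffices to check the extreme (corner) perturbations. All numbers below are illustrative:

```python
from itertools import product

# Corners of the feasible region of a toy LP:
#   max c1*x1 + c2*x2  s.t.  x1 + x2 <= 4, x1 <= 3, x2 <= 3, x >= 0
VERTICES = [(0, 0), (3, 0), (3, 1), (1, 3), (0, 3)]

def argmax_vertex(c):
    return max(VERTICES, key=lambda v: c[0] * v[0] + c[1] * v[1])

def same_basis_under_tolerance(c, t):
    """True if the optimal vertex survives every simultaneous, independent
    +/- t fractional perturbation of the objective coefficients.  Each
    vertex's value is linear in c, so checking the corner cases suffices."""
    base = argmax_vertex(c)
    return all(argmax_vertex((c[0] * (1 + s0 * t), c[1] * (1 + s1 * t))) == base
               for s0, s1 in product((-1, 1), repeat=2))

print(same_basis_under_tolerance((3, 2), 0.10))  # True: 10% is within tolerance
print(same_basis_under_tolerance((3, 2), 0.25))  # False: the optimal basis changes
```

For c = (3, 2) the vertex (3, 1) stays optimal up to a maximum tolerance of 20% in this toy example; bisecting on t recovers that maximum percentage.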

  7. Uncertainty importance analysis using parametric moment ratio functions.

    Science.gov (United States)

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of the inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of the model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction of the model output mean and variance by operating on the variances of the model inputs. Unbiased and progressively unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure to achieve a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
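
The core quantity, a variance ratio function, can be estimated with plain Monte Carlo: evaluate the output variance before and after shrinking one input's variance. A toy model where the ratio is known analytically (Y = X1 + X2², so halving σ1 from 2 to 1 gives a ratio of (1 + 2)/(4 + 2) = 0.5); this is a naive two-run estimator, not the paper's single-sample-set estimators:

```python
import random
import statistics

def output_variance(sigma1, n=200_000, seed=1):
    """Monte Carlo variance of Y = X1 + X2^2 with X1 ~ N(0, sigma1^2), X2 ~ N(0, 1)."""
    rng = random.Random(seed)
    ys = [rng.gauss(0.0, sigma1) + rng.gauss(0.0, 1.0) ** 2 for _ in range(n)]
    return statistics.pvariance(ys)

v_full    = output_variance(sigma1=2.0)  # analytically 2^2 + Var(X2^2) = 4 + 2 = 6
v_reduced = output_variance(sigma1=1.0)  # analytically 1 + 2 = 3
print(v_reduced / v_full)                # variance ratio, analytically 0.5
```

A ratio near 0.5 tells the analyst that halving the standard deviation of X1 removes about half of the output variance, which is exactly the targeted-reduction reasoning the framework formalizes.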

  8. WHAT IF (Sensitivity Analysis)

    Directory of Open Access Journals (Sweden)

    Iulian N. BUJOREANU

    2011-01-01

    Sensitivity analysis is such a well-known and deeply analyzed subject that anyone entering the field may feel unable to add anything new. Still, there are many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons for which it has to be implemented in most analyses in decision-making processes. Risk analysis is of utmost importance in dealing with resource allocation and is presented at the beginning of the paper as the initial motivation for implementing sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on its use and utility. Finally, the conclusion raises the question of whether the future can be generated and analyzed before it unfolds so that, when it happens, it brings less uncertainty.
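
A common concrete form of what-if sensitivity analysis is the one-at-a-time sweep behind tornado diagrams: perturb each input in turn and rank the inputs by the swing they induce in the decision variable. A minimal sketch with an invented profit model (all names and numbers hypothetical):

```python
# Hypothetical profit model: all names and numbers are invented for illustration.
base = {"units": 1000, "price": 25.0, "unit_cost": 15.0, "fixed": 6000.0}

def profit(p):
    return p["units"] * (p["price"] - p["unit_cost"]) - p["fixed"]

def what_if_swings(params, rel=0.10):
    """One-at-a-time what-if: perturb each input by +/- rel and record the
    resulting swing in profit (the bar lengths of a tornado diagram)."""
    swings = {}
    for k in params:
        hi = profit(dict(params, **{k: params[k] * (1 + rel)}))
        lo = profit(dict(params, **{k: params[k] * (1 - rel)}))
        swings[k] = abs(hi - lo)
    return swings

swings = what_if_swings(base)
print(sorted(swings, key=swings.get, reverse=True))  # ['price', 'unit_cost', 'units', 'fixed']
```

The ranking immediately shows where reducing uncertainty buys the most: here a 10% error in price moves profit far more than a 10% error in fixed cost.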

  9. Parametric image reconstruction using spectral analysis of PET projection data

    International Nuclear Information System (INIS)

    Meikle, Steven R.; Matthews, Julian C.; Cunningham, Vincent J.; Bailey, Dale L.; Livieratos, Lefteris; Jones, Terry; Price, Pat

    1998-01-01

    Spectral analysis is a general modelling approach that enables the calculation of parametric images from reconstructed tracer kinetic data independent of an assumed compartmental structure. We investigated the validity of applying spectral analysis directly to projection data, motivated by the advantages that: (i) the number of reconstructions is reduced by an order of magnitude and (ii) iterative reconstruction becomes practical, which may improve the signal-to-noise ratio (SNR). A dynamic software phantom with typical 2-[¹¹C]thymidine kinetics was used to compare projection-based and image-based methods and to assess bias-variance trade-offs using iterative expectation maximization (EM) reconstruction. We found that the two approaches are not exactly equivalent due to properties of the non-negative least-squares algorithm. However, the differences are small (K1 and, to a lesser extent, VD). The optimal number of EM iterations was 15-30, with up to a two-fold improvement in SNR over filtered back projection. We conclude that projection-based spectral analysis with EM reconstruction yields accurate parametric images with high SNR and has potential application to a wide range of positron emission tomography ligands. (author)

  10. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
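
The bounding factor and the associated E-value from this line of work have simple closed forms, so the whole sensitivity analysis fits in a few lines (formulas as usually stated for a risk ratio RR > 1; the numeric inputs below are illustrative):

```python
import math

def bounding_factor(rr_eu, rr_ud):
    """Joint bounding factor B = RR_EU * RR_UD / (RR_EU + RR_UD - 1):
    unmeasured confounding with exposure-confounder and confounder-outcome
    risk ratios RR_EU, RR_UD can shrink an observed risk ratio by at most B."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1)

def e_value(rr):
    """Minimum strength both confounding associations must have (on the
    risk-ratio scale) to fully explain away an observed risk ratio rr > 1."""
    return rr + math.sqrt(rr * (rr - 1))

observed_rr = 2.0
print(observed_rr / bounding_factor(2, 2))  # ~1.5: bound after an RR=2/RR=2 confounder
print(e_value(observed_rr))                 # ~3.41: strength needed to explain away RR = 2
```

Reading the output: a confounder associated with both exposure and outcome at RR = 2 can shrink an observed RR of 2.0 to no less than 1.5, and only a confounder with both associations at roughly RR = 3.41 could explain the estimate away entirely.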

  11. Parametric analysis of temperature gradient across thermoelectric power generators

    Directory of Open Access Journals (Sweden)

    Khaled Chahine

    2016-06-01

    This paper presents a parametric analysis of power generation from thermoelectric generators (TEGs). The aim of the parametric analysis is to provide recommendations with respect to the applications of TEGs. To proceed, the one-dimensional steady-state solution of the heat diffusion equation is considered with various boundary conditions representing real encountered cases. Four configurations are tested. The first corresponds to the TEG heated with a constant temperature at its lower surface and cooled by a fluid at its upper surface. The second corresponds to the TEG heated with a constant heat flux at its lower surface and cooled by a fluid at its upper surface. The third corresponds to the TEG heated with a constant heat flux at its lower surface and cooled by a constant temperature at its upper surface. The fourth corresponds to the TEG heated by a fluid at its lower surface and cooled by a fluid at its upper surface. It is shown that the most promising configuration is the fourth one, with temperature differences of up to 70 °C achievable with a 150 °C heat source. Finally, a new concept based on configuration four is implemented and tested experimentally.
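
Configuration four (fluid on both faces) is a three-resistance series thermal circuit in the one-dimensional steady state, so the fraction of the source-sink temperature difference appearing across the module follows directly. Geometry and heat transfer coefficients below are assumed for illustration, not taken from the paper:

```python
def teg_delta_t(t_hot, t_cold, h_hot, h_cold, k=1.5, L=0.004, A=0.0016):
    """Steady 1-D series circuit for configuration four (fluid on both faces):
    returns the temperature drop across the TEG module itself.
    k [W/mK], L [m], A [m^2] and the h's [W/m^2K] are assumed values."""
    r_hot  = 1.0 / (h_hot * A)   # hot-side convection resistance  [K/W]
    r_teg  = L / (k * A)         # conduction through the module   [K/W]
    r_cold = 1.0 / (h_cold * A)  # cold-side convection resistance [K/W]
    q = (t_hot - t_cold) / (r_hot + r_teg + r_cold)  # heat flow [W]
    return q * r_teg

print(round(teg_delta_t(150, 25, h_hot=1000, h_cold=1000), 1))  # 71.4
```

With the assumed h = 1000 W/m²K on both faces, this toy circuit puts roughly 71 K across the module for a 150 °C source and 25 °C sink, the same order as the ~70 °C reported; halving both h's drops the usable difference to about 50 K, which is why the convective boundary conditions dominate the parametric study.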

  12. Thermodynamic model and parametric analysis of a tubular SOFC module

    Science.gov (United States)

    Campanari, Stefano

    Solid oxide fuel cells (SOFCs) have been considered in recent years as one of the most promising technologies for very high-efficiency electric energy generation from natural gas, both in simple fuel cell plants and in integrated gas turbine-fuel cell systems. Among SOFC technologies, tubular SOFC stacks with internal reforming have emerged as one of the most mature, with serious potential for future commercialization. In this paper, a thermodynamic model of a tubular SOFC stack with natural gas feeding, internal reforming of hydrocarbons and internal air preheating is proposed. In the first section of the paper, the model is discussed in detail, analyzing its calculating equations and tracing its logical steps; the model is then calibrated on the available data for a recently demonstrated tubular SOFC prototype plant. In the second section, a detailed parametric analysis of the stack working conditions is carried out as a function of the main operating parameters. The discussion of the results of the thermodynamic and parametric analysis yields interesting considerations about partial-load SOFC operation and load regulation, and about system design and integration with gas turbine cycles.

  13. Experimental implementation of a nonlinear beamsplitter based on a phase-sensitive parametric amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Yami; Feng, Jingliang; Cao, Leiming; Wang, Yaxian; Jing, Jietai, E-mail: jtjing@phy.ecnu.edu.cn [State Key Laboratory of Precision Spectroscopy, East China Normal University, Shanghai 200062 (China)

    2016-03-28

    Beamsplitters have played an important role in quantum optics experiments. They are often used to split and combine two beams, especially in the construction of an interferometer. In this letter, we experimentally implement a nonlinear beamsplitter using a phase-sensitive parametric amplifier based on four-wave mixing in hot rubidium vapor. We show that, despite the different frequencies of the two input beams, the output ports of the nonlinear beamsplitter exhibit interference phenomena. We measure the interference fringe visibility and study how various parameters, such as the intensity gain of the amplifier, the intensity ratio of the two input beams, and the one- and two-photon detunings, affect the behavior of the nonlinear beamsplitter. It may find potential applications in quantum metrology and quantum information processing.

  14. Multi-Directional Non-Parametric Analysis of Agricultural Efficiency

    DEFF Research Database (Denmark)

    Balezentis, Tomas

    This thesis seeks to develop methodologies for the assessment of agricultural efficiency and to apply them to Lithuanian family farms. In particular, we focus on three objectives throughout the research: (i) to perform a fully non-parametric analysis of efficiency effects, (ii) to extend ... When the proposed models were employed to analyse empirical data on Lithuanian family farm performance with the Multi-Directional Efficiency Analysis approach, we saw substantial differences in the efficiencies associated with different inputs. In particular, assets appeared to be the least efficiently used input relative to labour, intermediate consumption and land (in some cases land was not treated as a discretionary input). These findings call for further research on the relationships among financial structure, investment decisions, and efficiency in Lithuanian family farms. Application of different techniques ...

  15. Parametric thermal analysis of 75 MHz heavy ion RFQ

    International Nuclear Information System (INIS)

    Mishra, N.K.; Mehrotra, N.; Verma, V.; Gupta, A.K.; Bhagwat, P.V.

    2015-01-01

    An ECR-based Heavy Ion Accelerator comprising a superconducting Electron Cyclotron Resonance (ECR) ion source, a normal conducting RFQ (Radio Frequency Quadrupole) and superconducting niobium resonators is being developed at BARC under the XII plan. A state-of-the-art 18 GHz superconducting ECR ion source (PK-ISIS), jointly configured with Pantechnik, France, is operational at Van-de-Graaff, BARC. The electromagnetic design of the improved version of the 75 MHz heavy ion RFQ has been reported earlier. The previous thermal study of the 51 cm RFQ model showed a large temperature variation axially along the vane tip. A new coolant flow scheme has been worked out to optimize the axial temperature gradient. In this paper, the thermal analysis, including a parametric study of coolant flow rates and inlet temperature variation, is presented. (author)

  16. Simulation, optimal control and parametric sensitivity analysis of a molten carbonate fuel cell using a partial differential algebraic dynamic equation system; Simulation, Optimale Steuerung und Sensitivitaetsanalyse einer Schmelzkarbonat-Brennstoffzelle mithilfe eines partiellen differential-algebraischen dynamischen Gleichungssystems

    Energy Technology Data Exchange (ETDEWEB)

    Sternberg, K

    2007-02-08

    Molten carbonate fuel cells (MCFCs) allow an efficient and environmentally friendly energy production by converting the chemical energy contained in the fuel gas by virtue of electro-chemical reactions. In order to predict the effect of the electro-chemical reactions and to control the dynamical behavior of the fuel cell, a mathematical model has to be found. The molten carbonate fuel cell (MCFC) can indeed be described by a highly complex, large-scale, semi-linear system of partial differential algebraic equations. This system includes a reaction-diffusion equation of parabolic type, several reaction-transport equations of hyperbolic type, several ordinary differential equations and finally a system of integro-differential algebraic equations which describes the nonlinear non-standard boundary conditions for the entire partial differential algebraic equation system (PDAE system). The existence of an analytical or the computability of a numerical solution for this high-dimensional PDAE system depends on the kind of the differential equations and their special characteristics. Apart from theoretical investigations, the real process has to be controlled, more precisely optimally controlled. Hence, on the basis of the PDAE system an optimal control problem is set up, whose analytical and numerical solvability is closely linked to the solvability of the PDAE system. Moreover, the solution of that optimal control problem is made more difficult by inaccuracies in the underlying database, which does not supply sufficiently accurate values for the model parameters. Therefore the optimal control problem must also be investigated with respect to small disturbances of model parameters. The aim of this work is to analyze the relevant dynamic behavior of MCFCs and to develop concepts for their optimal process control. Therefore this work is concerned with the simulation, the optimal control and the sensitivity analysis of a mathematical model for MCFCs, which can be characterized

  17. Interference and Sensitivity Analysis.

    Science.gov (United States)

    VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth

    2014-11-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of one person's vaccination on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.

  18. Worst-case Throughput Analysis for Parametric Rate and Parametric Actor Execution Time Scenario-Aware Dataflow Graphs

    Directory of Open Access Journals (Sweden)

    Mladen Skelin

    2014-03-01

    Scenario-aware dataflow (SADF) is a prominent tool for the modeling and analysis of dynamic embedded dataflow applications. In SADF the application is represented as a finite collection of synchronous dataflow (SDF) graphs, each of which represents one possible application behaviour or scenario. A finite state machine (FSM) specifies the possible orders of scenario occurrences. The SADF model renders the tightest possible performance guarantees, but is limited by its finiteness. This means that, from a practical point of view, it can only handle dynamic dataflow applications that are characterized by a reasonably sized set of possible behaviours or scenarios. In this paper we remove this limitation for a class of SADF graphs by means of SADF model parametrization in terms of graph port rates and actor execution times. First, we formally define the semantics of the model relevant for throughput analysis based on (max,+) linear system theory and (max,+) automata. Second, by generalizing some of the existing results, we give algorithms for worst-case throughput analysis of parametric rate and parametric actor execution time acyclic SADF graphs with a fully connected, possibly infinite state transition system. Third, we demonstrate our approach on a few realistic applications from the digital signal processing (DSP) domain mapped onto an embedded multi-processor architecture.
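
The (max,+) semantics referred to above can be made concrete: self-timed execution iterates x(k+1) = A ⊗ x(k) over completion-time vectors, and the asymptotic growth rate λ (the maximum cycle mean of A) gives the worst-case throughput 1/λ. A toy two-actor example with invented delays:

```python
def maxplus_matvec(A, x):
    """(max,+) matrix-vector product: (A ⊗ x)_i = max_j (A[i][j] + x[j])."""
    return [max(a + b for a, b in zip(row, x)) for row in A]

# Inter-actor delays of a toy two-actor graph (a missing edge would be -inf).
A = [[2.0, 5.0],
     [3.0, 1.0]]

x = [0.0, 0.0]
N = 50
for _ in range(N):         # self-timed execution: x(k+1) = A ⊗ x(k)
    x = maxplus_matvec(A, x)

cycle_time = max(x) / N    # converges to the maximum cycle mean, here (5 + 3)/2 = 4
print(cycle_time, 1.0 / cycle_time)  # 4.0 0.25  (worst-case throughput = 1/λ)
```

In the parametric setting of the paper, the entries of A are symbolic expressions in the rates and execution times, and the analysis tracks how λ (hence the throughput guarantee) varies over the parameter space.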

  19. Beyond sensitivity analysis

    DEFF Research Database (Denmark)

    Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad

    2018-01-01

    of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological...... point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only...... are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it on the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel...

  20. Chemical kinetic functional sensitivity analysis: Elementary sensitivities

    International Nuclear Information System (INIS)

    Demiralp, M.; Rabitz, H.

    1981-01-01

    Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the ith species concentration and α_j is the jth system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research.

  1. Understanding dynamics using sensitivity analysis: caveat and solution

    Science.gov (United States)

    2011-01-01

    Background Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology, in which the sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe the dynamical behaviour of biological systems, the PSA has subsequently been used to elucidate important cellular processes that regulate these dynamics. However, in this paper, we show that the PSA coefficients are not suitable for inferring the mechanisms by which dynamical behaviour arises and in fact they can even lead to incorrect conclusions. Results A careful interpretation of parametric perturbations used in the PSA is presented here to explain the issue of using this analysis in inferring dynamics. In short, the PSA coefficients quantify the integrated change in the system behaviour due to persistent parametric perturbations, and thus the dynamical information of when a parameter perturbation matters is lost. To get around this issue, we present a new sensitivity analysis based on impulse perturbations on system parameters, which is named impulse parametric sensitivity analysis (iPSA). The inability of PSA and the efficacy of iPSA in revealing mechanistic information of a dynamical system are illustrated using two examples involving switch activation. Conclusions The interpretation of the PSA coefficients of dynamical systems should take into account the persistent nature of parametric perturbations involved in the derivation of this analysis. The application of PSA to identify the controlling mechanism of dynamical behaviour can be misleading. By using impulse perturbations, introduced at different times, the iPSA provides the necessary information to understand how dynamics is achieved, i.e. which parameters are essential and when they become important. PMID:21406095
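    The contrast between persistent and impulse perturbations can be sketched on a toy kinetic model du/dt = k1 − k2·u (a hypothetical example, not one of the paper's switch-activation case studies):

```python
import numpy as np

def simulate(k1, k2, t_end=10.0, dt=0.01, pulse=None):
    # Forward-Euler integration of du/dt = k1 - k2*u, starting at u(0) = 0.
    # `pulse = (factor, t0, width)` scales k1 by `factor` only during
    # [t0, t0 + width) -- an impulse-like perturbation in the iPSA spirit.
    n = int(round(t_end / dt))
    u = np.empty(n + 1)
    u[0] = 0.0
    for i in range(n):
        t = i * dt
        k1_eff = k1
        if pulse is not None:
            factor, t0, width = pulse
            if t0 <= t < t0 + width:
                k1_eff = factor * k1
        u[i + 1] = u[i] + dt * (k1_eff - k2 * u[i])
    return u

base = simulate(1.0, 0.5)

# PSA-style coefficient: response to a *persistent* 5% change in k1,
# normalized to a derivative estimate d u(t_end) / d k1.
psa = (simulate(1.05, 0.5)[-1] - base[-1]) / 0.05

# iPSA-style probes: the same impulse applied early vs late. The early
# pulse has largely decayed away by t_end, so only the late one still
# matters -- timing information the single PSA number cannot convey.
early_effect = simulate(1.0, 0.5, pulse=(2.0, 1.0, 0.1))[-1] - base[-1]
late_effect = simulate(1.0, 0.5, pulse=(2.0, 9.0, 0.1))[-1] - base[-1]
```

    The single PSA number summarizes the integrated response, whereas comparing the early and late impulse effects shows when k1 matters.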

  2. Parametric Analysis of a Hypersonic Inlet using Computational Fluid Dynamics

    Science.gov (United States)

    Oliden, Daniel

    For CFD validation, hypersonic flow fields are simulated and compared with experimental data specifically designed to recreate conditions found by hypersonic vehicles. Simulated flow fields on a cone-ogive with flare at Mach 7.2 are compared with experimental data from NASA Ames Research Center 3.5" hypersonic wind tunnel. A parametric study of turbulence models is presented and concludes that the k-kl-omega transition and SST transition turbulence model have the best correlation. Downstream of the flare's shockwave, good correlation is found for all boundary layer profiles, with some slight discrepancies of the static temperature near the surface. Simulated flow fields on a blunt cone with flare above Mach 10 are compared with experimental data from CUBRC LENS hypervelocity shock tunnel. Lack of vibrational non-equilibrium calculations causes discrepancies in heat flux near the leading edge. Temperature profiles, where non-equilibrium effects are dominant, are compared with the dissociation of molecules to show the effects of dissociation on static temperature. Following the validation studies is a parametric analysis of a hypersonic inlet from Mach 6 to 20. Compressor performance is investigated for numerous cowl leading edge locations up to speeds of Mach 10. The variable cowl study showed positive trends in compressor performance parameters for a range of Mach numbers that arise from maximizing the intake of compressed flow. An interesting phenomenon due to the change in shock wave formation for different Mach numbers developed inside the cowl that had a negative influence on the total pressure recovery. Investigation of the hypersonic inlet at different altitudes is performed to study the effects of Reynolds number, and consequently, turbulent viscous effects on compressor performance. Turbulent boundary layer separation was noted as the cause for a change in compressor performance parameters due to a change in Reynolds number. This effect would not be

  3. Parametric analysis of geothermal residential heating and cooling application

    Energy Technology Data Exchange (ETDEWEB)

    Sagia, Zoi N.; Stegou, Athina B.; Rakopoulos, Constantinos D. [National Technical University of Athens, School of Mechanical Engineering, Department of Thermal Engineering, Heroon Polytechniou 9, 15780, Zografou, Attiki (Greece)

    2012-07-01

    A study is carried out to evaluate the efficiency of a Ground Source Heat Pump (GSHP) system with vertical heat exchangers applied to a three-storey terraced building, with total heated area 271.56 m2, standing on Hellinikon, Athens. The estimation of building loads is made with TRNSYS 16.1 using climatic data calculated by Meteonorm 6.1. The GSHP system is modeled with two other packages, GLD 2009 and GLHEPRO 4.0. A comparison of the mean fluid temperature (fluid temperature in the borehole calculated as the average of exiting and entering fluid temperature), computed by the above software, shows how close the results are. In addition, a parametric analysis is done to examine the influence of undisturbed ground temperature, ground heat exchanger (GHE) length and borehole separation distance on the system's operational characteristics so as to cover the building loads. Finally, a 2D transient simulation is performed by means of COMSOL Multiphysics 4.0a. The carrier fluid in the borehole is modeled as a solid with extremely high thermal conductivity, extracting from and injecting to the ground the hourly load profile calculated by TRNSYS. The mean fluid temperature and the borehole wall temperature are computed for an entire year and compared with the values calculated by GLD.

  4. Parametric systems analysis of the Modular Stellarator Reactor (MSR)

    International Nuclear Information System (INIS)

    Miller, R.L.; Krakowski, R.A.; Bathke, C.G.

    1982-05-01

    The close coupling in the stellarator/torsatron/heliotron (S/T/H) between coil design (peak field, current density, forces), magnetics topology (transform, shear, well depth), and plasma performance (equilibrium, stability, transport, beta) complicates the reactor assessment more so than for most magnetic confinement systems. In order to provide an additional degree of resolution of this problem for the Modular Stellarator Reactor (MSR), a parametric systems model has been developed and applied. This model reduces key issues associated with plasma performance, first-wall/blanket/shield (FW/B/S), and coil design to a simple relationship between beta, system geometry, and a number of indicators of overall plant performance. The results of this analysis can then be used to guide more detailed, multidimensional plasma, magnetics, and coil design efforts towards technically and economically viable operating regimes. In general, it is shown that beta values > 0.08 may be needed if the MSR approach is to be substantially competitive with other approaches to magnetic fusion in terms of system power density, mass utilization, and cost for total power output around 4.0 GWt; lower powers will require even higher betas.

  5. Parametric analysis of stress in the ICF HYLIFE converter structure

    International Nuclear Information System (INIS)

    Hovingh, J.; Blink, J.A.

    1980-10-01

    The concept of a liquid-metal first wall in an ICF energy converter has a particularly attractive feature: the liquid metal absorbs the short-ranged fusion energy and moderates and attenuates the neutron energy so that the converter structure may have a lifetime similar to that of a conventional power plant. However, the sudden deposition of fusion energy in the liquid-metal first wall will result in disassembly of the liquid, which then impacts on the structure. The impact pressure on the structure is a strong function of the location and thickness of the liquid-metal first wall. The impact stress is determined by the impact pressure and duration and by the thickness and location of the structure. The maximum allowable stress is determined by the design stress criteria chosen by the structural designer. Scaling laws for the impact pressure as a function of the liquid-metal first wall location and mass are presented for a 2700 MW(f) (fusion power) plant with either one or four fusion reactor vessels. A methodology for determining the optimum combination of liquid-metal first wall geometry and first-structural-wall thickness is shown. Based on the methodology developed, a parametric analysis is presented of the liquid-metal flow rate and first-structural-wall requirements

  6. MOVES regional level sensitivity analysis

    Science.gov (United States)

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  7. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: • The paper discusses the validation of creep rupture models derived from statistical analysis. • It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. • The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. • The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  8. EV range sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ostafew, C. [Azure Dynamics Corp., Toronto, ON (Canada)

    2010-07-01

    This presentation included a sensitivity analysis of electric vehicle components on overall efficiency. The presentation provided an overview of drive cycles and discussed the major contributors to range in terms of rolling resistance; aerodynamic drag; motor efficiency; and vehicle mass. Drive cycles that were presented included: New York City Cycle (NYCC); urban dynamometer drive cycle; and US06. A summary of the findings were presented for each of the major contributors. Rolling resistance was found to have a balanced effect on each drive cycle and proportional to range. In terms of aerodynamic drive, there was a large effect on US06 range. A large effect was also found on NYCC range in terms of motor efficiency and vehicle mass. figs.
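    A back-of-the-envelope steady-speed model shows how such contributions can be ranked (all parameter values below are illustrative assumptions, not figures from the presentation):

```python
def ev_range_km(mass=1500.0, crr=0.010, cda=0.60, v_kmh=100.0,
                eta=0.85, batt_kwh=30.0):
    # Steady-speed range: usable wheel energy divided by road load.
    # Road load = rolling resistance + aerodynamic drag (flat road, no HVAC).
    g, rho = 9.81, 1.2                                 # gravity, air density
    v = v_kmh / 3.6                                    # speed in m/s
    force = mass * g * crr + 0.5 * rho * cda * v ** 2  # road load in N
    return batt_kwh * 3.6e6 * eta / force / 1000.0     # range in km

def norm_sensitivity(param, delta=0.01):
    # One-at-a-time normalized sensitivity:
    # percent range change per percent parameter change.
    base_kwargs = {"mass": 1500.0, "crr": 0.010, "cda": 0.60}
    base = ev_range_km(**base_kwargs)
    bumped = dict(base_kwargs)
    bumped[param] *= 1.0 + delta
    return (ev_range_km(**bumped) - base) / base / delta

s_mass = norm_sensitivity("mass")
s_crr = norm_sensitivity("crr")
s_cda = norm_sensitivity("cda")   # largest-magnitude contributor at 100 km/h
```

    At a steady 100 km/h the drag term dominates, consistent with the large aerodynamic effect on the high-speed US06 cycle noted above; on a flat road mass and rolling resistance enter through the same term, so their normalized sensitivities coincide in this simplified model.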

  9. Incorporating parametric uncertainty into population viability analysis models

    Science.gov (United States)

    McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.

    2011-01-01

    Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
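    The two-step simulation structure described above can be sketched as nested Monte Carlo loops (the growth-rate numbers below are hypothetical, not the piping plover estimates):

```python
import numpy as np

rng = np.random.default_rng(42)

def quasi_extinction_prob(parametric, n_reps=2000, n_years=50,
                          lam_hat=0.98, lam_se=0.03, sigma_t=0.10,
                          n0=100.0, threshold=10.0):
    # Outer (replication) loop: draw the growth rate once per replicate
    # from its sampling distribution -- the parametric uncertainty.
    # Inner loop over time steps: add temporal (environmental) variance.
    extinct = 0
    for _ in range(n_reps):
        lam_i = rng.normal(lam_hat, lam_se) if parametric else lam_hat
        n = n0
        for _ in range(n_years):
            n *= np.exp(rng.normal(np.log(lam_i), sigma_t))
        if n < threshold:
            extinct += 1
    return extinct / n_reps

p_without = quasi_extinction_prob(parametric=False)
p_with = quasi_extinction_prob(parametric=True)   # noticeably larger risk
```

    Drawing the growth rate anew in each replicate widens the distribution of final abundances, so the estimated quasi-extinction probability rises, mirroring the comparison reported in the abstract.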

  10. Dynamic Simulation, Sensitivity and Uncertainty Analysis of a Demonstration Scale Lignocellulosic Enzymatic Hydrolysis Process

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Sin, Gürkan

    2014-01-01

    This study presents the uncertainty and sensitivity analysis of a lignocellulosic enzymatic hydrolysis model considering both model and feed parameters as sources of uncertainty. The dynamic model is parametrized for accommodating various types of biomass, and different enzymatic complexes...

  11. Trajectory optimization using indirect methods and parametric scramjet cycle analysis

    OpenAIRE

    Williams, Joseph

    2016-01-01

    This study investigates the solution of time sensitive regional strike trajectories for hypersonic missiles. This minimum time trajectory is suspected to be best performed by scramjet powered hypersonic missiles which creates strong coupled interaction between the flight dynamics and the performance of the engine. Comprehensive engine models are necessary to gain better insight into scramjet propulsion. Separately, robust and comprehensive trajectory analysis provides references for vehicles ...

  12. Non-Parametric Analysis of Rating Transition and Default Data

    DEFF Research Database (Denmark)

    Fledelius, Peter; Lando, David; Perch Nielsen, Jens

    2004-01-01

    We demonstrate the use of non-parametric intensity estimation - including construction of pointwise confidence sets - for analyzing rating transition data. We find that transition intensities away from the class studied here for illustration strongly depend on the direction of the previous move b...

  13. Perturbation analysis of a parametrically changed sine-Gordon equation

    DEFF Research Database (Denmark)

    Sakai, S.; Samuelsen, Mogens Rugholm; Olsen, O. H.

    1987-01-01

    A long Josephson junction with a spatially varying inductance is a physical manifestation of a modified sine-Gordon equation with parametric perturbation. Soliton propagation in such Josephson junctions is discussed. First, for an adiabatic model where the inductance changes smoothly compared...

  14. An empirical analysis of one, two, and three parametric logistic ...

    African Journals Online (AJOL)

    The purpose of this study was to determine the three parametric logistic IRT methods in dichotomous and ordinal test items due to differential item functioning using statistical DIF detection methods of SIBTEST, GMH, and LDFA. The study adopted instrumentation research design. The sample consisted of an intact class of ...

  15. Image sequence analysis in nuclear medicine: (1) Parametric imaging using statistical modelling

    International Nuclear Information System (INIS)

    Liehn, J.C.; Hannequin, P.; Valeyre, J.

    1989-01-01

    This is a review of parametric imaging methods in Nuclear Medicine. A Parametric Image is an image in which each pixel value is a function of the value of the same pixel of an image sequence. The Local Model Method is the fitting of each pixel's time-activity curve by a model whose parameter values form the Parametric Images. The Global Model Method is the modelling of the changes between two images. It is applied to image comparison. For both methods, the different models, the identification criterion, the optimization methods and the statistical properties of the images are discussed. The analysis of one or more Parametric Images is performed using 1D or 2D histograms. The statistically significant Parametric Images (Images of significant Variances, Amplitudes and Differences) are also proposed [fr
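    A minimal sketch of the Local Model Method, assuming a monoexponential A·exp(−b·t) pixel model fitted by log-linear least squares (a simplification of the statistical modelling discussed in the record):

```python
import numpy as np

def parametric_images(seq, times):
    # seq: image sequence of shape (n_frames, ny, nx); times: frame times.
    # Fit log(activity) = log(A) - b*t at every pixel in one least-squares
    # solve, then reshape the fitted parameters into two parametric images.
    n_t, ny, nx = seq.shape
    log_y = np.log(seq.reshape(n_t, -1))
    X = np.column_stack([np.ones(n_t), -times])
    coef, *_ = np.linalg.lstsq(X, log_y, rcond=None)
    amplitude = np.exp(coef[0]).reshape(ny, nx)   # parametric image of A
    rate = coef[1].reshape(ny, nx)                # parametric image of b
    return amplitude, rate
```

    Each pixel of the returned arrays holds a fitted model parameter rather than a raw count, which is exactly what makes them parametric images.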

  16. A Study on the uncertainty and sensitivity in numerical simulation of parametric roll

    DEFF Research Database (Denmark)

    Choi, Ju-hyuck; Nielsen, Ulrik Dam; Jensen, Jørgen Juncher

    2016-01-01

    Uncertainties related to numerical modelling of parametric roll have been investigated by using a 6-DOFs model with nonlinear damping and roll restoring forces. At first, uncertainty on damping coefficients and its effect on the roll response is evaluated. Secondly, uncertainty due to the “effect...

  17. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data

    DEFF Research Database (Denmark)

    Tan, Qihua; Thomassen, Mads; Burton, Mark

    2017-01-01

    the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature in the generalized correlation analysis could be a useful and efficient tool for analyzing microarray...... time-course data and for exploring the complex relationships in the omics data for studying their association with disease and health....

  18. Performance Analysis Of Single-Pumped And Dual-Pumped Parametric Optical Amplifier

    Directory of Open Access Journals (Sweden)

    Sandar Myint

    2015-06-01

    Full Text Available Abstract In this study we present a performance analysis of single-pumped and dual-pumped parametric optical amplifiers, and an analysis of gain flatness in a dual-pumped fiber optical parametric amplifier (FOPA) based on four-wave mixing (FWM). Results show that changing the signal power and pump power gives various gains in the FOPA. It is also found that the parametric gain increases with increasing pump power and decreasing signal power. Moreover, the phase-matching condition in FWM plays a vital role in predicting the gain profile of the FOPA, because the parametric gain is maximum when the total phase mismatch is zero. Single-pumped parametric amplification over a 50 nm gain bandwidth is demonstrated using 500 m of highly nonlinear fiber (HNLF), with the signal achieving about 31 dB of gain. For dual-pumped parametric amplification the signal achieves 26.5 dB of gain over a 50 nm gain bandwidth. Therefore the dual-pumped parametric amplifier can provide relatively flat gain over a much wider bandwidth than the single-pumped FOPA.
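    The role of the total phase mismatch can be illustrated with the standard single-pump FWM gain formula (parameter values below are assumptions in the spirit of a 500 m HNLF, not the paper's exact configuration):

```python
import numpy as np

def fopa_gain_db(dbeta, gamma=0.011, p_pump=1.0, length=500.0):
    # Standard single-pump FOPA signal gain (see e.g. Agrawal, Nonlinear
    # Fiber Optics): with linear mismatch dbeta [1/m], nonlinear
    # coefficient gamma [1/(W*m)] and pump power p_pump [W],
    #   kappa = dbeta + 2*gamma*p_pump          (total phase mismatch)
    #   g^2   = (gamma*p_pump)^2 - (kappa/2)^2  (parametric gain coefficient)
    #   G     = 1 + (gamma*p_pump / g * sinh(g*length))^2
    # Complex arithmetic handles the oscillatory regime where g^2 < 0.
    kappa = dbeta + 2.0 * gamma * p_pump
    g = np.sqrt((gamma * p_pump) ** 2 - (kappa / 2.0) ** 2 + 0j)
    gain = np.abs(1.0 + (gamma * p_pump / g * np.sinh(g * length)) ** 2)
    return 10.0 * np.log10(gain)

# The gain peaks where the linear mismatch cancels the nonlinear phase
# shift, i.e. where the total mismatch vanishes: dbeta = -2*gamma*p_pump.
peak = fopa_gain_db(-2.0 * 0.011 * 1.0)
detuned = fopa_gain_db(0.05)
```

    Exponential gain exists only while the total phase mismatch stays small; once it grows the gain collapses to a weak oscillatory regime, which is the behaviour the abstract attributes to the phase-matching condition.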

  19. Analysis of family-wise error rates in statistical parametric mapping using random field theory.

    Science.gov (United States)

    Flandin, Guillaume; Friston, Karl J

    2017-11-01

    This technical report revisits the analysis of family-wise error rates in statistical parametric mapping-using random field theory-reported in (Eklund et al. []: arXiv 1511.01863). Contrary to the understandable spin that these sorts of analyses attract, a review of their results suggests that they endorse the use of parametric assumptions-and random field theory-in the analysis of functional neuroimaging data. We briefly rehearse the advantages parametric analyses offer over nonparametric alternatives and then unpack the implications of (Eklund et al. []: arXiv 1511.01863) for parametric procedures. Hum Brain Mapp, 2017. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  20. Strain gauge sensors comprised of carbon nanotube yarn: parametric numerical analysis of their piezoresistive response

    International Nuclear Information System (INIS)

    Abot, Jandro L; Kiyono, César Y; Thomas, Gilles P; Silva, Emílio C N

    2015-01-01

    Carbon nanotube (CNT) yarns are micron-size fibers that contain thousands of intertwined CNTs in their cross sections and exhibit piezoresistance characteristics that can be tapped for sensing purposes. Sensor yarns can be integrated into polymeric and composite materials to measure strain through resistance measurements without adding weight or altering the integrity of the host material. This paper includes the details of novel strain gauge sensor configurations comprised of CNT yarn, the numerical modeling of their piezoresistive response, and the parametric analysis schemes that determine the highest sensor sensitivity to mechanical loading. The effects of several sensor configuration parameters are discussed, including the inclination and separation of the CNT yarns within the sensor, the mechanical properties of the CNT yarn, the direction and magnitude of the applied mechanical load, and the dimensions and shape of the sensor. The sensor configurations that yield the highest sensitivity are presented and discussed in terms of the mechanical and electrical properties of the CNT yarn. It is shown that strain gauge sensors consisting of CNT yarn are sensitive enough to measure strain, and could exhibit even higher gauge factors than those of metallic foil strain gauges. (paper)

  1. Parametric design and analysis framework with integrated dynamic models

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer

    2014-01-01

    of building energy and indoor environment, are generally confined to late in the design process. Consequence based design is a framework intended for the early design stage. It involves interdisciplinary expertise that secures validity and quality assurance with a simulationist while sustaining autonomous...... control with the building designer. Consequence based design is defined by the specific use of integrated dynamic modeling, which includes the parametric capabilities of a scripting tool and building simulation features of a building performance simulation tool. The framework can lead to enhanced...

  2. The LMDZ4 general circulation model: climate performance and sensitivity to parametrized physics with emphasis on tropical convection

    Energy Technology Data Exchange (ETDEWEB)

    Hourdin, Frederic; Musat, Ionela; Bony, Sandrine; Codron, Francis; Dufresne, Jean-Louis; Fairhead, Laurent; Grandpeix, Jean-Yves; LeVan, Phu; Li, Zhao-Xin; Lott, Francois [CNRS/UPMC, Laboratoire de Meteorologie Dynamique (LMD/IPSL), Paris Cedex 05 (France); Braconnot, Pascale; Friedlingstein, Pierre [Laboratoire des Sciences du Climat et de l' Environnement (LSCE/IPSL), Saclay (France); Filiberti, Marie-Angele [Institut Pierre Simon Laplace (IPSL), Paris (France); Krinner, Gerhard [Laboratoire de Glaciologie et Geophysique de l' Environnement, Grenoble (France)

    2006-12-15

    The LMDZ4 general circulation model is the atmospheric component of the IPSL-CM4 coupled model which has been used to perform climate change simulations for the 4th IPCC assessment report. The main aspects of the model climatology (forced by observed sea surface temperature) are documented here, as well as the major improvements with respect to the previous versions, which mainly come from the parametrization of tropical convection. A methodology is proposed to help analyse the sensitivity of the tropical Hadley-Walker circulation to the parametrization of cumulus convection and clouds. The tropical circulation is characterized using scalar potentials associated with the horizontal wind and horizontal transport of geopotential (the Laplacian of which is proportional to the total vertical momentum in the atmospheric column). The effect of parametrized physics is analysed in a regime sorted framework using the vertical velocity at 500 hPa as a proxy for large scale vertical motion. Compared to Tiedtke's convection scheme, used in previous versions, Emanuel's scheme improves the representation of the Hadley-Walker circulation, with a relatively stronger and deeper large scale vertical ascent over tropical continents, and suppresses the marked patterns of concentrated rainfall over oceans. Thanks to the regime sorted analyses, these differences are attributed to intrinsic differences in the vertical distribution of convective heating, and to the lack of self-inhibition by precipitating downdraughts in Tiedtke's parametrization. Both the convection and cloud schemes are shown to control the relative importance of large scale convection over land and ocean, an important point for the behaviour of the coupled model. (orig.)

  3. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    Science.gov (United States)

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
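    The 1D bootstrap idea can be sketched on synthetic trajectories (hypothetical data, not the paper's force and kinematic datasets): whole curves are resampled, and the band is calibrated with the bootstrap distribution of the maximum standardized deviation over the whole domain, making it simultaneous rather than pointwise.

```python
import numpy as np

rng = np.random.default_rng(0)

def simultaneous_band(trajs, n_boot=2000, alpha=0.05):
    # trajs: (n_subjects, n_time) array of 1D trajectories.
    # Resample whole curves (preserving within-trajectory correlation) and
    # record the maximum standardized deviation over the *entire* domain;
    # its (1 - alpha) quantile calibrates a simultaneous confidence band.
    n, _ = trajs.shape
    mean = trajs.mean(axis=0)
    sem = trajs.std(axis=0, ddof=1) / np.sqrt(n)
    max_dev = np.empty(n_boot)
    for b in range(n_boot):
        resampled = trajs[rng.integers(0, n, n)]
        max_dev[b] = np.max(np.abs(resampled.mean(axis=0) - mean) / sem)
    c = np.quantile(max_dev, 1.0 - alpha)
    return mean - c * sem, mean + c * sem, c

# Synthetic trajectories: a smooth signal plus noise.
t = np.linspace(0.0, np.pi, 50)
data = np.sin(t)[None, :] + rng.normal(0.0, 0.5, size=(20, 50))
lo, hi, c = simultaneous_band(data)
```

    The calibrated factor c exceeds the pointwise 0D critical value (about 2.09 for 19 degrees of freedom), which is exactly the bias incurred when 0D procedures are used to build CIs for 1D data.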

  4. Parametric Studies of Square Solar Sails Using Finite Element Analysis

    Science.gov (United States)

    Sleight, David W.; Muheim, Danniella M.

    2004-01-01

    Parametric studies are performed on two generic square solar sail designs to identify parameters of interest. The studies are performed on systems-level models of full-scale solar sails, and include geometric nonlinearity and inertia relief, and use a Newton-Raphson scheme to apply sail pre-tensioning and solar pressure. Computational strategies and difficulties encountered during the analyses are also addressed. The purpose of this paper is not to compare the benefits of one sail design over the other. Instead, the results of the parametric studies may be used to identify general response trends, and areas of potential nonlinear structural interactions for future studies. The effects of sail size, sail membrane pre-stress, sail membrane thickness, and boom stiffness on the sail membrane and boom deformations, boom loads, and vibration frequencies are studied. Over the range of parameters studied, the maximum sail deflection and boom deformations are a nonlinear function of the sail properties. In general, the vibration frequencies and modes are closely spaced. For some vibration mode shapes, local deformation patterns that dominate the response are identified. These localized patterns are attributed to the presence of negative stresses in the sail membrane that are artifacts of the assumption of ignoring the effects of wrinkling in the modeling process, and are not believed to be physically meaningful. Over the range of parameters studied, several regions of potential nonlinear modal interaction are identified.

  5. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    Science.gov (United States)

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
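    The variance decomposition underlying this method can be sketched with the classical pick-freeze estimator on a deterministic toy model; the paper's contribution, extending the decomposition to the stochastic reaction channels via the random-time-change representation, is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def sobol_first_order(f, dim, n=200_000):
    # Pick-freeze estimator: S_i = Cov(f(A), f(B_i)) / Var(f(A)), where
    # B_i is B with its i-th column replaced ("frozen") from A, so f(A)
    # and f(B_i) share only the input X_i.
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    y_a, y_b = f(A), f(B)
    var = y_a.var()
    s = np.empty(dim)
    for i in range(dim):
        B_i = B.copy()
        B_i[:, i] = A[:, i]
        s[i] = np.mean(y_a * (f(B_i) - y_b)) / var
    return s

# Toy model Y = X1 + 2*X2 with X_j ~ U(0, 1): the exact first-order
# indices are Var(X1)/Var(Y) = 1/5 and 4*Var(X2)/Var(Y) = 4/5.
S = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], dim=2)
```

    In the paper's setting the same decomposition is applied with the Poisson processes of the reaction channels treated as additional "inputs", so that inherent and parametric variability can be apportioned jointly.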

  6. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    KAUST Repository

    Navarro, María

    2016-12-26

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with that of a local derivative-based sensitivity analysis method classically used for this type of system.

  7. On Parametric Sensitivity of Reynolds-Averaged Navier-Stokes SST Turbulence Model: 2D Hypersonic Shock-Wave Boundary Layer Interactions

    Science.gov (United States)

    Brown, James L.

    2014-01-01

    Examined is the sensitivity of separation extent, wall pressure and heating to variations of primary input flow parameters, such as Mach and Reynolds numbers and shock strength, for 2D and axisymmetric hypersonic shock-wave/turbulent boundary layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments, used extensively in a prior related uncertainty analysis, provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in the predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations for future turbulence modeling approaches.

  8. Maternal sensitivity: a concept analysis.

    Science.gov (United States)

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  9. Parametric study of an absorption refrigeration machine using advanced exergy analysis

    International Nuclear Information System (INIS)

    Gong, Sunyoung; Goni Boulama, Kiari

    2014-01-01

    An advanced exergy analysis of a water–lithium bromide absorption refrigeration machine was conducted. For each component of the machine, the proposed analysis quantified the irreversibility that can be avoided and the irreversibility that is unavoidable. It also identified the irreversibility originating from inefficiencies within the component and the irreversibility that does not originate from the operation of the considered component. It was observed that the desorber and absorber concentrated most of the exergy destruction. Furthermore, the exergy destruction at these components was found to be dominantly endogenous and unavoidable. A parametric study is presented discussing the sensitivity of the different performance indicators to the temperature at which the heat source is available, the temperature of the refrigerated environment, and the temperature of the cooling medium used at the condenser and absorber. It was observed that the endogenous avoidable exergy destruction at the desorber, i.e. the portion of the desorber irreversibility that could be avoided by improving the design and operation of the desorber, decreased when the heat source temperature or the temperature at which the cooling effect was generated increased, and it decreased when the heat sink temperature increased. The endogenous avoidable exergy destruction at the absorber displayed the same variations, though it was observed to be less affected by the heat source temperature. Contrary to the aforementioned two components, the exergy destruction at the evaporator and condenser was dominantly endogenous and avoidable, with little sensitivity to the cycle operating parameters. - Highlights: • Endogenous, exogenous, avoidable and unavoidable irreversibilities were calculated for a water–LiBr absorption machine. • Overall, the desorber and absorber concentrated most of the exergy destruction of the cycle. • The exergy destruction was mainly endogenous and unavoidable for the desorber and absorber.

  10. Experimental Demonstration of Phase Sensitive Parametric Processes in a Nano-Engineered Silicon Waveguide

    DEFF Research Database (Denmark)

    Kang, Ning; Fadil, Ahmed; Pu, Minhao

    2013-01-01

    We demonstrate experimentally phase-sensitive processes in nano-engineered silicon waveguides for the first time. Furthermore, we highlight paths towards the optimization of the phase-sensitive extinction ratio under the impact of two-photon and free-carrier absorption.

  11. Parametric analysis of protective grid flow induced vibration

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Jooyoung; Eom, Kyongbo; Jeon, Sangyoun; Suh, Jungmin [KEPCO NF Co., Daejeon (Korea, Republic of)

    2012-10-15

    Protective grid (P-grid) flow-induced vibration in a nuclear power reactor is one of the critical factors for the mechanical integrity of a nuclear fuel. The P-grid is located at the lowermost position above the bottom nozzle of the nuclear fuel, as shown in Fig. 1, and is required not only for filtering debris but also for supporting fuel rods. The working conditions of a P-grid installed in a reactor are severe in terms of flow speed, temperature and pressure, and under such conditions excessive vibration can develop. Furthermore, if the P-grid is exposed to high levels of excessive vibration over a long period of time, fatigue failure could be unavoidable. It is therefore important to reduce excessive vibration while maintaining the P-grid's own functional performance. KEPCO Nuclear Fuel has developed a test facility - Investigation Flow-induced Vibration (INFINIT) - to study flow-induced vibration caused by flowing coolant at various flow rates. To investigate the specific relationships between the configuration of the P-grid and its flow-induced vibration characteristics, several types of P-grids were tested in the INFINIT facility. Based on the test results of these parametric studies, the flow-induced vibration characteristics were analyzed and the critical design parameters were identified.

  12. Global optimization and sensitivity analysis

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1990-01-01

    A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints

  13. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.

    Science.gov (United States)

    Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben

    2017-06-06

    Modeling complex time-course patterns is a challenging issue in microarray studies due to the complex gene expression patterns that arise in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring complex relationships in omics data for studying their association with disease and health.

  14. Sensitivity of Nodal Admittances in an Offshore Wind Power Plant to Parametric Variations in the Collection Grid

    DEFF Research Database (Denmark)

    Vytautas, Kersiulis; Holdyk, Andrzej; Holbøll, Joachim

    2012-01-01

    The paper presents sensitivity studies on nodal admittances in an offshore wind farm to different parameters of the collection grid cable system, including the length of cable sections and the actual layout configuration. The main aspect of this investigation is to see how parametric variations influence admittance and, consequently, voltage transfer in the frequency domain. The simulation model of the offshore wind farm was built based on the main components in the turbines and the collection grid. The simulation results were compared with data from time domain measurements and showed good agreement. A number of nodes were selected for evaluation of the admittances: the connection point to the external grid, a point at the substation, and a point at each wind turbine. The results show that resonances occur at specific locations, where the admittance increases significantly in the 10-100 kHz range, depending...

  15. Previously unidentified changes in renal cell carcinoma gene expression identified by parametric analysis of microarray data

    International Nuclear Information System (INIS)

    Lenburg, Marc E; Liou, Louis S; Gerry, Norman P; Frampton, Garrett M; Cohen, Herbert T; Christman, Michael F

    2003-01-01

    Renal cell carcinoma is a common malignancy that often presents as a metastatic disease for which there are no effective treatments. To gain insights into the mechanism of renal cell carcinogenesis, a number of genome-wide expression profiling studies have been performed. Surprisingly, there is very poor agreement among these studies as to which genes are differentially regulated. To better understand this lack of agreement, we profiled renal cell tumor gene expression using genome-wide microarrays (45,000 probe sets) and compared our analysis to previous microarray studies. We hybridized total RNA isolated from renal cell tumors and adjacent normal tissue to Affymetrix U133A and U133B arrays. We removed samples with technical defects and removed probe sets that failed to exhibit sequence-specific hybridization in any of the samples. We detected differential gene expression in the resulting dataset with parametric methods and identified keywords that are overrepresented in the differentially expressed genes with the Fisher exact test. We identify 1,234 genes that are more than three-fold changed in renal tumors by t-test, 800 of which have not been previously reported to be altered in renal cell tumors. Of the only 37 genes that have been identified as differentially expressed in three or more of five previous microarray studies of renal tumor gene expression, our analysis finds 33 (89%). A key to the sensitivity and power of our analysis is the filtering out of defective samples and genes that are not reliably detected. The widespread use of sample-wise voting schemes for detecting differential expression that do not control for false positives likely accounts for the poor overlap among previous studies. Among the many genes we identified using parametric methods that were not previously reported as differentially expressed in renal cell tumors are several oncogenes and tumor suppressor genes that likely play important roles in renal cell carcinogenesis.

  16. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    Science.gov (United States)

    Han, Feng; Zheng, Yi

    2018-06-01

    Input uncertainty is a significant source of error in watershed water quality (WWQ) modeling. It remains challenging to address input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.

  17. Non-parametric production analysis of pesticides use in the Netherlands

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.; Silva, E.

    2004-01-01

    Many previous empirical studies on the productivity of pesticides suggest that pesticides are under-utilized in agriculture, despite the generally held belief that these inputs are substantially over-utilized. This paper uses data envelopment analysis (DEA) to calculate non-parametric measures of the productivity of pesticides.
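    As a hedged illustration of the DEA machinery mentioned in this abstract (not the authors' actual model or data), the sketch below solves the input-oriented CCR envelopment problem with `scipy.optimize.linprog` for a hypothetical single-input, single-output data set:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 farms, one input (pesticide use), one output (yield)
X = np.array([[2.0], [4.0], [8.0], [4.0]])   # inputs,  shape (n, m)
Y = np.array([[1.0], [3.0], [6.0], [2.0]])   # outputs, shape (n, s)

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o: min theta subject to
    sum_j lam_j * x_j <= theta * x_o and sum_j lam_j * y_j >= y_o."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]              # variables: [theta, lam_1..lam_n]
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[o]                      # input constraints
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T                      # output constraints (flipped sign)
    b_ub[m:] = -Y[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```

    With these numbers, the two farms on the constant-returns frontier score 1.0 and the others score 2/3, i.e. they could produce the same output with a third less pesticide input.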

  18. Integration of phase change materials in compressed hydrogen gas systems: Modelling and parametric analysis

    DEFF Research Database (Denmark)

    Mazzucco, Andrea; Rothuizen, Erasmus; Jørgensen, Jens-Erik

    2016-01-01

    ... to the phase change material, mainly occurs after the fueling is completed, resulting in a hydrogen peak temperature higher than 85 °C and a lower fueled mass than a gas-cooled system. Such a mass reduction accounts for 12% with respect to the case of a standard tank system fueled at 40 °C. A parametric analysis...

  19. Rank-shaping regularization of exponential spectral analysis for application to functional parametric mapping

    International Nuclear Information System (INIS)

    Turkheimer, Federico E; Hinz, Rainer; Gunn, Roger N; Aston, John A D; Gunn, Steve R; Cunningham, Vincent J

    2003-01-01

    Compartmental models are widely used for the mathematical modelling of dynamic studies acquired with positron emission tomography (PET). The numerical problem involves the estimation of a sum of decaying real exponentials convolved with an input function. In exponential spectral analysis (SA), the nonlinear estimation of the exponential functions is replaced by the linear estimation of the coefficients of a predefined set of exponential basis functions. This set-up guarantees fast estimation and attainment of the global optimum. SA, however, is hampered by high sensitivity to noise and, because of the positivity constraints implemented in the algorithm, cannot be extended to reference region modelling. In this paper, SA limitations are addressed by a new rank-shaping (RS) estimator that defines an appropriate regularization over an unconstrained least-squares solution obtained through singular value decomposition of the exponential base. Shrinkage parameters are conditioned on the expected signal-to-noise ratio. Through application to simulated and real datasets, it is shown that RS ameliorates and extends SA properties in the case of the production of functional parametric maps from PET studies
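    The linear estimation step described in this abstract can be sketched with a simplified stand-in: least squares over a predefined grid of exponential basis functions, regularized by truncating small singular values. This is not the paper's rank-shaping estimator (which conditions shrinkage on the expected signal-to-noise ratio), and the signal, grid, and thresholds below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(0.0, 60.0, 120)                    # sampling times
y_true = 0.8 * np.exp(-0.05 * t) + 0.2 * np.exp(-0.5 * t)
y = y_true + rng.normal(0.0, 0.01, t.size)         # noisy measurement

# Predefined grid of exponential basis functions, as in spectral analysis
betas = np.logspace(-3, 1, 40)
Phi = np.exp(-np.outer(t, betas))                  # design matrix (nt, nb)

# Unconstrained least squares via SVD, regularized by discarding small
# singular values (a crude stand-in for rank-shaping shrinkage)
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
k = int(np.sum(s > 1e-3 * s[0]))                   # keep well-conditioned modes
coef = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

y_fit = Phi @ coef
rmse = np.sqrt(np.mean((y_fit - y_true) ** 2))
print(f"kept {k} of {len(s)} singular values, RMSE vs truth = {rmse:.4f}")
```

    The exponential design matrix is severely ill-conditioned, so only a handful of singular values survive the cut; the truncation is what keeps the noise from being amplified into the coefficients.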

  20. Sensitivity analysis of EQ3

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.

    1990-01-01

    A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs

  1. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel due to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for sensitivity analysis. The results of the calculation show that, in the case of EK-10 fuel (low burn-up), the first-order sensitivity was found to be sufficient to achieve an accuracy of 1%, while in the case of MTR-20 (high burn-up) the fifth order was found to provide 3% accuracy. A computer code, SENS, was developed to provide the required calculations.
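    The effect of expansion order can be sketched on a one-nuclide toy depletion model, where every Taylor derivative is available in closed form. The model and all numbers below are illustrative only, not taken from the EK-10/MTR-20 analyses:

```python
import math

# Toy depletion model N(sigma) = N0 * exp(-sigma * phi * t)
N0, phi_t = 1.0, 2.0         # initial density and lumped fluence phi*t
sigma0, dsig = 0.5, 0.05     # nominal cross-section and a 10% perturbation

def N(sig):
    return N0 * math.exp(-sig * phi_t)

exact = N(sigma0 + dsig)

# The k-th derivative of N at sigma0 is (-phi_t)**k * N(sigma0), so the
# order-p Taylor estimate is N(sigma0) * sum_{k<=p} (-phi_t*dsig)^k / k!
errs = []
for order in range(1, 6):
    approx = N(sigma0) * sum(
        (-phi_t * dsig) ** k / math.factorial(k) for k in range(order + 1))
    errs.append(abs(approx - exact) / exact)
    print(f"order {order}: relative error {errs[-1]:.2e}")
```

    For this mild 10% perturbation the first order is already sub-percent accurate; larger perturbations (i.e. higher burn-up sensitivity studies) push the required order up, which is the qualitative pattern the abstract reports.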

  2. Comparative Analysis and Modification of Imaging Techniques in the Parametric Studies of Control Systems

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2017-01-01

    optimum. The main object of visualization was the field of anti-gradients, which was visualized both through color schemes and through radial compass-type plots. As a result of building these plots, criteria whose anti-gradients lie in opposite quadrants, i.e. that have opposing trends, were revealed. The next phase was to assess the degree of the joint effect of the parameters on the system behavior. The scatter of gradients for the entire set of varied parameters was visualized, and statistical characteristics of the distribution of the set of anti-gradient vectors were calculated. Losses and gains when choosing compromise solutions were assessed through visualizations of a series of computational experiments. The article shows the dynamics of changing anti-gradient vectors under variations of the parameters. All criteria were combined in a HivePlot-type image. According to the results of a joint analysis of the plots, zones of unattainable criterion values for the given control structure were determined, together with a compromise solution in the unified field. The application of the developed approaches to a multi-objective optimization problem with more than two criteria is shown, namely the parametric synthesis of a double-circuit motion control system for aircraft, the quality of which is described by four parameters: overshoot, transition time, rise time, and an integral quadratic criterion of deviation from the terminal trajectory. The developed analysis technique allows possible compromise solutions, areas of sensitivity, mutual and dominant influence of parameters, and possible losses in trade-offs to be identified. The visual images support the designer in making decisions.

  3. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  4. Sensitivity analysis using probability bounding

    International Nuclear Information System (INIS)

    Ferson, Scott; Troy Tucker, W.

    2006-01-01

    Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
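    The "pinching" idea in this abstract can be sketched with plain interval arithmetic, the crudest form of probability bounds: pinch one uncertain input to a point value and measure how much the output interval narrows. The toy model and all numbers below are hypothetical:

```python
# Minimal interval-arithmetic sketch of pinching-based sensitivity analysis.

def imul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def model(x, y):
    # Hypothetical risk model: z = x*y + y, evaluated on intervals
    return iadd(imul(x, y), y)

x = (1.0, 3.0)                 # uncertain inputs as intervals
y = (0.5, 2.0)

base = model(x, y)
base_w = base[1] - base[0]

# Pinch each input to its interval midpoint and measure the narrowing
for name, args in [("x", ((2.0, 2.0), y)), ("y", (x, (1.25, 1.25)))]:
    out = model(*args)
    w = out[1] - out[0]
    print(f"pinching {name}: width {base_w:.2f} -> {w:.2f} "
          f"({100 * (1 - w / base_w):.0f}% reduction)")
```

    Here pinching y shrinks the output width far more than pinching x, so y would be the input most worth refining — the same diagnostic the abstract describes for pinching inputs to precise distributions or real values.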

  5. Analysis of brain SPECT with the statistical parametric mapping package SPM99

    International Nuclear Information System (INIS)

    Barnden, L.R.; Rowe, C.C.

    2000-01-01

    Full text: The Statistical Parametric Mapping (SPM) package of the Wellcome Department of Cognitive Neurology permits the detection of regional brain uptake differences in an individual subject or a population of subjects compared to a normal population. SPM does not require a priori specification of regions of interest. Recently SPM has been upgraded from SPM96 to SPM99. Our aim was to vary brain SPECT processing options in the application of SPM to optimise the final statistical map in three clinical trials. The sensitivity of SPM depends on the fidelity of the preliminary spatial normalisation of each scan to the standard anatomical space defined by a template scan provided with SPM. We generated our own SPECT template and compared spatial normalisation to it and to SPM's internal PET template. We also investigated the effects of scatter subtraction, stripping of scalp activity, reconstruction algorithm, non-linear deformation and derivation of spatial normalisation parameters using co-registered MR. Use of our SPECT template yielded better results than SPM's PET template. Accuracy of SPECT to MR co-registration was 2.5 mm with SPM96 and 1.2 mm with SPM99. Stripping of scalp activity improved results with SPM96 but was unnecessary with SPM99. Scatter subtraction increased the sensitivity of SPM. Non-linear deformation in addition to linear (affine) transformation only marginally improved the final result. Use of the SPECT template yielded more significant results than those obtained when co-registered MR was used to derive the transformation parameters. SPM99 is more robust than SPM96, and optimum SPECT analysis requires a SPECT template. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc

  6. The Gaussian atmospheric transport model and its sensitivity to the joint frequency distribution and parametric variability.

    Science.gov (United States)

    Hamby, D M

    2002-01-01

    Reconstructed meteorological data are often used in some form of long-term wind trajectory model for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-averaged concentration model was found to be most sensitive to wind speed, followed by the vertical dispersion parameter (σz), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be most important to meteorological inputs when calculations are made for near-field receptors, increasing as stack height increases.
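    A minimal sketch of the sector-averaged straight-line Gaussian plume model with a one-at-a-time sensitivity check follows. The formula is the standard ground-level sector-averaged form; the input values are hypothetical far-field choices, not the paper's:

```python
import math

def chi_over_q(u, sigma_z, x, H, sector=math.radians(22.5)):
    """Sector-averaged, ground-level, straight-line Gaussian plume
    concentration per unit emission rate (chi/Q)."""
    return (math.sqrt(2.0 / math.pi)
            * math.exp(-H**2 / (2.0 * sigma_z**2))
            / (u * sigma_z * x * sector))

# Hypothetical far-field inputs: 3 m/s wind, 10 km receptor, 50 m stack
base = dict(u=3.0, sigma_z=200.0, x=10_000.0, H=50.0)
c0 = chi_over_q(**base)

# One-at-a-time sensitivity: relative response to a +10% input change
sens = {}
for p in ("u", "sigma_z", "H"):
    bumped = dict(base, **{p: 1.1 * base[p]})
    sens[p] = (chi_over_q(**bumped) - c0) / c0
    print(f"+10% {p}: {100 * sens[p]:+.1f}% change in chi/Q")
```

    With these far-field inputs the response is dominated by wind speed, σz comes second, and stack height barely matters, which matches the ranking reported in the abstract; near the stack, where H²/(2σz²) is large, the ordering would change.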

  7. Parametric analysis of change in wave number of surface waves

    Directory of Open Access Journals (Sweden)

    Tadić Ljiljana

    2015-01-01

    Full Text Available The paper analyzes the dependence of the change in the wave number of surface waves on the soil material constants and the frequency of the waves. The starting point of this analysis consists of the wave equation and the dynamic stiffness matrix of the soil.

  8. Parametric Design and Mechanical Analysis of Beams based on SINOVATION

    Science.gov (United States)

    Xu, Z. G.; Shen, W. D.; Yang, D. Y.; Liu, W. M.

    2017-07-01

    In engineering practice, engineers need to carry out complicated calculations when the loads on a beam are complex. These analysis and calculation processes take a lot of time, and the results are unreliable. Therefore, VS2005 and ADK were used to develop software for beam design based on the 3D CAD software SINOVATION, with the C++ programming language. The software can perform mechanical analysis and parameterized design of various types of beams and output design reports in HTML format. The efficiency and reliability of beam design are thereby improved.

  9. Multilevel Latent Class Analysis: Parametric and Nonparametric Models

    Science.gov (United States)

    Finch, W. Holmes; French, Brian F.

    2014-01-01

    Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture…

  10. Parametric Sensitivity Study of Operating and Design Variables in Wellbore Heat Exchangers

    International Nuclear Information System (INIS)

    Nalla, G.; Shook, G.M.; Mines, G.L.; Bloomfield, K.K.

    2004-01-01

    This report documents the results of an extensive sensitivity study conducted by the Idaho National Engineering and Environmental Laboratory. This study investigated the effects of various operating and design parameters on wellbore heat exchanger performance to determine conditions for optimal thermal energy extraction and evaluate the potential for using a wellbore heat exchanger model for power generation. Variables studied included operational parameters such as circulation rates, wellbore geometries and working fluid properties, and regional properties including basal heat flux and formation rock type. Energy extraction is strongly affected by fluid residence time, heat transfer contact area, and formation thermal properties. Water appears to be the most appropriate working fluid. Aside from minimal tubing insulation, tubing properties are second-order effects. On the basis of the sensitivity study, a best-case model was simulated and the results compared against existing low-temperature power generation plants. Even assuming ideal work conversion to electric power, a wellbore heat exchanger cannot generate 200 kW (682.4e+3 BTU/h) at the onset of pseudosteady state. Using realistic conversion efficiency, the method is unlikely to generate 50 kW (170.6e+3 BTU/h)

  11. Integrating acoustic analysis in the architectural design process using parametric modelling

    DEFF Research Database (Denmark)

    Peters, Brady

    2011-01-01

    This paper discusses how parametric modeling techniques can be used to provide architectural designers with a better understanding of the acoustic performance of their designs and provide acoustic engineers with models that can be analyzed using computational acoustic analysis software. Architects......, acoustic performance can inform the geometry and material logic of the design. In this way, the architectural design and the acoustic analysis model become linked....

  12. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    CERN Document Server

    Nikulin, M; Mesbah, M; Limnios, N

    2004-01-01

    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  13. Parametric dynamic analysis of a superconducting bearing system

    Energy Technology Data Exchange (ETDEWEB)

    Cansiz, A; Hasar, U C; Cam, B Ates [Electrical and Electronics Engineering Department, Ataturk University, Erzurum (Turkey); Gundogdu, Oe, E-mail: acansiz@atauni.edu.t [Mechanical Engineering Department, Ataturk University, Erzurum (Turkey)

    2009-03-01

    The dynamics of a disk-shaped permanent-magnet rotor levitated over a high-temperature superconductor is studied. The interaction between the rotor magnet and the superconductor is modelled by assuming the magnet to be a magnetic dipole and the superconductor to be a diamagnetic material. In the magneto-mechanical analysis of the superconductor part, the frozen image concept is combined with the diamagnetic image, and damping in the system is neglected. The interaction potential of the system is the combination of magnetic and gravitational potentials. From the dynamical analysis, the equations of motion of the permanent magnet are expressed as functions of the lateral, vertical and tilt coordinates. The vibration behaviour of the permanent magnet is analyzed by numerically integrating the non-dimensionalized differential equations for small initial impulses.

  14. Parametric dynamic analysis of a superconducting bearing system

    International Nuclear Information System (INIS)

    Cansiz, A; Hasar, U C; Cam, B Ates; Gundogdu, Oe

    2009-01-01

    The dynamics of a disk-shaped permanent-magnet rotor levitated over a high-temperature superconductor is studied. The interaction between the rotor magnet and the superconductor is modelled by assuming the magnet to be a magnetic dipole and the superconductor to be a diamagnetic material. In the magneto-mechanical analysis of the superconductor part, the frozen image concept is combined with the diamagnetic image, and damping in the system is neglected. The interaction potential of the system is the combination of magnetic and gravitational potentials. From the dynamical analysis, the equations of motion of the permanent magnet are expressed as functions of the lateral, vertical and tilt coordinates. The vibration behaviour of the permanent magnet is analyzed by numerically integrating the non-dimensionalized differential equations for small initial impulses.

  15. Bifurcation analysis of parametrically excited bipolar disorder model

    Science.gov (United States)

    Nana, Laurent

    2009-02-01

    Bipolar II disorder is characterized by alternating hypomanic and major depressive episodes. We model the periodic mood variations of a bipolar II patient with a negatively damped harmonic oscillator. The medications administered to the patient are modeled via a forcing function that is capable of stabilizing the mood variations and of varying their amplitude. We analyze analytically, using a perturbation method, the amplitude and stability of limit cycles and check this analysis against numerical simulations.
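The model described above can be sketched numerically. The following is a minimal, illustrative integration; the parameter values and the saturating damping term that stands in for the stabilizing medication forcing are assumptions, not taken from the paper:

```python
# Illustrative sketch: mood as a negatively damped harmonic oscillator.
# The (gamma - mu*x**2) term is an assumption: mu = 0 is the untreated
# patient (growing oscillations); mu > 0 stands in for the stabilizing
# effect of the forcing and bounds the swings to a limit cycle.

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for a 2-state system."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
    k3 = f(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
    k4 = f(t + h, [y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

def peak_mood(gamma, mu, t_end=60.0, h=0.01):
    """Largest |x(t)| reached by x'' - (gamma - mu*x^2) x' + x = 0."""
    def f(t, y):
        x, v = y
        return [v, (gamma - mu * x * x) * v - x]
    y, t, peak = [0.1, 0.0], 0.0, 0.1
    while t < t_end:
        y = rk4_step(f, t, y, h)
        t += h
        peak = max(peak, abs(y[0]))
    return peak

print(peak_mood(0.2, 0.0), peak_mood(0.2, 1.0))
```

With negative damping alone the amplitude grows exponentially; with the saturating term it settles onto a bounded limit cycle, which is the qualitative behavior the abstract attributes to the stabilized patient.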

  16. Tool Support for Parametric Analysis of Large Software Simulation Systems

    Science.gov (United States)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
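The n-factor combinatorial idea can be illustrated with a greedy 2-factor (pairwise) generator; the parameter names and domains below are invented, and this is a sketch of the technique, not the NASA/Trick tooling:

```python
import itertools
import random

def pairwise_suite(domains, seed=0, n_candidates=200):
    """Greedy 2-factor (pairwise) test suite: pick cases until every value
    pair of every two parameters is covered by at least one case."""
    rng = random.Random(seed)
    params = sorted(domains)
    uncovered = {(p, vp, q, vq)
                 for p, q in itertools.combinations(params, 2)
                 for vp in domains[p] for vq in domains[q]}
    suite = []
    while uncovered:
        # seed one candidate from an uncovered pair so progress is guaranteed,
        # then add Monte Carlo candidates and keep the one covering most pairs
        p0, vp0, q0, vq0 = next(iter(uncovered))
        seeded = {p: rng.choice(domains[p]) for p in params}
        seeded[p0], seeded[q0] = vp0, vq0
        pool = [seeded] + [{p: rng.choice(domains[p]) for p in params}
                           for _ in range(n_candidates)]
        best = max(pool, key=lambda c: sum(1 for (p, vp, q, vq) in uncovered
                                           if c[p] == vp and c[q] == vq))
        suite.append(best)
        uncovered = {(p, vp, q, vq) for (p, vp, q, vq) in uncovered
                     if not (best[p] == vp and best[q] == vq)}
    return suite

# hypothetical simulation parameters, not from the Trick environment
domains = {"mass": [1, 2, 3], "thrust": [10, 20], "mode": ["A", "B"]}
suite = pairwise_suite(domains)
print(len(suite), "cases vs", len(list(itertools.product(*domains.values()))),
      "for the full factorial")
```

The suite is much smaller than the full factorial while still exercising every two-way parameter interaction, which is the case-limiting effect the abstract describes.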

  17. Parametric sensitivity of a CFD model concerning the hydrodynamics of a trickle-bed reactor (TBR)

    Directory of Open Access Journals (Sweden)

    Janecki Daniel

    2016-03-01

    The aim of the present study was to investigate the sensitivity of a multiphase Eulerian CFD model with respect to the relations defining drag forces between phases. The mean relative error as well as the standard deviation of experimental and computed values of pressure gradient and average liquid holdup were used as validation criteria of the model. The comparative basis for the simulations was our own database, obtained in experiments carried out in a TBR operating with co-current downward gas and liquid flow. The estimated errors showed that the classical equations of Attou et al. (1999) defining the friction factors Fjk approximate the experimental values of the hydrodynamic parameters with the best agreement. Taking this into account, these equations can be recommended for use in the momentum balances of TBR models.

  18. Numerical model of solar dynamic radiator for parametric analysis

    Science.gov (United States)

    Rhatigan, Jennifer L.

    1989-01-01

    Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates the heat transfer and fluid flow performance of the radiator and calculates area, mass and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations.

  19. Parametric analysis of a solar still with inverted V-shaped glass condenser

    Directory of Open Access Journals (Sweden)

    Rubio Eduardo

    2015-01-01

    A parametric analysis of a solar still with an inverted V-shaped glass condenser is presented. Results are based on a new mathematical model obtained from a lumped-parameter analysis of the still, with an approach that makes each glass plate of the condensing system sensitive to orientation and captures their thermal differences. Numerical computations are made to evaluate productivity and the temperature difference between the condensing plates as a function of condenser orientation, extinction coefficient and thickness. The study found a significant influence of incident solar radiation on the thermal performance of each condensing plate. Large extinction coefficients and thick glass plates increase absorption losses, which result in an appreciable temperature difference. An extinction coefficient of 40 m⁻¹ produces a temperature difference of 2.5°C between the two condensers. A glass thickness of 10 mm may increase this temperature difference up to 3.5°C. With respect to production, a difference of 8.7% was found between the condensing plates when they face an east-west direction. The proposed model is able to reproduce the temperature and distillate-production differences that arise between the two condensers in good agreement with experimental data. The overall performance of the still, studied with this new approach, was also in accordance with the widely used traditional models for solar distillation. In addition, the parameters of the condensing plates can be chosen to force a differential heating such that the temperature of one condensing plate remains higher throughout the day.

  20. Sensitivity analysis in remote sensing

    CERN Document Server

    Ustinov, Eugene A

    2015-01-01

    This book contains a detailed presentation of the general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. Emphasis is placed on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to the formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves the same purpose for quantitative models of physical objects as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing, SA provides computer-efficient means to compute the Jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The Jacobians are used to solve corresponding inver...

  1. The Absolute Stability Analysis in Fuzzy Control Systems with Parametric Uncertainties and Reference Inputs

    Science.gov (United States)

    Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei

    This study analyzes the absolute stability of P and PD type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of the error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady-state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, the absolute stability analysis of fuzzy control systems is given here with respect to a non-zero reference input and an uncertain linear plant using the parametric robust Popov criterion. Moreover, a fuzzy current-controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined from the viewpoint of the various equilibrium points in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.

  2. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. 
In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
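The reduced-variance finite-difference step can be sketched with common random numbers, one simple form of the stochastic coupling the abstract mentions; the toy model and all numbers below are illustrative, not the paper's reaction networks:

```python
import random
import statistics

def simulate(theta, rng, n_events=10):
    """Toy stochastic model whose mean output is 10*theta (illustrative)."""
    return sum(theta * rng.gauss(1.0, 0.5) for _ in range(n_events))

def fd_sensitivity(theta, h, n, coupled):
    """Finite-difference estimate of dE[f]/dtheta.  With coupled=True the
    nominal and perturbed runs share random number streams (common random
    numbers), a simple form of stochastic coupling for variance reduction."""
    diffs = []
    for i in range(n):
        r1 = random.Random(i)
        r2 = random.Random(i) if coupled else random.Random(10_000 + i)
        diffs.append((simulate(theta + h, r1) - simulate(theta, r2)) / h)
    return statistics.mean(diffs), statistics.stdev(diffs)

m_c, s_c = fd_sensitivity(2.0, 0.1, 400, coupled=True)
m_u, s_u = fd_sensitivity(2.0, 0.1, 400, coupled=False)
print(f"coupled: {m_c:.2f} (sd {s_c:.2f})  independent: {m_u:.2f} (sd {s_u:.1f})")
```

Both estimators target the exact sensitivity (10 for this toy model), but the coupled one does so with a far smaller sample variance, which is what makes finite-difference screening of many parameters affordable.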

  3. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Directory of Open Access Journals (Sweden)

    Georgios Arampatzis

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of

  4. Sensitivity Analysis of Viscoelastic Structures

    Directory of Open Access Journals (Sweden)

    A.M.G. de Lima

    2006-01-01

    In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constraining damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.
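The comparison of analytical response derivatives against finite differences can be illustrated on a single-DOF system with a hysteretic (complex-stiffness) model; the k = k0·t³ thickness dependence below is an assumption standing in for the multi-layer geometry, not the paper's finite element formulation:

```python
def frf(omega, t, m=1.0, k0=5.0, eta=0.3):
    """Complex receptance of a 1-DOF system with hysteretic loss factor eta;
    the stiffness k = k0*t**3 mimics bending stiffness growing with layer
    thickness t (an assumed, illustrative dependence)."""
    k = k0 * t ** 3
    return 1.0 / (k * (1 + 1j * eta) - m * omega ** 2)

def dfrf_dt(omega, t, m=1.0, k0=5.0, eta=0.3):
    """Analytical first-order sensitivity dH/dt = -H^2 (1 + i*eta) dk/dt."""
    H = frf(omega, t, m, k0, eta)
    return -H * H * (1 + 1j * eta) * 3 * k0 * t ** 2

omega, t = 2.0, 1.0
exact = dfrf_dt(omega, t)
dh = 1e-6
fd = (frf(omega, t + dh) - frf(omega, t - dh)) / (2 * dh)  # central difference
print(abs(exact - fd))
```

The complex-valued analytical derivative and the central finite difference agree to discretization accuracy, mirroring the validation step reported in the abstract.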

  5. Parametric cost analysis of a HYLIFE-II power plant

    International Nuclear Information System (INIS)

    Bieri, R.L.

    1991-01-01

    The SAFIRE (Systems Analysis for ICF Reactor Economics) code was adapted to model a power plant using a HYLIFE-2 reactor chamber. The code was then used to examine the dependence of the plant capital costs and the busbar cost of electricity (COE) on a variety of design parameters (type of driver, chamber repetition rate, and net electric power). The results show the most attractive operating space for each set of driver/target assumptions and quantify the benefits of improvements in key design parameters. The base-case plant was a 1000-MW(e) plant containing a reactor vessel driven by an induction linac heavy-ion accelerator, run at 8 Hz with a driver energy of 6.73 MJ and a target yield of 350 MJ. The total direct cost for this plant was $2.6 billion. (All costs in this paper are given in equivalent 1988 dollars.) The COE was 8.5 cents/kWh. The COE and total capital costs for a 1000-MW(e) base plant are nearly independent of the chosen combination of repetition rate and driver energy for a driver operating between 4 and 10 Hz. For comparison, the COE for a coal or future fission plant would be 4.5--5.5 cents/kWh. The COE for a 1000-MW(e) plant could be reduced to 7.5 cents/kWh by using advanced targets and could be cut to 6.5 cents/kWh with conventional targets, if the driver cost could be cut in half. There is a large economy of scale with heavy-ion-driven inertial confinement fusion (ICF) plants. A 2000-MW(e) plant with a heavy-ion driver and a HYLIFE-2 chamber would have a COE of only 5.8 cents/kWh.

  6. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

    Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure, and quantifying the effects of each random source or their distribution parameters on failure probability or reliability. In this paper, the time-dependent parametric reliability sensitivity (PRS) analysis as well as the global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change in each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with a first-order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods, as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices, is demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
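A PRS index of the kind defined above, the partial derivative of a failure probability with respect to a distribution parameter, can be estimated with a generic score-function Monte Carlo estimator. This is not the envelope function method of the paper, and the limit state g below is a toy stand-in for a motion error function:

```python
import math
import random

def prs_mu(g, mu, sigma, n=200_000, seed=1):
    """Score-function Monte Carlo estimate of dPf/dmu for Pf = P(g(X) < 0),
    X ~ N(mu, sigma): since d/dmu log-density = (x - mu)/sigma^2, we have
    dPf/dmu = E[ 1{g(X) < 0} * (X - mu)/sigma^2 ]."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        if g(x) < 0:
            acc += (x - mu) / sigma ** 2
    return acc / n

# toy limit state: failure when the motion error X exceeds a threshold a,
# for which dPf/dmu = phi((a - mu)/sigma)/sigma is available in closed form
a, mu, sigma = 2.0, 0.0, 1.0
est = prs_mu(lambda x: a - x, mu, sigma)
exact = math.exp(-((a - mu) / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi))
print(est, exact)
```

The estimator needs no re-simulation per parameter perturbation, which is why derivative-of-reliability indices of this type are attractive for screening many distribution parameters.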

  7. Multi parametric sensitivity study applied to temperature measurement of metallic plasma facing components in fusion devices

    International Nuclear Information System (INIS)

    Aumeunier, M-H.; Corre, Y.; Firdaouss, M.; Gauthier, E.; Loarer, T.; Travere, J-M.; Gardarein, J-L.; EFDA JET Contributor

    2013-06-01

    In nuclear fusion experiments, the protection of the plasma facing components (PFCs) is commonly ensured by infrared (IR) thermography. Nevertheless, the surface monitoring of the new metallic plasma facing components, as in JET and ITER, is challenging. Indeed, the analysis of infrared signals is made more complicated in such a metallic environment, since the signals are perturbed by reflected photons coming from high-temperature regions. To address and anticipate this new measurement environment, predictive photonic models based on Monte Carlo ray tracing (SPEOS® CAA V5 Based) have been developed to assess the contribution of the reflected part of the total flux collected by the camera and the resulting temperature error. This paper deals with the effects of metal features, such as the emissivity and reflectivity models, on the accuracy of the surface temperature estimation. The reliability of the feature models is discussed by comparing the simulations with experimental data obtained with the wide-angle IR thermography system of the JET ITER-like wall. The impact of the temperature distribution is studied by considering two typical plasma scenarios, in limiter (ITER start-up scenario) and X-point (standard divertor scenario) configurations. The achievable measurement performance of the IR system and a risk analysis of its functionalities are discussed. (authors)

  8. An economic parametric analysis of the synthetic fuel produced by a fusion-fission complex

    International Nuclear Information System (INIS)

    Tai, A.S.; Krakowski, R.A.

    1980-01-01

    A simple analytic model is used to examine the economic constraints of a fusion-fission complex in which a portion of the thermal energy is used for producing synthetic fuel (synfuel). Since the values of many quantities are not well known, a parametric analysis has been carried out to test the sensitivity of the synfuel production cost to crucial economic and technological quantities (investment costs of the hybrid and synfuel plants, energy multiplication of the fission blanket, recirculating power fraction of the fusion driver, etc.). In addition, a minimum synfuel selling price has been evaluated, above which the fission-fusion-synfuel complex brings a higher economic benefit than a fusion-fission hybrid devoted entirely to fissile-fuel and electricity generation. This paper describes the energy flow diagram of the fusion-fission synfuel concept and presents the revenue-to-cost formulation and the breakeven synfuel selling price. The synfuel production cost given by the model is evaluated over a range of values of the crucial parameters. Assuming an electricity cost of 2.7 cents/kWh, an annual investment cost per energy unit of 4.2 to 6 $/FJ for the fusion-fission complex and 1.5 to 3 $/GJ for the synfuel plant, the synfuel production cost lies between 6.5 and 8.5 $/GJ. These production costs can compete with those evaluated for other processes. The study points out a potential use of the fusion-fission hybrid reactor for purposes other than fissile-fuel and electricity generation. (orig.)

  9. Towards the generation of a parametric foot model using principal component analysis: A pilot study.

    Science.gov (United States)

    Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan

    2016-06-01

    There have been many recent developments in patient-specific models, given their potential to provide more information on human pathophysiology and the increase in computational power. However, they have not yet been successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrated that the method adopted herein constitutes a first step towards the realization of a parametric foot model for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features of the foot and evaluate differences in shape between healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
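A PCA shape model of the kind described can be sketched in a few lines; the landmark data below are synthetic and the power-iteration implementation is illustrative, not the authors' pipeline:

```python
import random

def pca_first_mode(shapes, iters=200, seed=0):
    """Mean shape and first principal mode of variation via power iteration
    on the sample covariance; a one-parameter model then reads
    shape(b) = mean + b * mode."""
    n, d = len(shapes), len(shapes[0])
    mean = [sum(s[j] for s in shapes) / n for j in range(d)]
    centered = [[s[j] - mean[j] for j in range(d)] for s in shapes]
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(d)]
    for _ in range(iters):
        # apply the covariance implicitly: w = (1/n) X^T (X v)
        proj = [sum(c[j] * v[j] for j in range(d)) for c in centered]
        w = [sum(proj[i] * centered[i][j] for i in range(n)) / n
             for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

# synthetic "landmarks": four coordinates varying mostly along one direction
shapes = [[5 + t, 3 + 0.1 * t, 7 - 0.05 * t, 1.0] for t in (-2, -1, 0, 1, 2)]
mean, mode = pca_first_mode(shapes)
print(mean)
```

The recovered mode aligns with the direction of the injected variation, which is the property a parametric shape model exploits to compare populations with few degrees of freedom.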

  10. Fluid selection and parametric analysis on condensation temperature and plant height for a thermogravimetric heat pump

    International Nuclear Information System (INIS)

    Najafi, Behzad; Obando Vega, Pedro; Guilizzoni, Manfredo; Rinaldi, Fabio; Arosio, Sergio

    2015-01-01

    the system, while the COP values remain in a relatively small range. - Highlights: • The required plant height with different working fluids for a thermogravimetric heat pump was determined. • A fluid selection diagram including COP and the required height for different fluids was presented. • Sensitivity analysis to study the effect of height increasing factor on COP was performed. • Sensitivity analysis to investigate the effect of condensation temperature on the COP was also carried out

  11. A general first-order global sensitivity analysis method

    International Nuclear Information System (INIS)

    Xu Chonggang; Gertner, George Zdzislaw

    2008-01-01

    Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters
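The correlation ratio method mentioned as the benchmark, a non-parametric estimate of the first-order index S_i = Var(E[Y|X_i])/Var(Y), can be sketched by binning; the additive test function below is illustrative:

```python
import random
import statistics

def first_order_index(xs, ys, bins=20):
    """Correlation-ratio estimate of S_i = Var(E[Y|X_i]) / Var(Y) using
    equal-width bins on X_i (a simple non-parametric estimator)."""
    lo, hi = min(xs), max(xs)
    groups = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        groups[min(int((x - lo) / (hi - lo) * bins), bins - 1)].append(y)
    ybar = statistics.mean(ys)
    between = sum(len(g) * (statistics.mean(g) - ybar) ** 2
                  for g in groups if g) / len(ys)
    return between / statistics.pvariance(ys)

rng = random.Random(0)
x1 = [rng.random() for _ in range(20_000)]
x2 = [rng.random() for _ in range(20_000)]
y = [4 * a + b for a, b in zip(x1, x2)]
# for this additive model, analytically S1 = 16/17 ~ 0.94 and S2 = 1/17 ~ 0.06
print(first_order_index(x1, y), first_order_index(x2, y))
```

Unlike FAST's frequency-assignment machinery, this estimator only needs samples of (X_i, Y), which is why it also applies when inputs are correlated and serves as a natural cross-check.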

  12. UMTS Common Channel Sensitivity Analysis

    DEFF Research Database (Denmark)

    Pratas, Nuno; Rodrigues, António; Santos, Frederico

    2006-01-01

    and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels...... is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message to preamble offset, the initial SIR target and the preamble power step while on FACH it is the transmission power offset....

  13. TEMAC, Top Event Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.

    1988-01-01

    1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement
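The cut-set representation TEMAC works from can be illustrated with two standard evaluations: the rare-event (min-cut) upper bound and direct Monte Carlo over basic events. The cut sets and probabilities below are hypothetical, and this sketch does not reproduce TEMAC's general matrix approach:

```python
import math
import random

def cut_set_bound(cut_sets, p):
    """Rare-event (min-cut) upper bound on the top-event probability:
    the sum over minimal cut sets of the product of basic-event probabilities."""
    return sum(math.prod(p[e] for e in cs) for cs in cut_sets)

def top_event_mc(cut_sets, p, n=100_000, seed=0):
    """Monte Carlo evaluation: the top event occurs when every basic event
    of at least one minimal cut set occurs."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        state = {e: rng.random() < pe for e, pe in p.items()}
        if any(all(state[e] for e in cs) for cs in cut_sets):
            hits += 1
    return hits / n

# hypothetical fault tree: three minimal cut sets over four basic events
cut_sets = [("A", "B"), ("A", "C"), ("D",)]
p = {"A": 0.1, "B": 0.2, "C": 0.3, "D": 0.01}
print(cut_set_bound(cut_sets, p), top_event_mc(cut_sets, p))
```

The bound overstates the simulated probability slightly because cut sets overlap (here they share event A), which is exactly the kind of structure a matrix-based Boolean treatment handles systematically.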

  14. Nonlinear Dynamical Analysis for the Cable Excited with Parametric and Forced Excitation

    Directory of Open Access Journals (Sweden)

    C. Z. Qian

    2014-01-01

    Considering the effect of deck vibration on the cables of a cable-stayed bridge and using nonlinear structural dynamics theory, a nonlinear dynamical equation for a stayed cable excited by deck vibration is proposed. The research shows that the vertical vibration of the deck has a combined parametric and forced excitation effect on the cable when the inclination angle of the cable is taken into consideration. Using the multiple-scales method, the 1/2-order principal parametric resonance is studied and the bifurcation equation is obtained. In addition to the parameter analysis, the bifurcation characteristics of the dynamical system are studied. Finally, by means of numerical methods and the software MATHEMATICA, the effects of the system parameters on the dynamical behavior of the system are studied, and some useful conclusions are obtained.
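A generic single-mode form of such a combined parametric and forced excitation model (notation assumed here for illustration, not taken from the paper) is:

```latex
% Deck motion at frequency Omega modulates the effective stiffness
% (parametric term, amplitude mu) and acts as direct forcing (F);
% principal parametric resonance occurs near Omega ~ 2*omega, which is
% the 1/2-order resonance studied in the abstract.
\ddot{q} + 2\xi\omega\,\dot{q}
  + \omega^{2}\bigl(1 + \mu\cos\Omega t\bigr)\,q + \alpha\,q^{3}
  = F\cos\Omega t
```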

  15. MR diffusion tensor analysis of schizophrenic brain using statistical parametric mapping

    International Nuclear Information System (INIS)

    Yamada, Haruyasu; Abe, Osamu; Kasai, Kiyoto

    2005-01-01

    The purpose of this study is to investigate diffusion anisotropy in the schizophrenic brain by voxel-based analysis of diffusion tensor imaging (DTI), using statistical parametric mapping (SPM). We studied 33 patients with schizophrenia diagnosed by Diagnostic and Statistical Manual of Mental Disorders (DSM)-IV criteria and 42 matched controls. The data were obtained with a 1.5 T MRI system. We used single-shot spin-echo planar sequences (repetition time/echo time (TR/TE)=5000/102 ms, 5 mm slice thickness and 1.5 mm gap, field of view (FOV)=21 x 21 cm², number of excitations (NEX)=4, 128 x 128 pixel matrix) for diffusion tensor acquisition. Diffusion gradients (b-value of 500 or 1000 s/mm²) were applied on two axes simultaneously. Diffusion properties were measured along 6 non-collinear directions. The structural distortion induced by the large diffusion gradients was corrected based on each T2-weighted echo-planar image (b=0 s/mm²). Fractional anisotropy (FA) maps were generated on a voxel-by-voxel basis. T2-weighted echo-planar images were then segmented into gray matter, white matter, and cerebrospinal fluid using SPM (Wellcome Department of Imaging, University College London, UK). All apparent diffusion coefficient (ADC) and FA maps in native space were transformed to stereotactic space by registering each of the images to the same template image. The normalized data were smoothed and analyzed using SPM. A significant FA decrease in the patient group was found in the uncinate fasciculus, parahippocampal white matter, anterior cingulum and other areas (corrected p<0.05). No region of significant increase was noted. Our results may reflect reduced diffusion anisotropy of the white matter pathways of the limbic system, as shown by the decreased FA. Manual region-of-interest analysis is usually more sensitive than voxel-based analysis, but it is subjective and difficult to perform with anatomical reproducibility. 
Voxel-based analysis of the diffusion tensor

  16. Trend Analysis of Pahang River Using Non-Parametric Analysis: Mann-Kendall Trend Test

    International Nuclear Information System (INIS)

    Nur Hishaam Sulaiman; Mohd Khairul Amri Kamarudin; Ahmad Dasuki Mustafa; Muhammad Azizi Amran; Fazureen Azaman; Ismail Zainal Abidin; Norsyuhada Hairoma

    2015-01-01

    Flooding is common in Pahang, especially during the northeast monsoon season from November to February. Three river cross-stations, Lubuk Paku, Sg. Yap and Temerloh, were selected as the study area. Stream flow and water level data were gathered from DID records. The data sets were analysed using a non-parametric method, the Mann-Kendall trend test. The results obtained from the stream flow and water level analysis indicate statistically significant positive trends for Lubuk Paku (0.001) and Sg. Yap (<0.0001) from 1972-2011, with p-values < 0.05. The Temerloh (0.178) data from 1963-2011 showed no trend for the stream flow parameter but a negative trend for the water level parameter. Hydrological patterns and trends are strongly affected by external factors such as the northeast monsoon season, which originates in the South China Sea and affects Pahang from November to March. Other factors, such as development and management of the areas, can also influence the data and results. Hydrological patterns are important indicators of river trends such as stream flow and water level, and can be used for flood mitigation by local authorities. (author)
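
The Mann-Kendall test applied above can be sketched in a few lines. This is a generic textbook implementation using the normal approximation (no tie correction, no serial-correlation adjustment), not the study's own DID analysis code; the flow series below is invented for illustration.

```python
from itertools import combinations
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns (S, Z, two-sided p-value).

    S > 0 suggests an increasing trend, S < 0 a decreasing one. The
    normal approximation is reasonable for n >= 10; ties are ignored in
    the variance term for simplicity.
    """
    n = len(x)
    # S = sum of sign(x_j - x_i) over all pairs i < j
    s = sum((xj > xi) - (xj < xi) for xi, xj in combinations(x, 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)      # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return s, z, p

# A mostly increasing (hypothetical) annual stream-flow series should
# yield a significant positive trend.
flow = [10, 12, 11, 14, 16, 15, 18, 20, 19, 22, 24, 23]
s, z, p = mann_kendall(flow)
```

With a decreasing series the same function returns S < 0, matching the negative water-level trend reported for Temerloh.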

  17. Exergy analysis, parametric analysis and optimization for a novel combined power and ejector refrigeration cycle

    International Nuclear Information System (INIS)

    Dai Yiping; Wang Jiangfeng; Gao Lin

    2009-01-01

    A new combined power and refrigeration cycle is proposed, which combines the Rankine cycle and the ejector refrigeration cycle. This combined cycle produces both power output and refrigeration output simultaneously. It can be driven by the flue gas of a gas turbine or engine, solar energy, geothermal energy or industrial waste heat. An exergy analysis is performed to guide the thermodynamic improvement of this cycle, and a parametric analysis is conducted to evaluate the effects of the key thermodynamic parameters on the performance of the combined cycle. In addition, a parameter optimization is carried out by means of a genetic algorithm to reach the maximum exergy efficiency. The results show that the largest exergy loss due to irreversibility occurs in the heat addition processes, and the ejector causes the next largest exergy loss. It is also shown that the turbine inlet pressure, the turbine back pressure, the condenser temperature and the evaporator temperature have significant effects on the turbine power output, refrigeration output and exergy efficiency of the combined cycle. The optimized exergy efficiency is 27.10% under the given conditions.

  18. Optimization of suspension system and sensitivity analysis for improvement of stability in a midsize heavy vehicle

    Directory of Open Access Journals (Sweden)

    Emre Sert

    2017-06-01

    In summary, within the scope of this work, unlike previous studies, experiments involving physical tests (i.e., tilt table, fishhook and cornering) and numerical calculations are included. In addition, verification of the virtual model, parametric sensitivity analysis and a comparison of the virtual test and the physical test are performed. Because of the rigorous verification, sensitivity analysis and validation process, the results can be considered more reliable than those of previous studies.

  19. Parametric Phase-sensitive and Phase-insensitive All-optical Signal Processing on Multiple Nonlinear Platforms - Invited talk

    DEFF Research Database (Denmark)

    Peucheret, Christophe; Da Ros, Francesco; Vukovic, Dragana

    Parametric processes in materials presenting a second- or third-order nonlinearity have been widely used to demonstrate a wide range of all-optical signal processing functionalities, including amplification, wavelength conversion, regeneration, sampling, switching, modulation format conversion, o...

  20. Analysis of survival in breast cancer patients by using different parametric models

    Science.gov (United States)

    Enera Amran, Syahila; Asrul Afendi Abdullah, M.; Kek, Sie Long; Afiqah Muhamad Jamil, Siti

    2017-09-01

    In biomedical applications and clinical trials, right censoring often arises when studying time-to-event data: some individuals are still alive at the end of the study or are lost to follow-up at a certain time. Handling censored data properly is important to prevent biased results in the analysis. Therefore, this study analyzes right-censored data with three different parametric models: the exponential, Weibull and log-logistic models. Data on breast cancer patients from Hospital Sultan Ismail, Johor Bahru, from 30 December 2008 until 15 February 2017 were used to illustrate right censoring. The covariates included in this study are the survival time t, the age of each patient X1 and the treatment given to the patients X2. To determine the best parametric model for analysing the survival of breast cancer patients, the performance of each model was compared based on the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the log-likelihood value, using the statistical software R. When analysing the breast cancer data, all three distributions showed consistency with the data, with the line graph of the cumulative hazard function resembling a straight line through the origin. As a result, the log-logistic model was the best-fitting parametric model compared with the exponential and Weibull models, since it has the smallest AIC and BIC values and the largest log-likelihood.
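
The model-comparison step described above can be illustrated for the simplest of the three candidates, the exponential model, whose censored-data maximum-likelihood estimate has a closed form. The data below are hypothetical, not the hospital cohort, and the study itself used R rather than this Python sketch.

```python
import math

def exp_aic(times, events):
    """Fit an exponential survival model to right-censored data by
    maximum likelihood and return (lambda_hat, log_likelihood, AIC).

    `events[i]` is 1 if the event was observed for subject i and 0 if
    the observation was right-censored. For the exponential model the
    MLE has the closed form
        lambda_hat = (number of events) / (total follow-up time),
    and the log-likelihood is d*log(lambda) - lambda * sum(t).
    """
    d = sum(events)
    total_time = sum(times)
    lam = d / total_time
    loglik = d * math.log(lam) - lam * total_time
    aic = 2 * 1 - 2 * loglik   # AIC = 2k - 2 logL, one free parameter
    return lam, loglik, aic

# Hypothetical illustrative data (months), not the Hospital Sultan
# Ismail cohort: 1 = death observed, 0 = censored.
times  = [5, 8, 12, 3, 9, 15, 7, 20, 4, 11]
events = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]
lam, ll, aic = exp_aic(times, events)
```

Repeating the same fit-and-score step for Weibull and log-logistic models (which need numerical optimization) and picking the smallest AIC/BIC reproduces the selection procedure the abstract describes.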

  1. System-Level Sensitivity Analysis of SiNW-bioFET-Based Biosensing Using Lockin Amplification

    DEFF Research Database (Denmark)

    Patou, François; Dimaki, Maria; Kjærgaard, Claus

    2017-01-01

    carry out for the first time the system-level sensitivity analysis of a generic SiNW-bioFET model coupled to a custom-design instrument based on the lock-in amplifier. By investigating a large parametric space spanning over both sensor and instrumentation specifications, we demonstrate that systemwide...

  2. Parametric analysis of electromechanical and fatigue performance of total knee replacement bearing with embedded piezoelectric transducers

    Science.gov (United States)

    Safaei, Mohsen; Meneghini, R. Michael; Anton, Steven R.

    2017-09-01

    Total knee arthroplasty is a common procedure in the United States; it has been estimated that about 4 million people are currently living with primary knee replacement in this country. Despite huge improvements in material properties, implant design, and surgical techniques, some implants fail a few years after surgery. A lack of information about in vivo kinetics of the knee prevents the establishment of a correlated intra- and postoperative loading pattern in knee implants. In this study, a conceptual design of an ultra high molecular weight (UHMW) knee bearing with embedded piezoelectric transducers is proposed, which is able to measure the reaction forces from knee motion as well as harvest energy to power embedded electronics. A simplified geometry consisting of a disk of UHMW with a single embedded piezoelectric ceramic is used in this work to study the general parametric trends of an instrumented knee bearing. A combined finite element and electromechanical modeling framework is employed to investigate the fatigue behavior of the instrumented bearing and the electromechanical performance of the embedded piezoelectric. The model is validated through experimental testing and utilized for further parametric studies. Parametric studies consist of the investigation of the effects of several dimensional and piezoelectric material parameters on the durability of the bearing and electrical output of the transducers. Among all the parameters, it is shown that adding large fillet radii results in noticeable improvement in the fatigue life of the bearing. Additionally, the design is highly sensitive to the depth of piezoelectric pocket. Finally, using PZT-5H piezoceramics, higher voltage and slightly enhanced fatigue life is achieved.

  3. Parametric analysis of a down-scaled turbo jet engine suitable for drone and UAV propulsion

    Science.gov (United States)

    Wessley, G. Jims John; Chauhan, Swati

    2018-04-01

    This paper presents a detailed study on the need for downscaling gas turbine engines for UAV and drone propulsion. The procedure for downscaling and the parametric analysis of a downscaled engine using the Gas Turbine Simulation Program software GSP 11 are also presented. The need for a micro gas turbine engine in the thrust range of 0.13 to 4.45 kN to power UAVs and drones weighing in the range of 4.5 to 25 kg is considered, and in order to meet this requirement a parametric analysis of the scaled-down Allison J33-A-35 turbojet engine is performed. It is evident from the analysis that the thrust developed by the scaled engine and the thrust specific fuel consumption (TSFC) depend on the pressure ratio, the mass flow rate of air and the Mach number. A scaling factor of 0.195, corresponding to an air mass flow rate of 7.69 kg/s, produces a thrust in the range of 4.57 to 5.6 kN while operating at a Mach number of 0.3 within the altitude range of 5000 to 9000 m. The thermal and overall efficiency of the scaled engine are found to be 67% and 75%, respectively, for a pressure ratio of 2. The outcomes of this analysis form a strong base for further analysis, design and fabrication of micro gas turbine engines to propel future UAVs and drones.
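
The first-order logic behind such a downscaling study can be sketched as follows: at fixed cycle conditions (pressure ratio, temperatures, Mach number) specific thrust is roughly unchanged, so thrust scales linearly with air mass flow. The reference thrust and airflow below are approximate public figures for a J33-class engine, not the paper's exact GSP 11 inputs, and a real cycle analysis would add Reynolds-number and component-efficiency penalties that pure linear scaling ignores.

```python
def scale_engine(ref_thrust_kN, ref_mass_flow_kg_s, scale):
    """First-order engine downscaling sketch (assumption: constant
    specific thrust, so thrust and mass flow scale together)."""
    mass_flow = ref_mass_flow_kg_s * scale
    thrust = ref_thrust_kN * scale
    # specific thrust in N per (kg/s); invariant under this scaling
    specific_thrust = thrust * 1000 / mass_flow
    return mass_flow, thrust, specific_thrust

# Assumed J33-class reference point: ~20.5 kN thrust at ~39.4 kg/s airflow.
m, f, st = scale_engine(20.5, 39.4, 0.195)
```

With these assumed reference values, a 0.195 scale gives roughly the 7.7 kg/s airflow quoted in the abstract and a thrust of a few kN, the same order as the reported 4.57 to 5.6 kN range.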

  4. An approach to multi-attribute utility analysis under parametric uncertainty

    International Nuclear Information System (INIS)

    Kelly, M.; Thorne, M.C.

    2001-01-01

    The techniques of cost-benefit analysis and multi-attribute analysis provide a useful basis for informing decisions in situations where a number of potentially conflicting opinions or interests need to be considered, and where there are a number of possible decisions that could be adopted. When the input data to such decision-making processes are uniquely specified, cost-benefit analysis and multi-attribute utility analysis provide unambiguous guidance on the preferred decision option. However, when the data are not uniquely specified, the application and interpretation of these techniques are more complex. Herein, an approach to multi-attribute utility analysis (and hence, as a special case, cost-benefit analysis) when input data are subject to parametric uncertainty is presented. The approach is based on the use of a Monte Carlo technique, and has recently been applied to options for the remediation of former uranium mining liabilities in a number of Central and Eastern European States
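
A minimal sketch of the Monte Carlo approach described above, assuming independent uniform parameter uncertainties and an additive utility function. The option names, attribute bounds and weights are invented for illustration and are not taken from the remediation study.

```python
import random

def mc_mau(options, weights, n_samples=10_000, seed=1):
    """Monte Carlo multi-attribute utility analysis under parametric
    uncertainty: sample uncertain attribute scores, compute an additive
    weighted utility per option, and report how often each option comes
    out best across samples."""
    rng = random.Random(seed)
    wins = {name: 0 for name in options}
    for _ in range(n_samples):
        utilities = {}
        for name, attrs in options.items():
            # attrs maps attribute -> (low, high) uniform uncertainty bounds
            utilities[name] = sum(
                weights[a] * rng.uniform(lo, hi)
                for a, (lo, hi) in attrs.items()
            )
        best = max(utilities, key=utilities.get)
        wins[best] += 1
    return {name: count / n_samples for name, count in wins.items()}

# Hypothetical remediation options; all attribute scores pre-scaled to
# [0, 1] with higher = better (so "cost" here is a cost-avoidance score).
options = {
    "cap_in_place": {"cost": (0.6, 0.9), "dose_reduction": (0.3, 0.5)},
    "excavate":     {"cost": (0.1, 0.4), "dose_reduction": (0.7, 0.9)},
}
weights = {"cost": 0.4, "dose_reduction": 0.6}
prefs = mc_mau(options, weights)
```

The output is a preference probability per option rather than a single winner, which is exactly how parametric uncertainty blurs the "unambiguous guidance" of the deterministic case.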

  5. CATDAT - A program for parametric and nonparametric categorical data analysis user's manual, Version 1.0

    International Nuclear Information System (INIS)

    Peterson, James R.; Haas, Timothy C.; Lee, Danny C.

    2000-01-01

    Natural resource professionals are increasingly required to develop rigorous statistical models that relate environmental data to categorical responses. Recent advances in the statistical and computing sciences have led to the development of sophisticated methods for parametric and nonparametric analysis of data with categorical responses. The statistical software package CATDAT was designed to make some of these relatively new and powerful techniques available to scientists. The CATDAT statistical package includes four analytical techniques: generalized logit modeling; binary classification trees; extended K-nearest neighbor classification; and modular neural networks

  6. Structural-functional and parametric analysis of the social function of pharmaceutical industry

    Directory of Open Access Journals (Sweden)

    N. O. Tkachenko

    2016-12-01

    Full Text Available Pharmacy has always had a special social value and has been sensitive to new social changes in society and the state. These changes allow a better understanding of the issues associated with increasing the efficiency of pharmaceutical care to the population. The aim of the work is to identify, justify and summarize the main elements of the social function of pharmacy, as a component of the health care system, in order to further evaluate the properties of the pharmaceutical industry as a system. Materials and methods. To achieve this goal, a systematic approach and a set of research methods, including structural, functional and parametric analysis, logical cognition, comparison and generalization, were used. As research materials, we used the results of fundamental and applied research by national and foreign experts on the issue. Results and discussion. The basic principles of the welfare state and of pharmacy as a socially oriented sector of the economy have been determined. We found that the pharmaceutical industry is an agent that implements a number of elements of the social function, such as pharmaceutical assistance to the population; the production of social goods (drugs, medical products, medical cosmetics, etc.); creating and providing workplaces; paying taxes (replenishment of the state budget); the formation and development of human capital; research and innovation activities; charity and sponsorship; and environmental protection. Ukraine formally acceded to the United Nations document «Agenda for the XXI Century». This agreement commits the government to develop and implement sustainable development strategies, whose main components are social responsibility, social integration, efficient workers and effective owners. Social responsibility acts as a feedback mechanism for the realization of social policy through the main sectors of the economy. Conclusions. We have summarized the information and

  7. Data fusion qualitative sensitivity analysis

    International Nuclear Information System (INIS)

    Clayton, E.A.; Lewis, R.E.

    1995-09-01

    Pacific Northwest Laboratory was tasked with testing, debugging, and refining the Hanford Site data fusion workstation (DFW), with the assistance of Coleman Research Corporation (CRC), before delivering the DFW to the environmental restoration client at the Hanford Site. Data fusion is the mathematical combination (or fusion) of disparate data sets into a single interpretation. The data fusion software used in this study was developed by CRC. The data fusion software developed by CRC was initially demonstrated on a data set collected at the Hanford Site where three types of data were combined. These data were (1) seismic reflection, (2) seismic refraction, and (3) depth to geologic horizons. The fused results included a contour map of the top of a low-permeability horizon. This report discusses the results of a sensitivity analysis of data fusion software to variations in its input parameters. The data fusion software developed by CRC has a large number of input parameters that can be varied by the user and that influence the results of data fusion. Many of these parameters are defined as part of the earth model. The earth model is a series of 3-dimensional polynomials with horizontal spatial coordinates as the independent variables and either subsurface layer depth or values of various properties within these layers (e.g., compression wave velocity, resistivity) as the dependent variables

  8. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    Science.gov (United States)

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady-state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and a weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists with a new tool for the analysis of their SBML models of biochemical and cellular processes.

  9. [Detection of quadratic phase coupling between EEG signal components by nonparametric and parametric methods of bispectral analysis].

    Science.gov (United States)

    Schmidt, K; Witte, H

    1999-11-01

    Recently the assumption of the independence of individual frequency components in a signal has been rejected, for example, for the EEG during defined physiological states such as sleep or sedation [9, 10]. Thus, the use of higher-order spectral analysis, capable of detecting interrelations between individual signal components, has proved useful. The aim of the present study was to investigate the quality of various non-parametric and parametric estimation algorithms using simulated as well as real physiological data. We employed standard algorithms available for MATLAB. The results clearly show that parametric bispectral estimation is superior to non-parametric estimation in terms of the quality of peak localisation and discrimination from other peaks.

  10. Parametric Analysis of Design Parameter Effects on the Performance of a Solar Desiccant Evaporative Cooling System in Brisbane, Australia

    Directory of Open Access Journals (Sweden)

    Yunlong Ma

    2017-06-01

    Full Text Available Solar desiccant cooling is widely considered an attractive replacement for conventional vapor compression air conditioning systems because of its environmental friendliness and energy efficiency. The performance of a solar desiccant cooling system strongly depends on the input parameters associated with the system components, such as the solar collector, storage tank and backup heater. In order to understand the implications of different design parameters for the system performance, this study conducted a parametric analysis of the solar collector area, storage tank volume and backup heater capacity of a solid solar desiccant cooling system for an office building in the Brisbane, Australia climate. A parametric analysis of the outdoor air humidity ratio control set-point which triggers the operation of the desiccant wheel was also carried out. The simulation results show that increasing either the storage tank volume or the solar collector area results in both an increased solar fraction (SF) and system coefficient of performance (COP), while at the same time reducing the backup heater energy consumption. However, the system performance is more sensitive to the storage tank volume than to the collector area. From the economic perspective, a storage capacity of 30 m3/576 m2 has the lowest life cycle cost (LCC) of $405,954 for the solar subsystem. In addition, a 100 kW backup heater capacity is preferable for satisfying the design regeneration heating coil hot water inlet temperature set-point with relatively low backup heater energy consumption. Moreover, an outdoor air humidity ratio control set-point of 0.008 kgWater/kgDryAir is more reasonable, as it both guarantees the indoor design conditions and achieves low backup heater energy consumption.

  11. Parametric analysis of the statistical model of the stick-slip process

    Science.gov (United States)

    Lima, Roberta; Sampaio, Rubens

    2017-06-01

    In this paper, a parametric analysis of the statistical model of the response of a dry-friction oscillator is performed. The oscillator is a spring-mass system that moves over a base with a rough surface. Due to this roughness, the mass is subject to a dry-friction force modeled as Coulomb friction. The system is stochastically excited by an imposed bang-bang base motion, whose velocity is modeled by a Poisson process for which a probabilistic model is fully specified. The excitation induces stochastic stick-slip oscillations in the system, whose response is a random sequence alternating between stick and slip modes. From realizations of the system, a statistical model is constructed for this sequence, in which the variables of interest are modeled as random variables: for example, the number of time intervals in which stick or slip occurs, the instants at which they begin, and their durations. Samples of the system response are computed by integrating the dynamic equation of the system using independent samples of the base motion. Statistics and histograms of the random variables that characterize the stick-slip process are estimated from the generated samples. The objective of the paper is to analyze how these estimated statistics and histograms vary with the system parameters, i.e., to perform a parametric analysis of the statistical model of the stick-slip process.

  12. Rapid and sensitive trace gas detection with continuous wave Optical Parametric Oscillator-based Wavelength Modulation Spectroscopy

    NARCIS (Netherlands)

    Arslanov, D.D.; Spunei, M.; Ngai, A.K.Y.; Cristescu, S.M.; Lindsay, I.D.; Lindsay, I.D.; Boller, Klaus J.; Persijn, S.T.; Harren, F.J.M.

    2011-01-01

    A fiber-amplified Distributed Bragg Reflector diode laser is used to pump a continuous-wave, singly resonant Optical Parametric Oscillator (OPO). The output radiation covers the 3–4 μm region with the capability of rapid (100 THz/s) and broad (5 cm⁻¹) mode-hop-free tuning. Wavelength Modulation Spectroscopy is

  13. Frequency domain analysis and design of nonlinear systems based on Volterra series expansion a parametric characteristic approach

    CERN Document Server

    Jing, Xingjian

    2015-01-01

    This book is a systematic summary of some new advances in the area of nonlinear analysis and design in the frequency domain, focusing on application-oriented theory and methods based on the GFRF concept, mainly developed by the author over the past 8 years. The main results are formulated uniformly with a parametric characteristic approach, which provides a convenient and novel insight into nonlinear influences on system output response in terms of characteristic parameters and thus facilitates nonlinear analysis and design in the frequency domain. The book starts with a brief introduction to the background of nonlinear analysis in the frequency domain, followed by recursive algorithms for the computation of GFRFs for different parametric models and nonlinear output frequency properties. Thereafter the parametric characteristic analysis method is introduced, which leads to a new understanding and formulation of the GFRFs, and the nonlinear characteristic output spectrum (nCOS) and the nCOS-based analysis a...

  14. Parametric economic analysis of natural gas reburn technologies. Topical report, June 1991-June 1992

    International Nuclear Information System (INIS)

    Bluestein, J.

    1992-06-01

    The report presents a parametric economic analysis of natural gas reburn technologies used for control of nitrogen oxides emissions in coal-fired utility boilers. It is a competitive assessment of the economics of gas reburn performed in the context of regulatory requirements and competing conventional technologies. The reburn technologies examined are basic gas reburn, reburn with sorbent injection, and advanced gas reburn. The analysis determined the levelized costs of these technologies in $/ton of NOx removed with respect to a gas-coal price differential in $/MMBtu of energy input. For those niches in which reburn was less economical, a breakeven capital cost analysis was carried out to determine the R&D goals which would make reburn more cost competitive

  15. Probabilistic sensitivity analysis of biochemical reaction systems.

    Science.gov (United States)

    Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John

    2009-09-07

    Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.
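
The derivative-based techniques that the paper contrasts with its probabilistic approach are often computed as normalized local sensitivity coefficients. A minimal finite-difference sketch, on a toy first-order decay model rather than the MAPK cascade, is:

```python
import math

def normalized_sensitivity(f, params, i, rel_step=1e-6):
    """Normalized local sensitivity S_i = (p_i / f(p)) * df/dp_i,
    estimated by central finite differences. Assumes params[i] != 0 so
    a relative step can be taken."""
    p = list(params)
    h = p[i] * rel_step
    p[i] += h
    up = f(p)
    p[i] -= 2 * h
    down = f(p)
    deriv = (up - down) / (2 * h)
    return params[i] / f(list(params)) * deriv

# Toy model: first-order decay observed at time t, y = y0 * exp(-k*t).
# The exact normalized sensitivity with respect to k is -k*t.
def decay(p):
    y0, k, t = p
    return y0 * math.exp(-k * t)

s_k = normalized_sensitivity(decay, [10.0, 0.5, 2.0], 1)  # exact: -1.0
```

The drawbacks the abstract lists follow directly from this form: the coefficient is valid only for small perturbations around the nominal parameters and says nothing about high-order interactions, which is what the variance-based probabilistic approach addresses.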

  16. Brain F-18 FDG PET for localization of epileptogenic zones in frontal lobe epilepsy: visual assessment and statistical parametric mapping analysis

    International Nuclear Information System (INIS)

    Kim, Yu Kyeong; Lee, Dong Soo; Lee, Sang Kun; Chung, Chun Kee; Yeo, Jeong Seok; Chung, June Key; Lee, Myung Chul

    2001-01-01

    We evaluated the sensitivity of F-18 FDG PET, by visual assessment and statistical parametric mapping (SPM) analysis, for the localization of epileptogenic zones in frontal lobe epilepsy. Twenty-four patients with frontal lobe epilepsy were examined. All patients exhibited improvements after surgical resection (Engel class I or II). Upon pathological examination, 18 patients revealed cortical dysplasia, 4 patients revealed tumor, and 2 patients revealed cortical scar. Hypometabolic lesions were identified on F-18 FDG PET by visual assessment and SPM analysis; for the SPM analysis, the cutoff threshold was varied. MRI showed structural lesions in 12 patients and normal results in the remaining 12. F-18 FDG PET correctly localized epileptogenic zones in 13 patients (54%) by visual assessment. The sensitivity of F-18 FDG PET in MR-negative patients (50%) was similar to that in MR-positive patients (67%). On SPM analysis, sensitivity decreased as the p-value threshold decreased. Using an uncorrected p value of 0.05 as the threshold, the sensitivity of SPM analysis was 63%, which was not statistically different from that of visual assessment. F-18 FDG PET was sensitive in finding epileptogenic zones, revealing hypometabolic areas in MR-negative as well as MR-positive patients with frontal lobe epilepsy. SPM analysis showed sensitivity comparable to visual assessment and could be used as an aid in the diagnosis of epileptogenic zones in frontal lobe epilepsy

  17. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

    In this analysis, we study the sensitivity of the model to variations in the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.

  18. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    Science.gov (United States)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
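
The complex-variable approach mentioned above is commonly realized as complex-step differentiation: evaluate the function at a complexified input and read the derivative off the imaginary part. A minimal sketch on a scalar test function, not the DYMORE code, is:

```python
import cmath

def complex_step_derivative(f, x, h=1e-20):
    """Complex-step differentiation: df/dx ≈ Im(f(x + i*h)) / h.

    Unlike finite differences there is no subtractive cancellation, so
    h can be made tiny and the result is accurate to machine precision.
    Requires f to be implemented with complex-capable operations.
    """
    return f(complex(x, h)).imag / h

# Test function f(x) = sin(x) * exp(x); exact f'(x) = (cos x + sin x) e^x.
d = complex_step_derivative(lambda x: cmath.sin(x) * cmath.exp(x), 1.0)
```

This is why the paper can use complex-variable DYMORE sensitivities as a trusted reference when verifying the adjoint-based FUN3D sensitivities: the complex-step result is step-size independent in a way finite differences are not.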

  19. Combined-cycle steam section parametric analysis by thermo-economic simulation

    International Nuclear Information System (INIS)

    Macor, A.; Reini, M.

    1991-01-01

    In the case of industrial cogeneration plants, thermal power production is, in general, strictly dependent on the technological requirements of the production cycle, whereas the electrical power produced can be consumed on site or ceded to the utility grid. In both cases, an economic worth is assigned to this energy, which influences the overall economic feasibility of the plant. The purpose of this paper is to examine the parametric inter-relationships between economic and thermodynamic performance optimization techniques. Comparisons are then made between the results obtained with the thermo-economic analysis technique suggested in this paper and those obtained with the indicators of other exergo-economic analysis techniques.

  20. EFFECTS OF PARAMETRIC VARIATIONS ON SEISMIC ANALYSIS METHODS FOR NON-CLASSICALLY DAMPED COUPLED SYSTEMS

    International Nuclear Information System (INIS)

    XU, J.; DEGRASSI, G.

    2000-01-01

    A comprehensive benchmark program was developed by Brookhaven National Laboratory (BNL) to perform an evaluation of state-of-the-art methods and computer programs for performing seismic analyses of coupled systems with non-classical damping. The program, which was sponsored by the US Nuclear Regulatory Commission (NRC), was designed to address various aspects of application and limitations of these state-of-the-art analysis methods to typical coupled nuclear power plant (NPP) structures with non-classical damping, and was carried out through analyses of a set of representative benchmark problems. One objective was to examine the applicability of various analysis methods to problems with different dynamic characteristics unique to coupled systems. The examination was performed using parametric variations for three simple benchmark models. This paper presents the comparison and evaluation of the program participants' results against the BNL exact solutions for the applicable ranges of modeling dynamic characteristic parameters.

  1. Parametric distribution approach for flow availability in small hydro potential analysis

    Science.gov (United States)

    Abdullah, Samizee; Basri, Mohd Juhari Mat; Jamaluddin, Zahrul Zamri; Azrulhisham, Engku Ahmad; Othman, Jamel

    2016-10-01

    Small hydro systems are an important source of renewable energy and have been recognized worldwide as a clean energy source. However, because small hydropower generation uses the potential energy in flowing water to produce electricity, its output is often questioned as inconsistent and intermittent. Potential analysis of a small hydro system, which depends mainly on the availability of water, requires knowledge of the water flow or stream flow distribution. This paper presents the possibility of applying the Pearson system to approximate the stream flow availability distribution in small hydro systems. To account for the stochastic nature of stream flow, the Pearson parametric distribution approximation was computed using the defining property of the Pearson system: a direct correspondence between the distribution and its first four statistical moments. Applying several statistical moments in small hydro potential analysis makes it possible to analyse the varying shapes of the stream flow distribution.
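
The moment-based approximation described above can be sketched with SciPy. The full Pearson-system family selection is not available in SciPy, so the Pearson type III member (`scipy.stats.pearson3`, parameterized by the first three moments) serves as a stand-in; the gamma-distributed synthetic flows and the design discharge are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
flow = rng.gamma(shape=2.0, scale=5.0, size=2000)  # synthetic daily stream flows

# First four sample moments characterize the distribution's shape
mean, std = flow.mean(), flow.std(ddof=1)
skew, kurt = stats.skew(flow), stats.kurtosis(flow)

# Pearson type III: matches mean, standard deviation and skewness directly
dist = stats.pearson3(skew, loc=mean, scale=std)

# Flow availability: probability that flow exceeds a design discharge
q_design = 10.0
availability = dist.sf(q_design)
```

The excess kurtosis computed above is the extra moment the full Pearson system would use to select among its distribution types.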

  2. Characterizing Heterogeneity within Head and Neck Lesions Using Cluster Analysis of Multi-Parametric MRI Data.

    Directory of Open Access Journals (Sweden)

    Marco Borri

    The aim was to describe a methodology, based on cluster analysis, for partitioning multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, in order to characterize functional heterogeneity within head and neck tumour volumes, and to evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted MRI data from a cohort of patients with squamous cell carcinoma of the head and neck. Cumulative distributions of voxels, containing pre- and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy was assessed relative to the number of clusters. The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes.
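
The core of such a workflow, partitioning voxel-wise parameter vectors and validating the number of clusters, might look as below with scikit-learn; the two synthetic "functional parameters" are placeholders for the DCE and diffusion measures used in the study, and the silhouette criterion is one common choice of cluster-validation index.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Synthetic voxels: each row holds one voxel's two functional parameters
voxels = np.vstack([
    rng.normal([0.0, 0.0], 0.3, size=(200, 2)),
    rng.normal([2.0, 2.0], 0.3, size=(200, 2)),
    rng.normal([0.0, 3.0], 0.3, size=(200, 2)),
])

# Cluster validation: pick k by the silhouette criterion
scores = {}
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxels)
    scores[k] = silhouette_score(voxels, labels)
best_k = max(scores, key=scores.get)
```

Repeating the partition on pre- and post-treatment data then allows the voxel counts per cluster to be compared across time points, as in the study.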

  3. Parametric mapping using spectral analysis for 11C-PBR28 PET reveals neuroinflammation in mild cognitive impairment subjects.

    Science.gov (United States)

    Fan, Zhen; Dani, Melanie; Femminella, Grazia D; Wood, Melanie; Calsolaro, Valeria; Veronese, Mattia; Turkheimer, Federico; Gentleman, Steve; Brooks, David J; Hinz, Rainer; Edison, Paul

    2018-07-01

    Neuroinflammation and microglial activation play an important role in amnestic mild cognitive impairment (MCI) and Alzheimer's disease. In this study, we investigated the spatial distribution of neuroinflammation in MCI subjects, using spectral analysis (SA) to generate parametric maps and quantify 11C-PBR28 PET, and compared these with compartmental and other kinetic models of quantification. Thirteen MCI subjects and nine healthy controls (HC) were enrolled in this study. Subjects underwent 11C-PBR28 PET scans with arterial cannulation. Spectral analysis with an arterial plasma input function was used to generate 11C-PBR28 parametric maps. These maps were then compared with regional 11C-PBR28 VT (volume of distribution) using a two-tissue compartment model and Logan graphic analysis. Amyloid load was also assessed with 18F-Flutemetamol PET. With SA, three component peaks were identified in addition to blood volume. The 11C-PBR28 impulse response function (IRF) at 90 min produced the lowest coefficient of variation. Single-subject analysis using this IRF demonstrated microglial activation in five out of seven amyloid-positive MCI subjects. IRF parametric maps of 11C-PBR28 uptake revealed a group-wise significant increase in neuroinflammation in amyloid-positive MCI subjects versus HC in multiple cortical association areas, particularly in the temporal lobe. Interestingly, compartmental analysis detected a group-wise increase in 11C-PBR28 binding in the thalamus of amyloid-positive MCI subjects, while Logan parametric maps did not perform well. This study demonstrates for the first time that spectral analysis can be used to generate parametric maps of 11C-PBR28 uptake, and is able to detect microglial activation in amyloid-positive MCI subjects. IRF parametric maps of 11C-PBR28 uptake allow voxel-wise single-subject analysis and could be used to evaluate microglial activation in individual subjects.

  4. Parametric analysis of technology and policy tradeoffs for conventional and electric light-duty vehicles

    International Nuclear Information System (INIS)

    Barter, Garrett E.; Reichmuth, David; Westbrook, Jessica; Malczynski, Leonard A.; West, Todd H.; Manley, Dawn K.; Guzman, Katherine D.; Edwards, Donna M.

    2012-01-01

    A parametric analysis is used to examine the supply-demand interactions between the US light-duty vehicle (LDV) fleet, its fuels, and the corresponding primary energy sources through 2050. The analysis emphasizes competition between conventional internal combustion engine (ICE) vehicles, including hybrids, and electric vehicles (EVs), represented by both plug-in hybrid and battery electric vehicles. We find that EV market penetration could double relative to our baseline case with policies to extend consumers' effective payback period to 7 years. EVs can also reduce per-vehicle petroleum consumption by up to 5% with opportunities to increase that fraction at higher adoption rates. However, EVs have limited ability to reduce LDV greenhouse gas (GHG) emissions with the current energy source mix. Alone, EVs cannot drive compliance with the most aggressive GHG emission reduction targets, even if the electricity grid shifts towards natural gas powered sources. Since ICEs will dominate the LDV fleet for up to 40 years, conventional vehicle efficiency improvements have the greatest potential for reductions in LDV GHG emissions and petroleum consumption over this time. Specifically, achieving fleet average efficiencies of 72 mpg or greater can reduce average GHG emissions by 70% and average petroleum consumption by 81%. - Highlights: ► Parametric analysis of the light duty vehicle fleet, its fuels, and energy sources. ► Conventional vehicles will dominate the fleet for up to 40 years. ► Improving gasoline powertrain efficiency is essential for GHG and oil use reduction. ► Electric vehicles have limited leverage over GHG emissions with the current grid mix. ► Consumer payback period extensions can double electric vehicle market share.

  5. Using Spline Regression in Semi-Parametric Stochastic Frontier Analysis: An Application to Polish Dairy Farms

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    of specifying an unsuitable functional form and thus, model misspecification and biased parameter estimates. Given these problems of the DEA and the SFA, Fan, Li and Weersink (1996) proposed a semi-parametric stochastic frontier model that estimates the production function (frontier) by non......), Kumbhakar et al. (2007), and Henningsen and Kumbhakar (2009). The aim of this paper and its main contribution to the existing literature is the estimation of semi-parametric stochastic frontier models using a different non-parametric estimation technique: spline regression (Ma et al. 2011). We apply...... efficiency of Polish dairy farms contributes to the insight into this dynamic process. Furthermore, we compare and evaluate the results of this spline-based semi-parametric stochastic frontier model with results of other semi-parametric stochastic frontier models and of traditional parametric stochastic...

  6. An adaptive least-squares global sensitivity method and application to a plasma-coupled combustion prediction with parametric correlation

    Science.gov (United States)

    Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.

    2018-05-01

    We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, such as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computation expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total
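
A toy version of the regression-based PDD idea (a sketch, not the authors' implementation): fit an orthonormal-polynomial surrogate by least squares and read variance-based sensitivity indices directly off the coefficients. The two-input test model and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x1 + 1.5 * x2**2          # toy model to decompose

# Orthonormal Legendre polynomials with respect to U(-1, 1)
p1 = lambda x: np.sqrt(3.0) * x
p2 = lambda x: np.sqrt(5.0) * (3.0 * x**2 - 1.0) / 2.0

# Design matrix: constant term plus univariate basis functions per input
Phi = np.column_stack([np.ones(n), p1(x1), p2(x1), p1(x2), p2(x2)])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# With an orthonormal basis, each squared coefficient is a variance share
var_total = np.sum(coef[1:] ** 2)
S1 = (coef[1] ** 2 + coef[2] ** 2) / var_total   # main effect of x1
S2 = (coef[3] ** 2 + coef[4] ** 2) / var_total   # main effect of x2
```

Because the basis is orthonormal with respect to the input density, the least-squares fit doubles as an ANOVA decomposition: no high-dimensional integrals are evaluated, which is the efficiency argument made in the abstract.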

  7. Decentralized control of large-scale systems: Fixed modes, sensitivity and parametric robustness. Ph.D. Thesis - Universite Paul Sabatier, 1985

    Science.gov (United States)

    Tarras, A.

    1987-01-01

    The problem of stabilization/pole placement under structural constraints of large scale linear systems is discussed. The existence of a solution to this problem is expressed in terms of fixed modes. The aim is to provide a bibliographic survey of the available results concerning the fixed modes (characterization, elimination, control structure selection to avoid them, control design in their absence) and to present the author's contribution to this problem which can be summarized by the use of the mode sensitivity concept to detect or to avoid them, the use of vibrational control to stabilize them, and the addition of parametric robustness considerations to design an optimal decentralized robust control.

  8. Spectrophotometry of zirconocene-polymethyl alumoxan catalytic systems: analysis of main components and parametric simulation

    International Nuclear Information System (INIS)

    Ryabenko, A.G.; Fajngol'd, E.E.; Ushakov, E.N.; Bravaya, N.M.

    2005-01-01

    The transformation of the electronic absorption spectra of the zirconocene catalytic systems Ph2CCpFluZrCl2-polymethylalumoxane (MAO) and rac-Me2Si(2-Me,4-PhInd)2ZrCl2-MAO (Flu = fluorenyl, Ind = indenyl) in toluene is studied as the reagent ratio AlMAO/Zr changes from 0 to 3000 mol mol-1. Analysis of the spectroscopic data using statistical methods made it possible to determine the number of reaction products in each system. A reaction model involving three equilibria is suggested. Effective values of the equilibrium constants and the absorption spectra of the individual reaction products were evaluated by means of parametric self-simulation of the experimental spectra.

  9. PARAMETRIC ANALYSIS OF A MINIATURIZED INVERTED II SHAPED ANTENNA FOR WIRELESS SENSOR NETWORK APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. Shanmugapriya

    2015-06-01

    A compact and simple design of a CPW-fed planar antenna for wireless sensor network applications with good size reduction is presented. The proposed antenna consists of an inverted Π-shaped metal patch on a printed circuit board fed by a 50-Ω coplanar waveguide (CPW). A parametric analysis of the length and width is carried out. The designed antenna's physical dimensions are 32 mm (length) x 26 mm (width) x 1.6 mm (height). The antenna structure has been modeled and its performance evaluated using a method-of-moments based electromagnetic simulator, IE3D. A return loss of -22.5 dB and a VSWR of 1.34 are obtained. The radiation pattern shows that the antenna radiates in all directions. The antenna was fabricated and tested, and the measured results are in good agreement with the simulated ones.

  10. Structure of the alexithymic brain: A parametric coordinate-based meta-analysis.

    Science.gov (United States)

    Xu, Pengfei; Opmeer, Esther M; van Tol, Marie-José; Goerlich, Katharina S; Aleman, André

    2018-04-01

    Alexithymia refers to deficiencies in identifying and expressing emotions. This might be related to changes in structural brain volumes, but its neuroanatomical basis remains uncertain as studies have shown heterogeneous findings. Therefore, we conducted a parametric coordinate-based meta-analysis. We identified seventeen structural neuroimaging studies (including a total of 2586 individuals with different levels of alexithymia) investigating the association between gray matter volume and alexithymia. Volumes of the left insula, left amygdala, orbital frontal cortex and striatum were consistently smaller in people with high levels of alexithymia. These areas are important for emotion perception and emotional experience. Smaller volumes in these areas might lead to deficiencies in appropriately identifying and expressing emotions. These findings provide the first quantitative integration of results pertaining to the structural neuroanatomical basis of alexithymia. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Parametric analysis and optimization for a combined power and refrigeration cycle

    International Nuclear Information System (INIS)

    Wang Jiangfeng; Dai Yiping; Gao Lin

    2008-01-01

    A combined power and refrigeration cycle is proposed, which combines the Rankine cycle and the absorption refrigeration cycle. This combined cycle uses a binary ammonia-water mixture as the working fluid and produces both power output and refrigeration output simultaneously with only one heat source. A parametric analysis is conducted to evaluate the effects of thermodynamic parameters on the performance of the combined cycle. It is shown that heat source temperature, environment temperature, refrigeration temperature, turbine inlet pressure, turbine inlet temperature, and basic solution ammonia concentration have significant effects on the net power output, refrigeration output and exergy efficiency of the combined cycle. A parameter optimization is achieved by means of a genetic algorithm to reach the maximum exergy efficiency. The optimized exergy efficiency is 43.06% under the given conditions.

  12. Parametric analysis for a new combined power and ejector-absorption refrigeration cycle

    International Nuclear Information System (INIS)

    Wang Jiangfeng; Dai Yiping; Zhang Taiyong; Ma Shaolin

    2009-01-01

    A new combined power and ejector-absorption refrigeration cycle is proposed, which combines the Rankine cycle and the ejector-absorption refrigeration cycle, and could produce both power output and refrigeration output simultaneously. This combined cycle, which originates from the cycle proposed by authors previously, introduces an ejector between the rectifier and the condenser, and provides a performance improvement without greatly increasing the complexity of the system. A parametric analysis is conducted to evaluate the effects of the key thermodynamic parameters on the cycle performance. It is shown that heat source temperature, condenser temperature, evaporator temperature, turbine inlet pressure, turbine inlet temperature, and basic solution ammonia concentration have significant effects on the net power output, refrigeration output and exergy efficiency of the combined cycle. It is evident that the ejector can improve the performance of the combined cycle proposed by authors previously.

  13. Risk Characterization uncertainties associated description, sensitivity analysis

    International Nuclear Information System (INIS)

    Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.

    2013-01-01

    The PowerPoint presentation covers risks at the estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, the formulation of guidelines for carcinogenic and genotoxic compounds, and risks for subpopulations.

  14. Effects of registration error on parametric response map analysis: a simulation study using liver CT-perfusion images

    International Nuclear Information System (INIS)

    Lausch, A; Lee, T Y; Wong, E; Jensen, N K G; Chen, J; Lock, M

    2014-01-01

    Purpose: To investigate the effects of registration error (RE) on parametric response map (PRM) analysis of pre- and post-radiotherapy (RT) functional images. Methods: Arterial blood flow (ABF) maps were generated from the CT-perfusion scans of 5 patients with hepatocellular carcinoma. ABF values within each patient map were modified to produce seven new ABF maps simulating 7 distinct post-RT functional change scenarios. Ground truth PRMs were generated for each patient by comparing the simulated and original ABF maps. Each simulated ABF map was then deformed by different magnitudes of realistic respiratory motion in order to simulate RE. PRMs were generated for each of the deformed maps and then compared to the ground truth PRMs to produce estimates of RE-induced misclassification. Main findings: The percentages of voxels misclassified as decreasing, no change, and increasing all increased with RE. For all patients, increasing RE was observed to increase the number of high post-RT ABF voxels associated with low pre-RT ABF voxels and vice versa. 3 mm of average tumour RE resulted in 18-45% tumour voxel misclassification rates. Conclusions: RE-induced misclassification poses challenges for PRM analysis in the liver, where registration accuracy tends to be lower. A quantitative understanding of the sensitivity of the PRM method to registration error is required if PRMs are to be used to guide radiation therapy dose painting techniques.
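
The voxel-wise classification at the heart of a PRM can be written in a few lines; the change threshold and the toy ABF values below are illustrative.

```python
import numpy as np

def parametric_response_map(pre, post, threshold):
    """Classify each voxel by its pre-to-post change in a functional parameter
    (e.g. arterial blood flow): +1 increase, 0 no change, -1 decrease."""
    delta = post - pre
    prm = np.zeros(delta.shape, dtype=int)
    prm[delta > threshold] = 1
    prm[delta < -threshold] = -1
    return prm

pre = np.array([100.0, 120.0, 80.0, 90.0])   # toy pre-RT ABF values
post = np.array([150.0, 118.0, 40.0, 95.0])  # toy post-RT ABF values
prm = parametric_response_map(pre, post, threshold=10.0)
```

Registration error enters exactly here: if `post` is spatially misaligned with `pre`, voxels are compared with the wrong partners and labels flip, which is the misclassification the study quantifies.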

  15. Object-sensitive Type Analysis of PHP

    NARCIS (Netherlands)

    Van der Hoek, Henk Erik; Hage, J

    2015-01-01

    In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the

  16. A non-parametric meta-analysis approach for combining independent microarray datasets: application using two microarray datasets pertaining to chronic allograft nephropathy

    Directory of Open Access Journals (Sweden)

    Archer Kellie J

    2008-02-01

    Background: With the popularity of DNA microarray technology, multiple groups of researchers have studied the gene expression of similar biological conditions. Different methods have been developed to integrate the results from various microarray studies, though most of them rely on distributional assumptions, such as the t-statistic based, mixed-effects model, or Bayesian model methods. However, often the sample size for each individual microarray experiment is small. Therefore, in this paper we present a non-parametric meta-analysis approach for combining data from independent microarray studies, and illustrate its application on two independent Affymetrix GeneChip studies that compared the gene expression of biopsies from kidney transplant recipients with chronic allograft nephropathy (CAN) to those with normal functioning allografts. Results: The simulation study comparing the non-parametric meta-analysis approach to a commonly used t-statistic based approach shows that the non-parametric approach has better sensitivity and specificity. For the application on the two CAN studies, we identified 309 distinct genes that were differentially expressed in CAN. By applying Fisher's exact test to identify enriched KEGG pathways among the genes called differentially expressed, we found 6 KEGG pathways to be over-represented among the identified genes. We used the expression measurements of the identified genes as predictors to predict the class labels for 6 additional biopsy samples, and the predicted results all conformed to their pathologist-diagnosed class labels. Conclusion: We present a new approach for combining data from multiple independent microarray studies. This approach is non-parametric and does not rely on any distributional assumptions. The rationale behind the approach is logically intuitive and can be easily understood by researchers not having advanced training in statistics. Some of the identified genes and pathways have been
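
The pathway-enrichment step mentioned in the record, Fisher's exact test on a 2x2 table of differentially expressed (DE) versus background genes, can be sketched as follows; the counts are invented for illustration.

```python
from scipy.stats import fisher_exact

# Contingency table for one hypothetical KEGG pathway:
#                    in pathway   not in pathway
# DE genes                25            284
# background genes        60          13940
table = [[25, 284], [60, 13940]]

# One-sided test: is the pathway over-represented among DE genes?
odds_ratio, p_value = fisher_exact(table, alternative="greater")
enriched = p_value < 0.05
```

Repeating this test for every KEGG pathway (with an appropriate multiple-testing correction) yields the set of over-represented pathways reported in the abstract.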

  17. On the sensitivity of teleseismic full-waveform inversion to earth parametrization, initial model and acquisition design

    Science.gov (United States)

    Beller, S.; Monteiller, V.; Combe, L.; Operto, S.; Nolet, G.

    2018-02-01

    Full-waveform inversion (FWI) is not yet a mature imaging technology for lithospheric imaging from teleseismic data. Therefore, its promise and pitfalls need to be assessed more accurately according to the specifications of teleseismic experiments. Three important issues are related to (1) the choice of the lithospheric parametrization for optimization and visualization, (2) the initial model and (3) the acquisition design, in particular in terms of receiver spread and sampling. These three issues are investigated with a realistic synthetic example inspired by the CIFALPS experiment in the Western Alps. Isotropic elastic FWI is implemented with an adjoint-state formalism and aims to update three parameter classes by minimization of a classical least-squares difference-based misfit function. Three different subsurface parametrizations, combining density (ρ) with P and S wave speeds (Vp and Vs), P and S impedances (Ip and Is), or elastic moduli (λ and μ), are first discussed based on their radiation patterns before their assessment by FWI. We conclude that the (ρ, λ, μ) parametrization provides the FWI models that best correlate with the true ones after recombining a posteriori the (ρ, λ, μ) optimization parameters into Ip and Is. Owing to the low frequency content of teleseismic data, 1-D reference global models such as PREM provide sufficiently accurate initial models for FWI after the smoothing that is necessary to remove the imprint of the layering. Two kinds of station deployments are assessed: a coarse areal geometry versus a dense linear one. We unambiguously conclude that a coarse areal geometry should be favoured, as it dramatically increases the penetration in depth of the imaging as well as the horizontal resolution. This is because the areal geometry significantly increases local wavenumber coverage, through a broader sampling of the scattering and dip angles, compared to a linear deployment.

  18. A hybrid approach for global sensitivity analysis

    International Nuclear Information System (INIS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-01-01

    Distribution based sensitivity analysis (DSA) computes sensitivity of the input random variables with respect to the change in distribution of output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, the computational issue associated with this method prohibits its use for complex structures involving costly finite element analysis. For addressing this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model which integrates the concepts of analysis of variance decomposition, extended bases and homotopy algorithm. By integrating PCFE into DSA, it is possible to considerably alleviate the computational burden. Three examples are presented to demonstrate the performance of the proposed approach for sensitivity analysis. For all the problems, proposed approach yields excellent results with significantly reduced computational effort. The results obtained, to some extent, indicate that proposed approach can be utilized for sensitivity analysis of large scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • Proposed approach integrates PCFE within distribution based sensitivity analysis. • Proposed approach is highly efficient.
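
As an illustration of the variance-based indices that global sensitivity methods of this kind estimate (a plain Monte Carlo sketch, not the PCFE-accelerated scheme of the paper), first-order Sobol indices for the standard Ishigami benchmark can be computed with a pick-freeze estimator:

```python
import numpy as np

def ishigami(x):
    # Standard three-input benchmark function for global sensitivity analysis
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(42)
n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([fA, fB]))

# Pick-freeze estimator of first-order indices (Saltelli-style)
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # freeze all inputs except x_i
    S.append(np.mean(fB * (ishigami(ABi) - fA)) / var_y)
# Analytical values are roughly S1 = 0.31, S2 = 0.44, S3 = 0.0
```

The computational burden of this brute-force estimator on expensive models is exactly what surrogate-based approaches like the PCFE coupling in this paper aim to remove.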

  19. Parametric thermo-hydraulic analysis of the TF system of JT-60SA during fast discharge

    International Nuclear Information System (INIS)

    Polli, Gian Mario; Lacroix, Benoit; Zani, Louis; Besi Vetrella, Ugo; Cucchiaro, Antonio

    2013-01-01

    Highlights: • We modeled the central clock-wise pancake of the JT-60SA TF magnet at the EOB. • We simulated a quench followed by a fast discharge. • We evaluated the temperature and pressure rises in the nominal configuration. • We evaluated the effect of several parameter changes on the thermal-hydraulic response of the system. -- Abstract: The evolution of the conductor temperature and of the helium pressure in the central pancake of the TF superconducting magnet of the JT-60SA tokamak during a quench scenario is discussed here. The quench is triggered by a heat disturbance applied at the end of burning and followed by a fast safety discharge. A parametric study aimed at assessing the robustness of the calculation is also addressed, with special regard to the voltage threshold used to define the occurrence of the quench and to the time delay, which covers all possible delays in the fast discharge after quench detection. Finally, sensitivity analyses assessed the influence of different parameters: the material properties of the strands (RRR, copper fraction), the magnitude and spatial length of the triggering disturbance, and the magnetic field distribution. The numerical evaluations were performed in the framework of the Broader Approach Agreement in collaboration with CEA, ENEA and the JT-60SA European Home Team, using the 1D code Gandalf.

  20. Adaptive GSA-based optimal tuning of PI controlled servo systems with reduced process parametric sensitivity, robust stability and controller robustness.

    Science.gov (United States)

    Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan

    2014-11-01

    This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.
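
The ITAE term of such an objective function is straightforward to evaluate by simulation; the plant below (an integrator plus a first-order lag), the gains, and the unit-step setpoint are illustrative stand-ins for the paper's servo system model.

```python
def itae_cost(kp, ki, dt=1e-3, t_end=5.0, tau=0.1):
    """ITAE index, the integral of t*|e(t)|, for the unit-step response of a
    PI-controlled plant G(s) = 1/(s*(tau*s + 1)), integrated by forward Euler."""
    y = v = integ = 0.0           # plant position, velocity, error integral
    cost, t = 0.0, 0.0
    while t < t_end:
        e = 1.0 - y               # unit-step setpoint
        integ += e * dt
        u = kp * e + ki * integ   # PI control law
        v += (-v + u) / tau * dt  # first-order lag dynamics
        y += v * dt               # integrator
        cost += t * abs(e) * dt
        t += dt
    return cost

cost = itae_cost(kp=2.0, ki=1.0)  # one candidate tuning
```

Wrapping `itae_cost` (augmented with the sensitivity-function integrals and stability constraints described in the abstract) in a global optimizer, the paper's choice being an adaptive gravitational search algorithm, then yields the tuned PI gains.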

  1. Parametric analysis of the thermal effects on the divertor in tokamaks during plasma disruptions

    International Nuclear Information System (INIS)

    Bruhn, M.L.

    1988-04-01

    Plasma disruptions are an ever-present danger to the plasma-facing components in today's tokamak fusion reactors. This threat results from our lack of understanding and limited ability to control this complex phenomenon. In particular, severe energy deposition occurs on the divertor component of the double-null configured tokamak reactor during such disruptions. A hybrid computational model developed to estimate and graphically illustrate the global thermal effects of disruptions on the divertor plates is described in detail. The quasi-two-dimensional computer code, TADDPAK (Thermal Analysis Divertor during Disruptions PAcKage), is used to conduct a parametric analysis for the TIBER II Tokamak Engineering Test Reactor Design. The dependence of these thermal effects on divertor material choice, disruption pulse length, disruption pulse shape, and the characteristic thickness of the plasma scrape-off layer is investigated for this reactor design. Results and conclusions from this analysis are presented. Improvements to this model and issues that require further investigation are discussed. Cursory analysis for ITER (International Thermonuclear Experimental Reactor) is also presented in the appendix. 75 refs., 49 figs., 10 tabs.

  2. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated

    OpenAIRE

    Sandurska, Elżbieta; Szulc, Aleksandra

    2016-01-01

    Sandurska Elżbieta, Szulc Aleksandra. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated. Journal of Education Health and Sport. 2016;6(13):275-287. eISSN 2391-8306. DOI http://dx.doi.org/10.5281/zenodo.293762 http://ojs.ukw.edu.pl/index.php/johs/article/view/4278 The journal has had 7 points in Ministry of Science and Higher Education parametric evaluation. Part B item 754 (09.12.2016).

  3. Sensitivity analysis of a PWR pressurizer

    International Nuclear Information System (INIS)

    Bruel, Renata Nunes

    1997-01-01

    A sensitivity analysis relative to the parameters and to the modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by varying the key parameters and theoretical modelling assumptions, which generated a comprehensive matrix of the influences of each change analysed. The major influences observed were the flashing phenomenon and the steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)

  4. Parametric analysis of LIBRETTO-4 and 5 in-pile tritium transport model on EcosimPro

    Energy Technology Data Exchange (ETDEWEB)

    Alcalde, Pablo Martínez, E-mail: pablomiguel.martinez@externos.ciemat.es [Universidad Nacional de Educación a Distancia (UNED), c/Juan del Rosal 12, 28040 Madrid (Spain); Moreno, Carlos; Ibarra, Ángel [CIEMAT, Avda. Complutense 40, 28040 Madrid (Spain)

    2014-10-15

    Highlights: • Introduction of a new tritium transport model of LIBRETTO-4 and 5 on EcosimPro{sup ®}. • Analysis of model input parameter and variable sensitivities and their effects on simulated tritium fluxes. • Demonstration of the high dependence of the tritium out-flux on lead-lithium parameters. • Rough fits achieved by increasing the Li17Pb solubility or the recombination. - Abstract: A new model of the LIBRETTO-4/1, 4/2 and 5 experiments has been developed with the EcosimPro{sup ®} tool to simulate in-pile tritium breeding and transport into two separate purge-gas channels with He + 0.1%H{sub 2}. Release from the lead-lithium eutectic plenum, with coupled permeation through an austenitic steel wall in the first case and single permeation through EUROFER-97, in the temperature range of 300–550 °C, can be simulated by tuning the transport parameters involved. A parametric study has been performed to reduce the degrees of freedom and to determine the error caused in the simulation by the uncertainty in the experimental input data. The information obtained is essential for the experimental benchmarking. The Tritium Permeation Percentage (TPP) is a calculated output parameter with low variation, between 2 and 6% over the whole experimental time (730 Full Power Days for LIBRETTO-4 and 520 for LIBRETTO-5), which makes it easy to compare. The tritium transport parameter ranges verifying this output are defined herein.

  5. Parametric analysis of energy quality management for district in China using multi-objective optimization approach

    International Nuclear Information System (INIS)

    Lu, Hai; Yu, Zitao; Alanne, Kari; Xu, Xu; Fan, Liwu; Yu, Han; Zhang, Liang; Martinac, Ivo

    2014-01-01

    Highlights: • A time-effective multi-objective design optimization scheme is proposed. • The scheme aims at finding a suitable 3E energy system for the specific case. • A realistic case located in China is used for the analysis. • A parametric study tests the effects of different parameters. - Abstract: Due to increasing energy demands and global warming, energy quality management (EQM) for districts has gained importance over the last few decades. The evaluation of the optimum energy systems for specific districts is an essential part of EQM. This paper presents a detailed analysis of the optimum energy systems for a district sited in China. A multi-objective optimization approach based on a Genetic Algorithm (GA) is proposed for the analysis. The optimization process searches for suitable 3E (minimum economic cost and environmental burden as well as maximum efficiency) energy systems. Here, life cycle CO2 equivalent (LCCO2), life cycle cost (LCC) and exergy efficiency (EE) are set as the optimization objectives. The optimum energy systems for the Chinese case are then presented. Finally, the effects of different energy parameters are investigated. The results show that the optimum energy systems may vary significantly depending on some of these parameters.
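    The core of such a multi-objective search is the Pareto-dominance comparison between candidate designs. The sketch below shows a dominance test and a non-dominated filter for three minimized objectives (LCC, LCCO2, and exergy efficiency negated so that it too is minimized); the design vectors are hypothetical, and a full GA like the paper's would add selection, crossover and mutation around this filter.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(objs):
    """Return the indices of the non-dominated solutions."""
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]

# Hypothetical designs as (LCC in M$, LCCO2 in kt, -EE).
designs = [(10.0, 5.0, -0.40), (12.0, 4.0, -0.45),
           (11.0, 6.0, -0.35), (9.0, 7.0, -0.50)]
front = pareto_front(designs)   # third design is dominated by the first
```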

  6. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
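    As a toy illustration of a sensitivity derivative, the response of a single spring u = F/k is differentiated with respect to its stiffness and checked against a central finite difference. Real structural reanalysis differentiates the full system K(d) u = F; this sketch only shows the derivative-checking idea on an invented one-degree-of-freedom case.

```python
def displacement(k, f=100.0):
    """Static response of a single spring: u = F / k."""
    return f / k

def central_diff(fun, x, h=1e-5):
    """Second-order central finite-difference derivative."""
    return (fun(x + h) - fun(x - h)) / (2.0 * h)

k = 50.0
analytic = -100.0 / k ** 2          # d(F/k)/dk = -F/k^2
numeric = central_diff(displacement, k)
```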

  7. Sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.

    2014-01-01

    Life cycle assessments require many input parameters and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case studies. Two

  8. Robust Stability Clearance of Flight Control Law Based on Global Sensitivity Analysis

    OpenAIRE

    Ou, Liuli; Liu, Lei; Dong, Shuai; Wang, Yongji

    2014-01-01

    To validate the robust stability of the flight control system of a hypersonic flight vehicle, which suffers from a large number of parametrical uncertainties, a new clearance framework based on structural singular value ( $\\mu $ ) theory and global uncertainty sensitivity analysis (SA) is proposed. In this framework, SA serves as the preprocess of the uncertain model to be analysed, helping engineers determine which uncertainties affect the stability of the closed-loop system only slightly. By ig...

  9. A parametric FE modeling of brake for non-linear analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Ibrahim; Fatouh, Yasser [Automotive and Tractors Technology Department, Faculty of Industrial Education, Helwan University, Cairo (Egypt); Aly, Wael [Refrigeration and Air-Conditioning Technology Department, Faculty of Industrial Education, Helwan University, Cairo (Egypt)

    2013-07-01

    A parametric model of a drum brake, based on the 3-D Finite Element Method (FEM), for non-linear contact analysis is presented. Several parameters are examined in this study, such as the effect of the drum-lining interface stiffness, the coefficient of friction, and the line pressure on the interface contact. First, a modal analysis of the drum brake is performed to obtain the natural frequencies and the instability of the drum, and to facilitate transforming the modal elements into contact elements. It is shown that the unsymmetric solver of the modal analysis is efficient enough to solve this linear problem once the non-linear behavior of the contact between the drum and the lining has been transformed into a linear behavior. SOLID45, which is a linear element, is used in the modal analysis; the model is then transferred to the non-linear elements Targe170 and Conta173, which represent the drum and the lining in the contact analysis. Contact problems are highly non-linear and require significant computer resources to solve, and they present two significant difficulties. First, the region of contact is not known in advance; it depends on the boundary conditions, such as the line pressure and the drum and friction material specifications. Second, these problems need to take friction into consideration. Finally, the analysis showed a good distribution of the nodal reaction forces on the slotted lining contact surface, and the slot in the middle of the lining can help remove the wear debris generated by the friction between the lining and the drum. Accurate contact stiffness gives a good representation of the pressure distribution between the lining and the drum. Full contact of the front part of the slotted lining could occur at piston pressures of 20, 40, 60 and 80 bar, with partial contact in the rear part of the slotted lining.

  10. Ethical sensitivity in professional practice: concept analysis.

    Science.gov (United States)

    Weaver, Kathryn; Morse, Janice; Mitcham, Carl

    2008-06-01

    This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity-preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offer an inclusive view of ethical sensitivity that addresses some of the limitations of prior conceptualizations.

  11. Parametric Analysis of a Two-Shaft Aeroderivate Gas Turbine of 11.86 MW

    Directory of Open Access Journals (Sweden)

    R. Lugo-Leyte

    2015-08-01

    Full Text Available Aeroderivative gas turbines are widely used for power generation in the oil and gas industry. In offshore marine platforms, aeroderivative gas turbines provide the energy required to mechanically drive compressors, pumps and electric generators. Therefore, the study of the performance of aeroderivative gas turbines based on a parametric analysis is relevant for carrying out an engine diagnostic, which can lead to operational as well as predictive and/or corrective maintenance actions. This work presents a methodology based on exergetic analysis to estimate the irreversibilities and exergetic efficiencies of the main components of a two-shaft aeroderivative gas turbine. The studied engine is the Solar Turbines Mars 100, which is rated at 11.86 MW. In this engine, the air is compressed in an axial compressor, achieving a pressure ratio of 17.7 relative to ambient conditions and a high-pressure turbine inlet temperature of 1220 °C. Even though the thermal efficiency associated with the pressure ratio of 17.7 is 1% lower than the maximum thermal efficiency, the irreversibilities related to this pressure ratio are approximately 1 GW lower than the irreversibilities at the pressure ratio that is optimal for thermal efficiency. In addition, this paper develops a mathematical model to estimate the high-pressure turbine inlet temperature as well as the pressure ratios of the low- and high-pressure turbines.
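    A component-level exergy balance of the kind used in such an analysis can be sketched for the axial compressor alone. The inlet state, isentropic efficiency and ideal-gas air properties below are assumed round numbers for illustration, not data from the Mars 100 study.

```python
import math

CP, R = 1.005, 0.287   # ideal-gas air properties, kJ/(kg K)

def compressor_exergy(t1, pr, eta_s, t0=298.15):
    """Specific work, irreversibility and exergetic efficiency of an
    adiabatic compressor (temperatures in K, energies in kJ/kg)."""
    t2s = t1 * pr ** (R / CP)                  # isentropic outlet temperature
    w = CP * (t2s - t1) / eta_s                # actual specific work
    t2 = t1 + w / CP                           # actual outlet temperature
    ds = CP * math.log(t2 / t1) - R * math.log(pr)   # entropy generation
    irr = t0 * ds                              # irreversibility (Gouy-Stodola)
    return w, irr, 1.0 - irr / w               # exergetic efficiency

w, irr, eff = compressor_exergy(t1=298.15, pr=17.7, eta_s=0.85)
```

    The exergetic efficiency here is the exergy increase of the air stream divided by the work input, which for an adiabatic machine equals one minus the irreversibility per unit work.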

  12. Parametrical analysis of latent heat and cold storage for heating and cooling of rooms

    International Nuclear Information System (INIS)

    Osterman, E.; Hagel, K.; Rathgeber, C.; Butala, V.; Stritih, U.

    2015-01-01

    One of the problems we face today is minimizing energy consumption while maintaining indoor thermal comfort in buildings. A potential solution to this issue is the use of phase change materials (PCMs) in thermal energy storage (TES), where cold is accumulated during summer nights in order to reduce the cooling load during the day. In winter, on the other hand, heat from a solar air collector is stored for the evening and morning hours when solar radiation is not available. The main objective of the paper is to examine experimentally whether it is possible to use such a storage unit for heating as well as for cooling. For this purpose, 30 plates filled with paraffin (melting point around 22°C) were positioned in the TES and subjected to the same initial and boundary conditions as are expected in reality. The experimental work covered flow visualization, measurements of the air velocity in the channels between the plates, a parametric analysis of the TES thermal response, and measurements of the pressure drops. The results indicate that this type of storage technology could be advantageously used in real conditions. For optimized thermal behavior, only the plate thickness should be reduced. - Highlights: • Thermal properties of paraffin RT22HC were measured. • Flow visualization was carried out and the velocity between plates was measured. • Thermal and pressure drop analyses were performed. • Melting times are too long; however, use of the storage tank for heating and cooling looks promising.

  13. Parametric analysis of the curved slats fixed mirror solar concentrator for medium temperature applications

    International Nuclear Information System (INIS)

    Pujol-Nadal, Ramon; Martínez-Moll, Víctor

    2014-01-01

    Highlights: • We thermally modeled the Curved Slats Fixed Mirror Solar Concentrator (CSFMSC). • A parametric analysis for three climates and two axial orientations is given. • The optimum values are determined for a range of the design parameters. • The CSFMSC has been well characterized for medium-temperature operation. - Abstract: The Curved Slats Fixed Mirror Solar Concentrator (CSFMSC) is a solar concentrator with a static reflector and a moving receiver. An optical analysis using ray-tracing tools was presented in a previous study as a function of three design parameters: the number of mirrors N, the ratio of focal length to reflector width F/W, and the aperture concentration Ca. However, less is known about the thermal behavior of this geometry. In this communication, the integrated thermal output of the CSFMSC has been determined in order to find the optimal values of the design parameters at a working temperature of 200 °C. The results were obtained for three different climates and two axial orientations (North–South and East–West). The results show that the CSFMSC can produce heat at 200 °C with an annual thermal efficiency of 41, 47, and 51%, depending on the location considered (Munich, Palma de Mallorca, and Cairo, respectively). The best CSFMSC geometries as a function of the design parameters are presented for medium-temperature applications.

  14. Quantitative analysis of diffusion tensor imaging (DTI) using statistical parametric mapping (SPM) for brain disorders

    Science.gov (United States)

    Lee, Jae-Seung; Im, In-Chul; Kang, Su-Man; Goo, Eun-Hoe; Kwak, Byung-Joon

    2013-07-01

    This study aimed to quantitatively analyze data from diffusion tensor imaging (DTI) using statistical parametric mapping (SPM) in patients with brain disorders and to assess its potential utility for analyzing brain function. DTI was obtained by performing 3.0-T magnetic resonance imaging for patients with Alzheimer's disease (AD) and vascular dementia (VD), and the data were analyzed using Matlab-based SPM software. The two-sample t-test was used for error analysis of the location of the activated pixels. We compared regions of white matter where the fractional anisotropy (FA) values were low and the apparent diffusion coefficients (ADCs) were increased. In the AD group, the FA values were low in the right superior temporal gyrus, right inferior temporal gyrus, right sub-lobar insula, and right occipital lingual gyrus whereas the ADCs were significantly increased in the right inferior frontal gyrus and right middle frontal gyrus. In the VD group, the FA values were low in the right superior temporal gyrus, right inferior temporal gyrus, right limbic cingulate gyrus, and right sub-lobar caudate tail whereas the ADCs were significantly increased in the left lateral globus pallidus and left medial globus pallidus. In conclusion, by using DTI and SPM analysis, we were able to not only determine the structural state of the regions affected by brain disorders but also quantitatively analyze and assess brain function.
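    The two-sample comparison underlying such a voxel-wise analysis can be sketched with a Welch t statistic on two small groups. The FA values below are invented for the example, and degrees of freedom and p-values are omitted from this sketch.

```python
import statistics as st

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    va, vb = st.variance(a), st.variance(b)     # sample variances (n - 1)
    return (st.mean(a) - st.mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

fa_patients = [0.30, 0.32, 0.28, 0.31, 0.29]    # hypothetical FA values
fa_controls = [0.38, 0.40, 0.37, 0.41, 0.39]
t = welch_t(fa_patients, fa_controls)           # strongly negative: lower FA
```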

  15. Infinitesimal-area 2D radiative analysis using parametric surface representation, through NURBS

    Energy Technology Data Exchange (ETDEWEB)

    Daun, K.J.; Hollands, K.G.T.

    1999-07-01

    The use of form factors in the treatment of radiant enclosures requires that the radiosity and surface properties be treated as uniform over finite areas. This restriction can be relaxed by applying an infinitesimal-area analysis, where the radiant exchange is taken to be between infinitesimal areas, rather than finite areas. This paper presents a generic infinitesimal-area formulation that can be applied to two-dimensional enclosure problems. (Previous infinitesimal-area analyses have largely been restricted to specific, one-dimensional problems.) Specifically, the paper shows how the analytical expression for the kernel of the integral equation can be obtained without human intervention, once the enclosure surface has been defined parametrically. This can be accomplished by using a computer algebra package or by using NURBS algorithms, which are the industry standard for the geometrical representations used in CAD-CAM codes. Once the kernel has been obtained by this formalism, the 2D integral equation can be set up and solved numerically. The result is a single general-purpose infinitesimal-area analysis code that can proceed from surface specification to solution. The authors have implemented this 2D code and tested it on 1D problems, whose solutions have been given in the literature, obtaining agreement commensurate with the accuracy of the published solutions.
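    The kernel in question can also be evaluated numerically once points and outward normals on the parametric surfaces are available. The sketch below hard-codes the 2D kernel cos(θ1)cos(θ2)/(2r) and checks it on an assumed test geometry of two parallel plates a unit distance apart, where integrating over the opposing (effectively infinite) plate must recover a view factor of 1; the paper's symbolic, NURBS-based derivation of the kernel is not reproduced here.

```python
import math

def kernel(p1, n1, p2, n2):
    """2D exchange kernel cos(t1) * cos(t2) / (2 r) between surface points
    p1 and p2 with unit normals n1 and n2 facing each other."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    r = math.hypot(rx, ry)
    cos1 = (n1[0] * rx + n1[1] * ry) / r
    cos2 = -(n2[0] * rx + n2[1] * ry) / r
    return cos1 * cos2 / (2.0 * r)

# Lower plate point at the origin (normal up), upper plate y = 1 (normal
# down); integrate the kernel along the upper plate with a rectangle rule.
dv = 0.01
f = sum(kernel((0.0, 0.0), (0.0, 1.0), (v, 1.0), (0.0, -1.0)) * dv
        for v in [i * dv - 50.0 for i in range(10001)])
```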

  16. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to performing the sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices using a metamodel. The paper also presents an application to a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
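    For reference, a first-order Sobol' index can also be estimated directly by Monte Carlo with a pick-freeze design, without the metamodel the paper uses to cut the computational cost. The additive test model below is invented; its analytical first-order index for the first input is 0.8.

```python
import random

def sobol_first(model, dim, idx, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of the first-order index of input idx."""
    rng = random.Random(seed)
    a = [[rng.random() for _ in range(dim)] for _ in range(n)]
    b = [[rng.random() for _ in range(dim)] for _ in range(n)]
    ya = [model(x) for x in a]
    # "Pick-freeze": copy sample B but take column idx from sample A.
    yc = [model([ai if k == idx else bi
                 for k, (ai, bi) in enumerate(zip(xa, xb))])
          for xa, xb in zip(a, b)]
    mean_a = sum(ya) / n
    var_a = sum((y - mean_a) ** 2 for y in ya) / n
    cov = sum(y1 * y2 for y1, y2 in zip(ya, yc)) / n - mean_a * (sum(yc) / n)
    return cov / var_a

# Var(X1) / Var(X1 + 0.5 X2) = (1/12) / (1.25/12) = 0.8 for independent uniforms.
s1 = sobol_first(lambda x: x[0] + 0.5 * x[1], dim=2, idx=0)
```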

  17. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    Science.gov (United States)

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
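    In the indirect route, the Patlak analysis reduces, per pixel, to a straight-line fit of C_t/C_p against the normalized integral of the input function, with slope Ki and intercept V. A self-contained sketch on synthetic, noise-free data (constant input and known Ki and V, so the fit can be checked) is:

```python
def patlak_fit(t, cp, ct):
    """Return (Ki, V) from the Patlak plot x = int(cp)/cp, y = ct/cp."""
    integ, xs, ys = 0.0, [], []
    for i in range(1, len(t)):
        # Cumulative trapezoidal integral of the input function cp(t).
        integ += 0.5 * (cp[i] + cp[i - 1]) * (t[i] - t[i - 1])
        xs.append(integ / cp[i])
        ys.append(ct[i] / cp[i])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    ki = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))          # least-squares slope
    return ki, my - ki * mx                           # slope and intercept

# Synthetic data: constant input cp = 1, so ct = Ki * t + V exactly.
t = [float(i) for i in range(0, 41, 2)]
cp = [1.0] * len(t)
ki_true, v_true = 0.05, 0.3
ct = [ki_true * ti + v_true for ti in t]
ki, v = patlak_fit(t, cp, ct)
```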

  18. International comparisons of the technical efficiency of the hospital sector: panel data analysis of OECD countries using parametric and non-parametric approaches.

    Science.gov (United States)

    Varabyova, Yauheniya; Schreyögg, Jonas

    2013-09-01

    There is a growing interest in the cross-country comparisons of the performance of national health care systems. The present work provides a comparison of the technical efficiency of the hospital sector using unbalanced panel data from OECD countries over the period 2000-2009. The estimation of the technical efficiency of the hospital sector is performed using nonparametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Internal and external validity of findings is assessed by estimating the Spearman rank correlations between the results obtained in different model specifications. The panel-data analyses using two-step DEA and one-stage SFA show that countries, which have higher health care expenditure per capita, tend to have a more technically efficient hospital sector. Whether the expenditure is financed through private or public sources is not related to the technical efficiency of the hospital sector. On the other hand, the hospital sector in countries with higher income inequality and longer average hospital length of stay is less technically efficient. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
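    The validity check described above boils down to a Spearman rank correlation between efficiency scores from different model specifications. A minimal sketch (no tie correction; the DEA and SFA scores are invented) is:

```python
def rank(v):
    """Ranks 1..n of the values in v (ties not handled in this sketch)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(x, y):
    """Spearman rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

dea = [0.92, 0.85, 0.78, 0.66, 0.95]   # hypothetical DEA scores
sfa = [0.88, 0.80, 0.75, 0.70, 0.90]   # hypothetical SFA scores
rho = spearman(dea, sfa)               # identical rankings give rho = 1
```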

  19. Sensitivity analysis in optimization and reliability problems

    International Nuclear Information System (INIS)

    Castillo, Enrique; Minguez, Roberto; Castillo, Carmen

    2008-01-01

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods
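    The closed-form linear-programming sensitivities mentioned above have a simple numerical reading: the derivative of the optimal value with respect to a constraint's right-hand side is that constraint's dual price. The sketch below checks this on a hand-enumerated two-variable LP; it is illustrative only, not a general solver and not the paper's formulas.

```python
def solve_lp(b1, b2):
    """maximize 3x + 2y  s.t.  x + y <= b1,  x <= b2,  x, y >= 0.
    Solved by enumerating the candidate vertices of the feasible polygon."""
    verts = [(0.0, 0.0), (b2, 0.0), (0.0, b1), (b2, max(b1 - b2, 0.0))]
    feas = [(x, y) for x, y in verts
            if x + y <= b1 + 1e-12 and x <= b2 + 1e-12]
    return max(3 * x + 2 * y for x, y in feas)

h = 1e-6
base = solve_lp(4.0, 2.0)                      # optimum at (2, 2), value 10
dual1 = (solve_lp(4.0 + h, 2.0) - base) / h    # shadow price of x + y <= 4
dual2 = (solve_lp(4.0, 2.0 + h) - base) / h    # shadow price of x <= 2
```

    Relaxing the first constraint by one unit buys two units of objective, relaxing the second buys one, matching the dual solution (2, 1) of this LP.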

  20. Sensitivity analysis in optimization and reliability problems

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es

    2008-12-15

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.

  1. Techniques for sensitivity analysis of SYVAC results

    International Nuclear Information System (INIS)

    Prust, J.O.

    1985-05-01

    Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their applicability to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends developing now a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to the input parameter values be examined. (author)
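    The Kruskal-Wallis statistic mentioned above compares rank sums across groups of sampled outputs. A minimal sketch without tie correction, on invented dose samples, is:

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic over a list of sample groups (no ties)."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank0, (_, gi) in enumerate(pooled):
        rank_sums[gi] += rank0 + 1.0           # ranks 1..n over the pooled data
    return (12.0 / (n * (n + 1))
            * sum(rs ** 2 / len(g) for rs, g in zip(rank_sums, groups))
            - 3.0 * (n + 1))

# Three fully separated groups of three: H reaches its maximum of 7.2.
groups = [[2.1, 2.4, 2.0], [3.5, 3.9, 3.1], [5.0, 4.8, 5.2]]
h = kruskal_wallis_h(groups)
```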

  2. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
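    The first of the listed smoothers, locally weighted regression, can be sketched at a single evaluation point with tricube weights and a weighted linear fit. The data set and bandwidth are illustrative; a production LOESS adds robustness iterations and evaluates on a grid of points.

```python
def loess_at(x0, xs, ys, bandwidth):
    """Locally weighted linear fit evaluated at x0 (tricube weights)."""
    w = [(1 - (abs(x - x0) / bandwidth) ** 3) ** 3
         if abs(x - x0) < bandwidth else 0.0 for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    slope = (sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
             / sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs)))
    return my + slope * (x0 - mx)

xs = [i / 10 for i in range(21)]          # 0.0 .. 2.0
ys = [x ** 2 for x in xs]                 # smooth nonlinear response
y_hat = loess_at(1.0, xs, ys, bandwidth=0.5)   # close to the true value 1.0
```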

  3. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  4. Confidence ellipses: A variation based on parametric bootstrapping applicable on Multiple Factor Analysis results for rapid graphical evaluation

    DEFF Research Database (Denmark)

    Dehlholm, Christian; Brockhoff, Per B.; Bredie, Wender L. P.

    2012-01-01

    A new way of parametric bootstrapping allows a similar construction of confidence ellipses applicable to all results from Multiple Factor Analysis obtained with the FactoMineR package in the statistical program R. With this procedure, a similar approach will be applied to Multiple Factor Analysis r...... in different studies performed on the same set of products. In addition, the graphical display of confidence ellipses eases interpretation and communication of results.
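    The parametric-bootstrap idea behind such confidence ellipses can be sketched as: resample scores from a fitted 2D normal, collect the bootstrap means, and derive the ellipse axes from the eigenvalues of their 2x2 covariance. The scores and factor axes below are hypothetical, not MFA output from FactoMineR.

```python
import math, random

def bootstrap_mean_cov(mu, sd, n=30, reps=2000, seed=7):
    """Parametric bootstrap of the mean of n draws from N(mu, diag(sd^2))."""
    rng = random.Random(seed)
    means = [(sum(rng.gauss(mu[0], sd[0]) for _ in range(n)) / n,
              sum(rng.gauss(mu[1], sd[1]) for _ in range(n)) / n)
             for _ in range(reps)]
    m0 = sum(m[0] for m in means) / reps
    m1 = sum(m[1] for m in means) / reps
    cxx = sum((m[0] - m0) ** 2 for m in means) / reps
    cyy = sum((m[1] - m1) ** 2 for m in means) / reps
    cxy = sum((m[0] - m0) * (m[1] - m1) for m in means) / reps
    return (m0, m1), (cxx, cxy, cyy)

def ellipse_axes(cxx, cxy, cyy):
    """Eigenvalues of [[cxx, cxy], [cxy, cyy]]: squared semi-axis scales."""
    tr, det = cxx + cyy, cxx * cyy - cxy ** 2
    root = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 + root, tr / 2 - root

center, (cxx, cxy, cyy) = bootstrap_mean_cov(mu=(0.5, -0.2), sd=(1.0, 0.5))
lam1, lam2 = ellipse_axes(cxx, cxy, cyy)   # near sd^2/n = 1/30 and 0.25/30
```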

  5. Correct approach to consideration of experimental resolution in parametric analysis of scaling violation in deep inelastic lepton-nucleon interaction

    International Nuclear Information System (INIS)

    Ammosov, V.V.; Usubov, Z.U.; Zhigunov, V.P.

    1990-01-01

    A problem of parametric analysis of the scaling violation in deep inelastic lepton-nucleon interactions in the framework of quantum chromodynamics (QCD) is considered. For a correct consideration of the experimental resolution we use the χ²-method, which is demonstrated by numerical experiments and by analysis of the 15-foot bubble chamber neutrino experimental data. The model parameters obtained in this approach differ noticeably from those obtained earlier. (orig.)
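    The essential point, folding the model through the detector resolution before forming χ², can be shown with a toy example. The spectrum shape, response matrix and errors below are invented; a real analysis would use the measured response and a proper minimiser instead of a grid scan.

```python
def smear(model_bins, response):
    """Fold true bin contents through the detector response matrix."""
    return [sum(response[i][j] * model_bins[j] for j in range(len(model_bins)))
            for i in range(len(response))]

def chi2(param, data, errors, response):
    model = [param * b for b in [1.0, 2.0, 3.0]]   # true spectrum = param * shape
    folded = smear(model, response)                # compare in *measured* space
    return sum(((d - f) / e) ** 2 for d, f, e in zip(data, folded, errors))

# 3-bin response matrix with 10% migration into neighbouring bins.
R = [[0.9, 0.1, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.1, 0.9]]
data = smear([2.0, 4.0, 6.0], R)       # generated with true param = 2, no noise
errors = [0.1, 0.1, 0.1]
best = min((chi2(p / 100, data, errors, R), p / 100) for p in range(100, 301))
```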

  6. Parametric cost-benefit analysis for the installation of photovoltaic parks in the island of Cyprus

    International Nuclear Information System (INIS)

    Poullikkas, Andreas

    2009-01-01

    In this work a feasibility study is carried out in order to investigate whether the installation of large photovoltaic (PV) parks in Cyprus, in the absence of a relevant feed-in tariff or other measures, is economically feasible. The study takes into account the available solar potential of the island of Cyprus as well as all available data concerning the current renewable energy sources (RES) policy of the Cyprus Government and the current RES electricity purchasing tariff of the Electricity Authority of Cyprus. In order to identify the least-cost feasible option for the installation of a 1 MW PV park, a parametric cost-benefit analysis is carried out by varying parameters such as PV park orientation, PV park capital investment, carbon dioxide emission trading system price, etc. For all the above cases the electricity unit cost or benefit before tax, as well as the after-tax cash flow, net present value, internal rate of return and payback period, are calculated. The results indicate that the capital expenditure of the PV park is a critical parameter for the viability of the project when no feed-in tariff is available. (author)
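The NPV, IRR and payback figures mentioned above follow standard discounted-cash-flow formulas. The sketch below applies them to a hypothetical 1 MW park; the capital cost, yield, tariff, O&M cost and discount rate are assumed round numbers for illustration, not the paper's data.

```python
import numpy as np

# Assumed inputs for a hypothetical 1 MW park (not the paper's data)
capex = 3_000_000.0       # EUR capital investment
energy = 1_700_000.0      # kWh produced per year
tariff = 0.12             # EUR/kWh purchase tariff
opex = 40_000.0           # EUR/year operation and maintenance
life, rate = 20, 0.06     # project life (years) and discount rate

cash = np.full(life, energy * tariff - opex)      # net annual cash flow
years = np.arange(1, life + 1)
npv = -capex + np.sum(cash / (1.0 + rate) ** years)

def irr(capex, cash, lo=-0.9, hi=1.0, tol=1e-8):
    """Internal rate of return by bisection on the sign of the NPV."""
    f = lambda r: -capex + np.sum(cash / (1.0 + r) ** np.arange(1, len(cash) + 1))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

payback = capex / cash[0]                          # simple payback, years
print(f"NPV = {npv:,.0f} EUR, IRR = {irr(capex, cash):.2%}, payback = {payback:.1f} y")
```

With these assumed numbers the NPV is negative at a 6% discount rate and the IRR falls below 1%, mirroring the abstract's point that capital expenditure is the critical parameter when no feed-in tariff is available.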

  7. Brain SPECT analysis using statistical parametric mapping in patients with transient global amnesia

    Energy Technology Data Exchange (ETDEWEB)

    Kim, E. N.; Sohn, H. S.; Kim, S. H.; Chung, S. K.; Yang, D. W. [College of Medicine, The Catholic Univ. of Korea, Seoul (Korea, Republic of)]

    2001-07-01

    This study investigated alterations in regional cerebral blood flow (rCBF) in patients with transient global amnesia (TGA) using statistical parametric mapping 99 (SPM99). Noninvasive rCBF measurements using 99mTc-ethyl cysteinate dimer (ECD) SPECT were performed on 8 patients with TGA and 17 age-matched controls. The relative rCBF maps in patients with TGA and controls were compared. In patients with TGA, significantly decreased rCBF was found along the left superior temporal region extending to the left parietal region of the brain and the left thalamus. There were areas of increased rCBF in the right temporal region, right frontal region and right thalamus. We could demonstrate decreased perfusion in the left cerebral hemisphere and increased perfusion in the right cerebral hemisphere in patients with TGA using SPM99. The reciprocal change of rCBF between the right and left cerebral hemispheres in patients with TGA might suggest that imbalanced neuronal activity between the bilateral hemispheres plays an important role in the pathogenesis of TGA. For quantitative SPECT analysis in TGA patients, we recommend SPM99 rather than the ROI method because of its definitive advantages.

  8. Detection of Cracking Levels in Brittle Rocks by Parametric Analysis of the Acoustic Emission Signals

    Science.gov (United States)

    Moradian, Zabihallah; Einstein, Herbert H.; Ballivy, Gerard

    2016-03-01

    Determination of the cracking levels during crack propagation is one of the key challenges in the field of fracture mechanics of rocks. Acoustic emission (AE) is a technique that has been used to detect cracks as they occur across the specimen. Parametric analysis of AE signals, correlating parameters such as hits and energy to the stress-strain plots of rocks, lets us detect cracking levels properly. The number of AE hits is related to the number of cracks, and the AE energy is related to the magnitude of the cracking event. For a full understanding of the fracture process in brittle rocks, prismatic specimens of granite containing pre-existing flaws were tested in uniaxial compression, and their cracking process was monitored with both AE and high-speed video imaging. In this paper, the characteristics of the AE parameters and the evolution of cracking sequences are analyzed for every cracking level. Based on micro- and macro-crack damage, a classification of cracking levels is introduced. This classification contains eight stages: (1) crack closure, (2) linear elastic deformation, (3) micro-crack initiation (white patch initiation), (4) micro-crack growth (stable crack growth), (5) micro-crack coalescence (macro-crack initiation), (6) macro-crack growth (unstable crack growth), (7) macro-crack coalescence and (8) failure.
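The idea of correlating cumulative AE activity with loading can be illustrated with a toy detector (synthetic Poisson hit counts, not the granite tests): crack initiation is flagged where the cumulative hit count first accelerates clearly beyond its quiet early-stage trend. The 45% onset, hit rates and threshold below are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)
stress = np.linspace(0.0, 100.0, 500)     # % of peak strength
# Background AE activity plus accelerating activity past ~45% of peak stress
rate = 0.2 + 0.2 * np.clip(stress - 45.0, 0.0, None)
hits = np.cumsum(rng.poisson(rate))       # cumulative AE hit count

# Background trend fitted on the quiet early stage (< 30% of peak stress)
early = stress < 30.0
slope = hits[early][-1] / stress[early][-1]
excess = hits - slope * stress            # hits above the background trend
onset = stress[np.argmax(excess > 25.0)]  # first clear acceleration
print(f"estimated micro-crack initiation near {onset:.0f}% of peak stress")
```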

  9. A parametric analysis of periodic and coupled heat and mass diffusion in desiccant wheels

    International Nuclear Information System (INIS)

    Nóbrega, Carlos E.L.

    2014-01-01

    Solid sorbents are frequently adopted for gas component separation in the chemical industry. Over the last decades, solid sorbents have also been applied for the benefit of indoor air quality and humidity control in modern building design. Adsorptive rotors have been designed for the removal of water vapor, CO and VOCs from indoor environments. Although the adsorption of water vapor by a specific adsorbent (particularly silica-gel) has been extensively studied, a non-dimensional parametric analysis of humidity adsorption on a nonspecific hygroscopic material appears to be an original contribution to the literature. Accordingly, a mathematical model using non-dimensional parameters is built from energy and mass balances applied to elementary control volumes. The periodic nature of the cyclic adsorption/desorption processes requires an iterative solution, which is carried out by comparing the temperature and mass distributions at the onset of the cycle to the distributions at its end. - Highlights: • Fully non-dimensional model of heat and mass transfer in a hygroscopic channel. • Investigation of mass and energy diffusion through the hygroscopic layer. • Analytic modeling of the heat of adsorption
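The iterative cyclic-steady-state procedure described above can be sketched on a drastically simplified heat-only rotor model (this toy marching scheme, its cell counts and transfer coefficients are assumptions, not the paper's equations): the matrix profile is advanced through a hot and a cool half-cycle, and iteration stops when the profile at the start of a cycle matches the profile at its end.

```python
import numpy as np

n, ntu, cap = 20, 0.1, 0.05        # cells, transfer units per cell, capacity ratio
Tm = np.full(n, 30.0)              # matrix temperature profile, degC

def half_cycle(Tm, T_in, steps=200):
    """March the air stream through the channel for one half-cycle."""
    Tm = Tm.copy()
    for _ in range(steps):
        Tf = T_in
        for i in range(n):
            q = ntu * (Tf - Tm[i])  # local fluid-to-matrix transfer
            Tm[i] += cap * q        # matrix responds slowly
            Tf -= q                 # fluid temperature evolves along the channel
    return Tm

for cycle in range(1000):
    Tm_start = Tm.copy()
    Tm = half_cycle(Tm, 80.0)       # hot regeneration stream
    Tm = half_cycle(Tm, 30.0)       # cool process stream
    if np.abs(Tm - Tm_start).max() < 1e-6:   # start of cycle == end of cycle
        break
print("cycles to periodic steady state:", cycle + 1)
```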

  10. Brain SPECT analysis using statistical parametric mapping in patients with transient global amnesia

    International Nuclear Information System (INIS)

    Kim, E. N.; Sohn, H. S.; Kim, S. H.; Chung, S. K.; Yang, D. W.

    2001-01-01

    This study investigated alterations in regional cerebral blood flow (rCBF) in patients with transient global amnesia (TGA) using statistical parametric mapping 99 (SPM99). Noninvasive rCBF measurements using 99mTc-ethyl cysteinate dimer (ECD) SPECT were performed on 8 patients with TGA and 17 age-matched controls. The relative rCBF maps in patients with TGA and controls were compared. In patients with TGA, significantly decreased rCBF was found along the left superior temporal region extending to the left parietal region of the brain and the left thalamus. There were areas of increased rCBF in the right temporal region, right frontal region and right thalamus. We could demonstrate decreased perfusion in the left cerebral hemisphere and increased perfusion in the right cerebral hemisphere in patients with TGA using SPM99. The reciprocal change of rCBF between the right and left cerebral hemispheres in patients with TGA might suggest that imbalanced neuronal activity between the bilateral hemispheres plays an important role in the pathogenesis of TGA. For quantitative SPECT analysis in TGA patients, we recommend SPM99 rather than the ROI method because of its definitive advantages.

  11. Simulation and Parametric Analysis of a Hybrid SOFC-Gas Turbine Power Generation System

    International Nuclear Information System (INIS)

    Hassan, A.M.; Fahmy

    2004-01-01

    Combined SOFC-gas turbine power generation systems aim to increase the power and efficiency obtained from high temperature fuel cells by integrating them with gas turbines. Hybrid systems have been considered in recent years as one of the most promising technologies for obtaining electric energy from natural gas at very high efficiency, with serious potential for commercial use. The high operating temperature allows internal reforming of natural gas, so variation in fuel composition can be tolerated. Air preheating is also performed as part of the energy integration, taking advantage of the high cell operating temperature. In this paper a modeling approach is presented for fuel cell-gas turbine hybrid power generation systems, to obtain the SOFC output voltage, power, and the overall hybrid system efficiency. The system has been simulated using HYSYS, the process simulation software, to help improve process understanding and provide a quick system solution. A parametric analysis is also presented to discuss the effect of some important SOFC operating parameters on the system performance and efficiency.
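Why the hybrid reaches "very high efficiency" can be seen in a back-of-envelope energy bookkeeping: the turbine recovers work from the fuel fraction the stack does not convert. All numbers below are assumed for illustration; the paper's model is a full HYSYS flowsheet.

```python
# All numbers assumed for illustration; the paper uses a full HYSYS flowsheet
lhv_ch4 = 50.0e6          # J/kg, lower heating value of methane
m_fuel = 0.01             # kg/s fuel fed to the SOFC
fuel_util = 0.85          # fraction of fuel converted electrochemically
eff_sofc = 0.50           # stack electric efficiency on the utilized fuel
eff_gt = 0.30             # bottoming gas turbine efficiency on the rest

q_in = m_fuel * lhv_ch4                   # fuel energy input, W
p_sofc = eff_sofc * fuel_util * q_in      # SOFC electric power
p_gt = eff_gt * (q_in - p_sofc)           # GT recovers unconverted fuel and heat
eff_hybrid = (p_sofc + p_gt) / q_in
print(f"SOFC {p_sofc/1e3:.0f} kW + GT {p_gt/1e3:.0f} kW -> {eff_hybrid:.1%} overall")
```

Even with modest assumed component efficiencies, the combined figure lands near 60%, above what either machine achieves alone.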

  12. Transit Timing Observations from Kepler: II. Confirmation of Two Multiplanet Systems via a Non-parametric Correlation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ford, Eric B.; /Florida U.; Fabrycky, Daniel C.; /Lick Observ.; Steffen, Jason H.; /Fermilab; Carter, Joshua A.; /Harvard-Smithsonian Ctr. Astrophys.; Fressin, Francois; /Harvard-Smithsonian Ctr. Astrophys.; Holman, Matthew J.; /Harvard-Smithsonian Ctr. Astrophys.; Lissauer, Jack J.; /NASA, Ames; Moorhead, Althea V.; /Florida U.; Morehead, Robert C.; /Florida U.; Ragozzine, Darin; /Harvard-Smithsonian Ctr. Astrophys.; Rowe, Jason F.; /NASA, Ames /SETI Inst., Mtn. View /San Diego State U., Astron. Dept.

    2012-01-01

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the transit timing variations of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
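The core statistical step, quantifying the significance of correlated TTVs without parametric assumptions, can be illustrated with a permutation test on two synthetic anti-correlated TTV series. The amplitudes, noise level and sampling below are invented for illustration; this is a generic permutation test, not necessarily the authors' exact statistic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
phase = np.linspace(0.0, 4.0 * np.pi, n)
# Two planets exchanging orbital energy: anti-correlated TTVs plus timing noise
ttv_inner = 5.0 * np.sin(phase) + rng.normal(0.0, 1.0, n)    # minutes
ttv_outer = -3.0 * np.sin(phase) + rng.normal(0.0, 1.0, n)   # minutes

def perm_test(a, b, n_perm=10_000):
    """Permutation p-value for |Pearson correlation| between two series."""
    obs = abs(np.corrcoef(a, b)[0, 1])
    hits = sum(abs(np.corrcoef(rng.permutation(a), b)[0, 1]) >= obs
               for _ in range(n_perm))
    return obs, (hits + 1) / (n_perm + 1)

r, p = perm_test(ttv_inner, ttv_outer)
print(f"|correlation| = {r:.2f}, permutation p = {p:.4f}")
```

Shuffling one series destroys any physical association while preserving its marginal distribution, so the p-value requires no assumption about the TTV noise.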

  13. TRANSIT TIMING OBSERVATIONS FROM KEPLER. II. CONFIRMATION OF TWO MULTIPLANET SYSTEMS VIA A NON-PARAMETRIC CORRELATION ANALYSIS

    International Nuclear Information System (INIS)

    Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David; Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A.; Welsh, William F.; Allen, Christopher; Batalha, Natalie M.; Buchhave, Lars A.

    2012-01-01

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.

  14. TRANSIT TIMING OBSERVATIONS FROM KEPLER. II. CONFIRMATION OF TWO MULTIPLANET SYSTEMS VIA A NON-PARAMETRIC CORRELATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C. [Astronomy Department, University of Florida, 211 Bryant Space Sciences Center, Gainesville, FL 32611 (United States); Fabrycky, Daniel C. [UCO/Lick Observatory, University of California, Santa Cruz, CA 95064 (United States); Steffen, Jason H. [Fermilab Center for Particle Astrophysics, P.O. Box 500, MS 127, Batavia, IL 60510 (United States); Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A. [NASA Ames Research Center, Moffett Field, CA 94035 (United States); Welsh, William F. [Astronomy Department, San Diego State University, San Diego, CA 92182-1221 (United States); Allen, Christopher [Orbital Sciences Corporation/NASA Ames Research Center, Moffett Field, CA 94035 (United States); Batalha, Natalie M. [Department of Physics and Astronomy, San Jose State University, San Jose, CA 95192 (United States); Buchhave, Lars A., E-mail: eford@astro.ufl.edu [Niels Bohr Institute, Copenhagen University, DK-2100 Copenhagen (Denmark); Collaboration: Kepler Science Team; and others

    2012-05-10

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.

  15. Dynamic Resonance Sensitivity Analysis in Wind Farms

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei

    2017-01-01

    (PFs) are calculated by critical eigenvalue sensitivity analysis versus the entries of the MIMO matrix. The PF analysis locates the bus that most excites the resonances, which can be the best location to install passive or active filters to reduce the harmonic resonance problems. Time...

  16. Stochastic sensitivity analysis of periodic attractors in non-autonomous nonlinear dynamical systems based on stroboscopic map

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Kong-Ming, E-mail: kmguo@xidian.edu.cn [School of Electromechanical Engineering, Xidian University, P.O. Box 187, Xi' an 710071 (China); Jiang, Jun, E-mail: jun.jiang@mail.xjtu.edu.cn [State Key Laboratory for Strength and Vibration, Xi' an Jiaotong University, Xi' an 710049 (China)

    2014-07-04

    To apply the stochastic sensitivity function method, which can estimate the probabilistic distribution of stochastic attractors, to non-autonomous dynamical systems, a 1/N-period stroboscopic map for a periodic motion is constructed in order to discretize the continuous cycle into a discrete one. In this way, the sensitivity analysis of a cycle for a discrete map can be utilized, and a numerical algorithm for the stochastic sensitivity analysis of periodic solutions of non-autonomous nonlinear dynamical systems under stochastic disturbances is devised. An externally excited Duffing oscillator and a parametrically excited laser system are studied as examples to show the validity of the proposed method. - Highlights: • A method to analyze the sensitivity of stochastic periodic attractors in non-autonomous dynamical systems is proposed. • The probabilistic distribution around periodic attractors in an externally excited Φ⁶ Duffing system is obtained. • The probabilistic distribution around a periodic attractor in a parametrically excited laser system is determined.
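A minimal sketch of the stroboscopic-map idea (not the stochastic sensitivity function algorithm itself): sample a noisy, periodically forced Duffing oscillator once per forcing period, and estimate the dispersion of the resulting map iterates around the deterministic periodic attractor. All parameter values below are assumptions chosen to give a simple period-one response.

```python
import numpy as np

c, F, w = 0.3, 0.3, 1.2          # damping, forcing amplitude, forcing frequency
T = 2.0 * np.pi / w
nsub = 400
dt = T / nsub

def strobe_step(x, v, t, sigma, rng):
    """Euler-Maruyama over one forcing period = one stroboscopic-map iterate."""
    for _ in range(nsub):
        a = -c * v - x - x**3 + F * np.cos(w * t)
        x, v = x + v * dt, v + a * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return x, v, t

rng = np.random.default_rng(2)
x, v, t = 0.1, 0.0, 0.0
for _ in range(200):             # relax onto the attractor without noise
    x, v, t = strobe_step(x, v, t, 0.0, rng)
x0, v0 = x, v                    # deterministic fixed point of the map

pts = []
for _ in range(500):             # noisy iterates scatter around the fixed point
    x, v, t = strobe_step(x, v, t, 0.05, rng)
    pts.append((x, v))
pts = np.array(pts)
spread = np.sqrt(np.mean((pts - [x0, v0]) ** 2, axis=0))
print("dispersion around the periodic attractor (x, v):", spread)
```

The continuous periodic orbit becomes a fixed point of the map, so the noisy cloud of stroboscopic samples plays the role that the stochastic sensitivity function quantifies analytically.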

  17. Principal component analysis of the CT density histogram to generate parametric response maps of COPD

    Science.gov (United States)

    Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.

    2015-03-01

    Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach - parametric response mapping (PRM) - utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal-component-analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration, and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, from which the components with eigenvalues greater than one were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: A significant correlation was determined between conventional and PCA-adjusted PRM with 3He MRI apparent diffusion coefficient (p<0.001), with CT RA950 (p<0.0001), as well as with 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
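The threshold-selection step can be sketched with synthetic histograms: run PCA (via SVD) on a stack of simulated CT density histograms and take the extrema of the first principal-component curve as candidate thresholds. The bin range, subject count and Gaussian density model below are all invented for illustration, not patient data.

```python
import numpy as np

rng = np.random.default_rng(3)
bins = np.linspace(-1000.0, -500.0, 101)        # HU bins (assumed range)
centers = 0.5 * (bins[:-1] + bins[1:])

def histogram_for(mu):
    """Normalized density histogram of one synthetic 'lung'."""
    voxels = rng.normal(mu, 60.0, 5000)
    h, _ = np.histogram(voxels, bins=bins)
    return h / h.sum()

# 20 subjects whose mean density shifts with (synthetic) emphysema severity
H = np.array([histogram_for(-950.0 + 200.0 * s) for s in rng.uniform(0, 1, 20)])

Hc = H - H.mean(axis=0)                          # center the histograms
U, S, Vt = np.linalg.svd(Hc, full_matrices=False)
explained = S**2 / np.sum(S**2)
pc1 = Vt[0]                                      # first principal-component curve

t_lo = centers[np.argmin(pc1)]                   # extrema -> candidate thresholds
t_hi = centers[np.argmax(pc1)]
print(f"PC1 explains {explained[0]:.0%}; thresholds near {sorted([t_lo, t_hi])} HU")
```

Because the extrema of the component curve mark the density bins that vary most across subjects, the thresholds are data-driven rather than fixed by convention.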

  18. Sensitivity functions for uncertainty analysis: Sensitivity and uncertainty analysis of reactor performance parameters

    International Nuclear Information System (INIS)

    Greenspan, E.

    1982-01-01

    This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to ''as-built'' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory

  19. Probabilistic sensitivity analysis in health economics.

    Science.gov (United States)

    Baio, Gianluca; Dawid, A Philip

    2015-12-01

    Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.
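The standard Monte Carlo form of probabilistic sensitivity analysis can be sketched in a few lines: propagate parameter uncertainty through the incremental net monetary benefit and summarize it as a cost-effectiveness acceptability curve (CEAC). The cost and effect distributions below are assumed round numbers for illustration, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
# Parameter uncertainty for a new treatment vs standard care (assumed numbers)
d_cost = rng.normal(1500.0, 400.0, n)    # incremental cost, EUR
d_qaly = rng.normal(0.10, 0.04, n)       # incremental effect, QALYs

wtp = np.linspace(0.0, 60_000.0, 121)    # willingness-to-pay grid, EUR/QALY
# CEAC: P(incremental net monetary benefit k*dE - dC > 0) at each threshold k
ceac = [(k * d_qaly - d_cost > 0).mean() for k in wtp]

icer = d_cost.mean() / d_qaly.mean()
print(f"mean ICER = {icer:,.0f} EUR/QALY")
print(f"P(cost-effective at 30,000 EUR/QALY) = {ceac[60]:.2f}")   # wtp[60] = 30,000
```

The CEAC rises with the willingness-to-pay threshold, which is the decision-theoretic summary the abstract alludes to: a probability of cost-effectiveness rather than a single point estimate.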

  20. PDASAC, Partial Differential Sensitivity Analysis of Stiff System

    International Nuclear Information System (INIS)

    Caracotsios, M.; Stewart, W.E.

    2001-01-01

    1 - Description of program or function: PDASAC solves stiff, nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Plane, circular cylindrical or spherical boundaries can be handled. Mixed-order systems of partial differential and algebraic equations can be analyzed with members of order 0 or 1 in t, and 0, 1 or 2 in x. Parametric sensitivities of the calculated states are computed simultaneously on request, via the Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the parametric sensitivities if desired. 2 - Method of solution: The method of lines is used, with a user-selected x-grid and minimum-bandwidth finite-difference approximations of the x-derivatives. Starting conditions are reconciled with a damped Newton algorithm adapted from Bain and Stewart (1991). Initial step selection is done by the first-order algorithms of Shampine (1987), extended here to differential-algebraic equation systems. The solution is continued with the DASSL predictor-corrector algorithm (Petzold 1983, Brenan et al. 1989) with the initial acceleration phase deleted and with row scaling of the Jacobian added. The predictor and corrector are expressed in divided-difference form, with the fixed-leading-coefficient form of corrector (Jackson and Sacks-Davis 1989; Brenan et al. 1989). Weights for the error tests are updated in each step with the user's tolerances at the predicted state. Sensitivity analysis is performed directly on the corrector equations of Caracotsios and Stewart (1985) and is extended here to the initialization when needed. 3 - Restrictions on the complexity of the problem: This algorithm, like DASSL, performs well on differential-algebraic equation systems of index 0 and 1 but not on higher-index systems; see Brenan et al. (1989).
The user assigned the work array lengths and the output
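The simultaneous state-plus-sensitivity integration that PDASAC performs can be illustrated on a toy problem. This sketch uses plain explicit Euler on a heat equation with a known solution, not PDASAC's DASSL-based machinery: differentiating the state equation u_t = D u_xx with respect to the parameter D gives a linear PDE for the sensitivity s = ∂u/∂D, namely s_t = u_xx + D s_xx, which is integrated alongside the state on the same method-of-lines grid.

```python
import numpy as np

# Grid and parameter: u_t = D u_xx on (0, 1), u = 0 at both walls
nx, D, t_end = 51, 0.1, 0.5
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.sin(np.pi * x)              # initial state
s = np.zeros(nx)                   # initial sensitivity s = du/dD

def lap(v):
    """Second difference with homogeneous Dirichlet boundaries."""
    out = np.zeros_like(v)
    out[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    return out

n_steps = int(np.ceil(t_end / (0.2 * dx**2 / D)))   # explicit stability margin
h = t_end / n_steps
for _ in range(n_steps):
    Lu = lap(u)
    u = u + h * D * Lu
    s = s + h * (Lu + D * lap(s))  # forward sensitivity equation s_t = u_xx + D s_xx

# Exact solution u = exp(-D pi^2 t) sin(pi x) and its D-derivative
exact_u = np.exp(-D * np.pi**2 * t_end) * np.sin(np.pi * x)
exact_s = -np.pi**2 * t_end * exact_u
print("max |u - exact| =", np.abs(u - exact_u).max())
print("max |s - exact| =", np.abs(s - exact_s).max())
```

The computed sensitivity matches the analytic ∂u/∂D closely, which is the same "differentiate the semi-discrete equations, integrate together" pattern the package applies within its implicit corrector.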

  1. Bruxism and dental implant failures: a multilevel mixed effects parametric survival analysis approach.

    Science.gov (United States)

    Chrcanovic, B R; Kisch, J; Albrektsson, T; Wennerberg, A

    2016-11-01

    Recent studies have suggested that the insertion of dental implants in patients diagnosed with bruxism negatively affects implant failure rates. The aim of the present study was to investigate the association between bruxism and the risk of dental implant failure. This retrospective study is based on 2670 patients who received 10 096 implants at one specialist clinic. Implant- and patient-related data were collected. Descriptive statistics were used to describe the patients and implants. Multilevel mixed effects parametric survival analysis was used to test the association between bruxism and risk of implant failure, adjusting for several potential confounders. Criteria from a recent international consensus (Lobbezoo et al., J Oral Rehabil, 40, 2013, 2) and from the International Classification of Sleep Disorders (International classification of sleep disorders, revised: diagnostic and coding manual, American Academy of Sleep Medicine, Chicago, 2014) were used to define and diagnose the condition. The number of implants with information available for all variables totalled 3549, placed in 994 patients, with 179 implants reported as failures. The implant failure rates were 13·0% (24/185) for bruxers and 4·6% (155/3364) for non-bruxers. Bruxism was a statistically significant risk factor for implant failure (HR 3·396; 95% CI 1·314, 8·777; P = 0·012), as were implant length, implant diameter, implant surface, bone quantity D in relation to quantity A, bone quality 4 in relation to quality 1 (Lekholm and Zarb classification), smoking and the intake of proton pump inhibitors. It is suggested that bruxism may be associated with an increased risk of dental implant failure. © 2016 John Wiley & Sons Ltd.
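The modeling idea, a parametric (Weibull) survival model with a covariate for bruxism and right-censored implant lifetimes, can be sketched on synthetic data. The sample sizes, scales and censoring below are invented, and the study's multilevel frailty structure is omitted for brevity; this is only the single-level parametric core.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Synthetic cohort: implant lifetimes ~ Weibull, scale halved for bruxers
n = 2000
brux = rng.random(n) < 0.1
true_scale = np.where(brux, 6.0, 12.0)   # years (assumed)
true_shape = 1.5
t = true_scale * rng.weibull(true_shape, n)
cens = 8.0                                # administrative censoring, years
obs = np.minimum(t, cens)
event = t <= cens

def negloglik(p):
    k = np.exp(p[0])                      # Weibull shape > 0
    lam = np.exp(p[1] + p[2] * brux)      # log-linear scale model
    z = obs / lam
    # log-density for observed failures, log-survival for censored implants
    ll = np.where(event, np.log(k / lam) + (k - 1.0) * np.log(z) - z**k, -z**k)
    return -ll.sum()

fit = minimize(negloglik, x0=[0.0, 2.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 5000})
k_hat, b1_hat = np.exp(fit.x[0]), fit.x[2]
hr = np.exp(-k_hat * b1_hat)              # AFT scale effect -> hazard ratio
print(f"shape = {k_hat:.2f}, bruxism hazard ratio = {hr:.2f}")
```

With a Weibull baseline, the accelerated-failure-time coefficient converts to a proportional-hazards ratio via HR = exp(-k·b), which is the kind of HR the abstract reports.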

  2. Thermo-mechanical parametric analysis of packed-bed thermocline energy storage tanks

    International Nuclear Information System (INIS)

    González, Ignacio; Pérez-Segarra, Carlos David; Lehmkuhl, Oriol; Torras, Santiago; Oliva, Assensi

    2016-01-01

    Highlights: • A numerical model of packed-bed thermocline thermal storage for CSP is presented. • Up-to-date commercial configurations are tested both thermally and structurally. • Promising thermal performance is obtained with a temperature difference of 100 °C. • Reliable factors of safety against material yielding and ratcheting can be obtained. • Cyclic relaxation-traction elastic wall stresses arise with plant normal operation. - Abstract: A packed-bed thermocline tank is a proven, cheaper thermal energy storage option for concentrated solar power plants compared with the commonly built two-tank system. However, its implementation has stalled, mainly due to concern over thermal ratcheting of the vessel, which would compromise its structural integrity. In order to better understand the commercial viability of the thermocline approach, regarding energetic effectiveness and structural reliability, a new numerical simulation platform has been developed. The model dynamically solves and couples all the significant components of the subsystem, and is able to evaluate its thermal and mechanical response over normal plant operation. The filler material is considered as a cohesionless bulk solid with thermal expansion. For the stresses on the tank wall the general thermoelastic theory is used. First, the numerical model is validated against the Solar One thermocline case, and then a parametric analysis is carried out by deploying this storage technology in two real plants with temperature rises of 100 °C and 275 °C. The numerical results show better storage performance for the lower temperature difference, but both options achieve suitable structural factors of safety with a proper design.

  3. Non-Parametric Kinetic (NPK) Analysis of Thermal Oxidation of Carbon Aerogels

    Directory of Open Access Journals (Sweden)

    Azadeh Seifi

    2017-05-01

    In recent years, much attention has been paid to aerogel materials (especially carbon aerogels) due to their potential uses in energy-related applications, such as thermal energy storage and thermal protection systems. These open-cell carbon-based porous materials can react strongly with oxygen at relatively low temperatures (~400°C). Therefore, it is necessary to evaluate the thermal performance of carbon aerogels in view of their energy-related applications at high temperatures and under thermal oxidation conditions. The objective of this paper is to study theoretically and experimentally the oxidation reaction kinetics of carbon aerogel using the non-parametric kinetic (NPK) method. For this purpose, a non-isothermal thermogravimetric analysis, at three different heating rates, was performed on three samples, each with its own pore structure, density and specific surface area. The most significant feature of this method, in comparison with the model-free isoconversional methods, is its ability to separate the dependence of the reaction rate on the degree of conversion from its dependence on temperature by the direct use of thermogravimetric data. Using this method, it was observed that the Nomen-Sempere model provided the best fit to the data, while the temperature dependence of the rate constant was best explained by a Vogel-Fulcher relationship in which the reference temperature was the onset temperature of oxidation. Moreover, it was found that assuming an Arrhenius relation for the temperature dependence of the rate constant led to over-estimation of the apparent activation energy (up to 160 kJ/mol), considerably different from the values (up to 3.5 kJ/mol) predicted by the Vogel-Fulcher relationship in isoconversional methods.
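The Arrhenius-versus-Vogel-Fulcher point can be reproduced on synthetic rate constants (the parameter values below are assumptions, not the paper's fitted values): data generated by a Vogel-Fulcher law k = A·exp(-B/(T - T0)) and then force-fitted with an Arrhenius law yield a hugely inflated apparent activation energy, while the VF pseudo-activation energy R·B stays small.

```python
import numpy as np

R = 8.314                        # J/(mol K)
A, B, T0 = 1.0e3, 800.0, 650.0   # assumed Vogel-Fulcher parameters (T0 ~ onset, K)
T = np.linspace(680.0, 760.0, 9)
k = A * np.exp(-B / (T - T0))    # "true" rate constants, VF law

# Force-fit an Arrhenius law: ln k = ln A' - Ea/(R T), i.e. linear in 1/T
slope, _ = np.polyfit(1.0 / T, np.log(k), 1)
Ea_arr = -slope * R / 1000.0     # apparent activation energy, kJ/mol
Ea_vf = R * B / 1000.0           # VF pseudo-activation energy R*B, kJ/mol
print(f"Arrhenius fit: Ea ~ {Ea_arr:.0f} kJ/mol; Vogel-Fulcher R*B ~ {Ea_vf:.1f} kJ/mol")
```

The two numbers differ by more than an order of magnitude, the same qualitative gap the abstract reports between the Arrhenius and Vogel-Fulcher treatments.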

  4. Parametric analysis of a dual loop Organic Rankine Cycle (ORC) system for engine waste heat recovery

    International Nuclear Information System (INIS)

    Song, Jian; Gu, Chun-wei

    2015-01-01

    Highlights: • A dual loop ORC system is designed for engine waste heat recovery. • The two loops are coupled via a shared heat exchanger. • The influence of the HT loop condensation parameters on the LT loop is evaluated. • Pinch point locations determine the thermal parameters of the LT loop. - Abstract: This paper presents a dual loop Organic Rankine Cycle (ORC) system consisting of a high temperature (HT) loop and a low temperature (LT) loop for engine waste heat recovery. The HT loop recovers the waste heat of the engine exhaust gas, and the LT loop recovers that of the jacket cooling water in addition to the residual heat of the HT loop. The two loops are coupled via a shared heat exchanger, meaning that the condenser of the HT loop is also the evaporator of the LT loop. Cyclohexane, benzene and toluene are selected as the working fluids of the HT loop. Different condensation temperatures of the HT loop are set to maintain the condensation pressure slightly above atmospheric pressure. R123, R236fa and R245fa are chosen for the LT loop. A parametric analysis is conducted to evaluate the influence of the HT loop condensation temperature and the residual heat load on the LT loop. The simulation results reveal that under different condensation conditions of the HT loop, the pinch point of the LT loop appears at different locations, resulting in different evaporation temperatures and other thermal parameters. With cyclohexane for the HT loop and R245fa for the LT loop, the maximum net power output of the dual loop ORC system reaches 111.2 kW. Since the original power output of the engine is 996 kW, the additional power generated by the dual loop ORC system can increase the engine power by 11.2%.
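The pinch-point reasoning in the abstract can be made concrete with a single-evaporator energy balance. All stream properties below are assumed round numbers, not the paper's engine data: the pinch fixes the maximum working-fluid flow, which in turn sets the preheat duty and the hot-stream outlet temperature.

```python
# Hot stream (engine exhaust) and organic working fluid, assumed round numbers
m_hot, cp_hot, T_hot_in = 2.0, 1.1, 350.0   # kg/s, kJ/(kg K), degC
T_evap, dT_pinch = 150.0, 10.0              # evaporation temp (degC), pinch (K)
T_in, cp_wf, h_fg = 40.0, 1.8, 400.0        # fluid inlet, cp, latent heat (kJ/kg)

# Heat released by the hot stream from its inlet down to the pinch temperature
Q_to_pinch = m_hot * cp_hot * (T_hot_in - (T_evap + dT_pinch))   # kW
# All evaporation happens above the pinch, so this heat caps the fluid flow
m_wf = Q_to_pinch / h_fg                    # kg/s working fluid
# Preheating is served below the pinch and sets the hot-stream outlet temperature
Q_preheat = m_wf * cp_wf * (T_evap - T_in)  # kW
T_hot_out = (T_evap + dT_pinch) - Q_preheat / (m_hot * cp_hot)
print(f"working fluid flow = {m_wf:.3f} kg/s, hot stream leaves at {T_hot_out:.1f} degC")
```

Moving the pinch (e.g. by changing T_evap, which in the dual loop is tied to the HT-loop condensation temperature) changes m_wf and hence every downstream thermal parameter, which is the coupling the parametric analysis explores.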

  5. Parametric study and performance analysis of hybrid rocket motors with double-tube configuration

    Science.gov (United States)

    Yu, Nanjia; Zhao, Bo; Lorente, Arnau Pons; Wang, Jue

    2017-03-01

    The practical implementation of hybrid rocket motors has historically been hampered by the slow regression rate of the solid fuel. In recent years, research on advanced injector designs has achieved notable results in the enhancement of the regression rate and combustion efficiency of hybrid rockets. Following this path, this work studies a new configuration called double-tube, characterized by injecting the gaseous oxidizer through a head-end injector and through an inner tube with injector holes distributed along the motor's longitudinal axis. This design has demonstrated significant potential for improving the performance of hybrid rockets by means of better mixing of the species, achieved through a customized injection of the oxidizer. Indeed, CFD analysis of the double-tube configuration has revealed that this design may increase the regression rate by more than 50% with respect to the same motor with a conventional axial showerhead injector. However, in order to fully exploit the advantages of the double-tube concept, it is necessary to acquire a deeper understanding of the influence of the different design parameters on the overall performance. To this end, a parametric study is carried out taking into account the variation of the oxidizer mass flux, the ratio of the oxidizer mass flow rate injected through the inner tube to the total oxidizer mass flow rate, and the injection angle. The data for the analysis have been gathered from a large series of three-dimensional numerical simulations covering the changes in the design parameters. The propellant combination adopted consists of gaseous oxygen as oxidizer and high-density polyethylene as solid fuel. Furthermore, the numerical model comprises the Navier-Stokes equations, the k-ε turbulence model, the eddy-dissipation combustion model and solid-fuel pyrolysis, which is computed through user-defined functions.
This numerical model was previously validated by analyzing the computational and experimental results obtained for
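
    The quoted enhancement can be put in context with the classical power-law regression-rate model r = a·G_ox^n used throughout hybrid-rocket analysis. The coefficients below are illustrative, fuel-dependent placeholders (not values from this study), and the 1.5 factor is a hypothetical stand-in for the ~50% double-tube improvement:

```python
def regression_rate(g_ox, a=2.9e-5, n=0.62):
    """Classical hybrid-fuel regression-rate law r = a * G_ox**n,
    where G_ox is the oxidizer mass flux (kg/m^2/s); a and n are
    illustrative constants, not fitted values from the paper."""
    return a * g_ox ** n

def double_tube_rate(g_ox, enhancement=1.5):
    """Regression rate assuming the ~50% enhancement reported for the
    double-tube injector relative to an axial showerhead (hypothetical
    uniform scaling of the same power law)."""
    return enhancement * regression_rate(g_ox)
```

    The power law also makes the oxidizer-mass-flux sweep of the parametric study explicit: with n < 1, doubling G_ox raises r by less than a factor of two.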

  6. DDASAC, Double-Precision Differential or Algebraic Sensitivity Analysis

    International Nuclear Information System (INIS)

    Caracotsios, M.; Stewart, W.E.; Petzold, L.

    1997-01-01

    1 - Description of program or function: DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request. 2 - Method of solution: Reconciliation of initial conditions is done with a damped Newton algorithm adapted from Bain and Stewart (1991). Initial step selection is done by the first-order algorithm of Shampine (1987), extended here to differential-algebraic equation systems. The solution is continued with the DASSL predictor-corrector algorithm (Petzold 1983, Brenan et al. 1989) with the initial acceleration phase detected and with row scaling of the Jacobian added. The backward-difference formulas for the predictor and corrector are expressed in divided-difference form, and the fixed-leading-coefficient form of the corrector (Jackson and Sacks-Davis 1980, Brenan et al. 1989) is used. Weights for error tests are updated in each step with the user's tolerances at the predicted state. Sensitivity analysis is performed directly on the corrector equations as given by Caracotsios and Stewart (1985) and is extended here to the initialization when needed. 3 - Restrictions on the complexity of the problem: This algorithm, like DASSL, performs well on differential-algebraic systems of index 0 and 1 but not on higher-index systems; see Brenan et al. (1989). The user assigns the work array lengths and the output unit. The machine number range and precision are determined at run time by a
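
    The idea of carrying parametric sensitivities along with the state integration can be sketched on a scalar test problem. This is a minimal illustration of the technique, not DDASAC's algorithm: for dy/dt = -p·y, the sensitivity s = ∂y/∂p obeys ds/dt = -y - p·s, and both are advanced together with backward Euler (a method appropriate for stiff problems):

```python
import math

def backward_euler_sensitivity(p, t_end=1.0, n=1000):
    """Integrate dy/dt = -p*y, y(0) = 1, concurrently with its
    parametric sensitivity s = dy/dp (ds/dt = -y - p*s, s(0) = 0),
    using backward Euler. At t_end the exact values are
    y = exp(-p*t_end) and s = -t_end*exp(-p*t_end)."""
    h = t_end / n
    y, s = 1.0, 0.0
    for _ in range(n):
        # Implicit update: y_new = y + h*(-p*y_new)  =>  y_new = y/(1 + h*p)
        y_new = y / (1.0 + h * p)
        # s_new = s + h*(-y_new - p*s_new)  =>  s_new = (s - h*y_new)/(1 + h*p)
        s_new = (s - h * y_new) / (1.0 + h * p)
        y, s = y_new, s_new
    return y, s
```

    Because the sensitivity equation shares the same Jacobian (-p) as the state equation, the implicit solve is reused, which is the essence of the "direct concurrent" approach described above.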

  7. Quantum and Raman Noise in a Depleted Fiber Optical Parametric Amplifier

    DEFF Research Database (Denmark)

    Friis, Søren Michael Mørk; Rottwitt, Karsten; McKinstrie, Colin J.

    2013-01-01

    The noise properties of both phase-sensitive and phase-insensitive saturated parametric amplifiers are studied using a semi-classical approach. Vacuum fluctuations as well as spontaneous Raman scattering are included in the analysis....

  8. Transfer function for a superficial layer. Parametric analysis and relationship with SM records

    International Nuclear Information System (INIS)

    Sandi, H.; Stancu, O.

    2002-01-01

    The developments presented were aimed at providing an analytical and computational support for a research project intended to examine the contribution of source mechanism and of local conditions to the features of ground motion due to Vrancea earthquakes. The project referred to is being developed jointly by the Academy of Technical Sciences of Romania, the Institute of Geodynamics, the Technical University of Civil Engineering, Bucharest, and GEOTEC, Bucharest. The modelling of the phenomenon of seismic oscillations of the ground was based on assumptions of physical and geometrical linearity. The dynamic systems considered were assumed to consist of a sequence of plane-parallel homogeneous geologic layers, accepting that the relevant physical characteristics (thickness, density, low-frequency S-wave velocity, rheological characteristic) are constant for a layer but may change from one layer to another. Alternative constitutive laws were considered (of Kelvin-Voigt, Poynting and Sorokin types). The transfer function of a geological package is determined as a product of transfer functions of the successive homogeneous layers. A first step of analysis corresponded to the consideration of a single homogeneous layer, for which full analytical solutions could be derived. A parametric analysis, aimed at determining the transfer function, was undertaken considering alternative (credible) values for the parameters characterizing the constitutive laws referred to. Considering alternative possible situations, it turned out that a strong amplification occurs (for any type of constitutive law) especially for the fundamental mode of the dynamic system, while the amplification is weaker for the upper normal modes. These results correlate well with the outcome of analysis of the spectral content of ground motion as obtained from the processing of strong motion records. The most striking fact is represented by the important modifications of the
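
    For the single homogeneous layer, the transfer function has a closed form. The sketch below uses the standard 1/|cos(k*·h)| amplification of a visco-elastic layer over rigid bedrock, with a frequency-independent damping ratio standing in for the rheological models named above; the layer thickness, shear velocity and damping are illustrative values, not the project's:

```python
import cmath
import math

def layer_amplification(f, h=50.0, vs=200.0, damping=0.05):
    """|H(f)| for a homogeneous visco-elastic layer of thickness h (m)
    and shear velocity vs (m/s) over rigid bedrock: H = 1/cos(k*·h),
    with complex shear velocity vs*sqrt(1 + 2i*damping)."""
    vs_star = vs * cmath.sqrt(1 + 2j * damping)  # complex shear velocity
    k_star = 2 * math.pi * f / vs_star           # complex wavenumber
    return 1.0 / abs(cmath.cos(k_star * h))
```

    With these values the fundamental frequency is vs/(4h) = 1 Hz; the amplification peaks there and is noticeably weaker at the next resonance near 3 Hz, matching the qualitative finding quoted above that higher modes amplify less.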

  9. Parametric statistical techniques for the comparative analysis of censored reliability data: a review

    International Nuclear Information System (INIS)

    Bohoris, George A.

    1995-01-01

    This paper summarizes part of the work carried out to date on seeking analytical solutions to the two-sample problem with censored data in the context of reliability and maintenance optimization applications. For this purpose, parametric two-sample tests for failure and censored reliability data are introduced and their applicability/effectiveness in common engineering problems is reviewed

  10. Sensitivity Analysis of Centralized Dynamic Cell Selection

    DEFF Research Database (Denmark)

    Lopez, Victor Fernandez; Alvarez, Beatriz Soret; Pedersen, Klaus I.

    2016-01-01

    and a suboptimal optimization algorithm that nearly achieves the performance of the optimal Hungarian assignment. Moreover, an exhaustive sensitivity analysis with different network and traffic configurations is carried out in order to understand what conditions are more appropriate for the use of the proposed...

  11. Sensitivity analysis in a structural reliability context

    International Nuclear Information System (INIS)

    Lemaitre, Paul

    2014-01-01

    This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that reproduces a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, quantifying the impact of the uncertainty of each input parameter on the output is of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not in the reliability scope. The aim of this thesis is to test existing sensitivity analysis methods and to propose more efficient original ones. A bibliographical review of sensitivity analysis on the one hand, and of the estimation of small failure probabilities on the other, is first presented. This review raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one makes use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input's original density and its density conditional on the subset reached. A more general and original methodology, reflecting the impact of modifications of the input densities on the failure probability, is then explored. The proposed methods are finally applied to the CWNR case, which motivates this thesis. (author)
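
    The last idea, measuring how a change in an input density moves the failure probability, can be sketched with crude Monte Carlo. Everything here is a generic illustration: the limit state, the normal inputs and the 10% standard-deviation perturbation are invented, and a real study would use the subset-simulation machinery discussed in the thesis rather than brute-force sampling:

```python
import random

def failure_probability(limit_state, sigmas, n=20000, seed=0):
    """Crude Monte Carlo estimate of P[g(X) < 0] for independent
    zero-mean normal inputs with the given standard deviations.
    The fixed seed gives common random numbers across calls."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        x = [rng.gauss(0.0, s) for s in sigmas]
        if limit_state(x) < 0:
            fails += 1
    return fails / n

def sensitivities(limit_state, sigmas):
    """Reliability-oriented sensitivity: inflate each input's standard
    deviation by 10% and record the change in failure probability."""
    base = failure_probability(limit_state, sigmas)
    out = []
    for i in range(len(sigmas)):
        pert = list(sigmas)
        pert[i] *= 1.1
        out.append(failure_probability(limit_state, pert) - base)
    return out

# Toy limit state dominated by the first input: g(x) = 2 - x0 - 0.1*x1.
sens = sensitivities(lambda x: 2.0 - x[0] - 0.1 * x[1], [1.0, 1.0])
```

    The shared seed acts as common random numbers, so the differences isolate the density modification rather than sampling noise; the dominant input produces a visibly larger shift in the failure probability.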

  12. Applications of advances in nonlinear sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Werbos, P J

    1982-01-01

    The following paper summarizes the major properties and applications of a collection of algorithms involving differentiation and optimization at minimum cost. The areas of application include the sensitivity analysis of models, new work in statistical or econometric estimation, optimization, artificial intelligence and neuron modelling.

  13. Sensitivity Analysis of a Physiochemical ...

    African Journals Online (AJOL)

    Michael Horsfall

    The numerical method of sensitivity or the principle of parsimony ... analysis is a widely applied numerical method often being used in the .... Chemical Engineering Journal 128(2-3), 85-93. Amod S ... coupled 3-PG and soil organic matter.

  14. Parametric performance analysis of a concentrated photovoltaic co-generation system equipped with a thermal storage tank

    International Nuclear Information System (INIS)

    Imtiaz Hussain, M.; Lee, Gwi Hyun

    2015-01-01

    Highlights: • Both thermal and electrical powers varied with the surface area of the collector. • Thermal stratification and total system power were increased at the critical flow rate. • Parametric analysis of the CPVC system helps determine the desired outcome. • Thermal and electrical outputs varied with the focal length of the Fresnel lens. - Abstract: This article presents a parametric study of a concentrated photovoltaic co-generation (CPVC) system with an attached thermal storage tank. The CPVC system utilized a dual-axis tracker, multiple solar energy collector (SEC) modules and a forced cooling system. Each SEC module comprised 16 triple-junction solar cells and copper tube absorbers, with 16 Fresnel lenses, one aligned with each solar cell. This study investigated the parameters that can affect the CPVC system performance, including the collector area, solar irradiation, inlet temperature and mass flow rate. The surface area of the collector and the thermal power were increased by increasing the number of SEC modules connected in series; however, the electrical power output decreased consecutively from the first to the fourth SEC module. At the measured optimal flow rate, mixing and thermal diffusion in the storage tank were decreased, and the total power generation from the CPVC system was increased. Variations in the thermal and electrical power outputs were also observed when the focal length of the Fresnel lens was changed. This parametric analysis enables the CPVC system to achieve the desired output by varying the combination of operational and geometrical parameters

  15. Assessment of left ventricular contraction by parametric analysis of main motion (PAMM): theory and application for echocardiography

    International Nuclear Information System (INIS)

    Dominguez, C Ruiz; Kachenoura, N; Cesare, A De; Delouche, A; Lim, P; Gerard, O; Herment, A; Diebold, B; Frouin, F

    2005-01-01

    The computerized study of the regional contraction of the left ventricle has undergone numerous developments, particularly in relation to echocardiography. A new method, parametric analysis of main motion (PAMM), is proposed in order to synthesize the information contained in a cine loop of images into parametric images. PAMM determines, for the intensity variation time curves (IVTC) observed in each pixel, two amplitude coefficients characterizing the continuous component and the alternating component; the alternating component is generated from a mother curve by introducing a time-shift coefficient and a scale coefficient. Two approaches, a data-driven PAMM and a model-driven PAMM (simpler and faster), are proposed. On the basis of the four coefficients, an amplitude image and an image of mean contraction time are synthesized and interpreted by a cardiologist. In all cases, both PAMM methods allow a better fit of the IVTC than the other methods of parametric imaging used in echocardiography. A preliminary database comprising 70 segments was scored and compared with the visual analysis, taken from a consensus of two expert interpreters. The levels of absolute and relative concordance are 79% and 97%. Model-driven PAMM is a promising method for the rapid detection of abnormalities in left ventricular contraction

  16. A non-parametric Data Envelopment Analysis approach for improving energy efficiency of grape production

    International Nuclear Information System (INIS)

    Khoshroo, Alireza; Mulwa, Richard; Emrouznejad, Ali; Arabi, Behrouz

    2013-01-01

    Grape is one of the world's largest fruit crops, with approximately 67.5 million tonnes produced each year, and energy is an important input to modern grape production, which depends heavily on fossil and other energy resources. Efficient use of these energies is a necessary step toward reducing environmental hazards, preventing destruction of natural resources and ensuring agricultural sustainability. Hence, identifying excessive energy use and reducing energy inputs is the main focus of this paper, with the aim of optimizing energy consumption in grape production. In this study we use a two-stage methodology to find the association of energy efficiency and performance explained by farmers' specific characteristics. In the first stage a non-parametric Data Envelopment Analysis is used to model efficiencies as an explicit function of human labor, machinery, chemicals, FYM (farmyard manure), diesel fuel, electricity and water-for-irrigation energies. In the second stage, farm-specific variables such as farmers' age, gender, level of education and agricultural experience are used in a Tobit regression framework to explain how these factors influence the efficiency of grape farming. The results of the first stage show substantial inefficiency among the grape producers in the studied area, while the second stage shows that the main difference between efficient and inefficient farmers was in the use of chemicals, diesel fuel and water for irrigation. The efficient farmers' use of chemicals such as insecticides, herbicides and fungicides was considerably lower than that of the inefficient ones. The results revealed that the more educated farmers are more energy efficient in comparison with their less educated counterparts. - Highlights: • The focus of this paper is to identify excessive use of energy and optimize energy consumption in grape production. • We measure the efficiency as a function of labor/machinery/chemicals/farmyard manure/diesel-fuel/electricity/water. • Data were obtained from 41 grape
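
    In the general multi-input case each DMU's efficiency score comes from a linear program, but the mechanics of DEA are easiest to see in the degenerate single-input, single-output case, where the score reduces to each unit's output/input ratio against the best performer. A toy sketch with made-up farm data (not the study's 41-farm dataset):

```python
def dea_efficiency_single(inputs, outputs):
    """Input-oriented CCR efficiency for one input and one output:
    each unit's output/input ratio divided by the best ratio.
    (The general multi-factor case requires solving one LP per DMU.)"""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical farms: total energy input (GJ/ha) and grape yield (t/ha).
energy = [40.0, 50.0, 60.0, 80.0]
grape_yield = [8.0, 10.0, 9.0, 10.0]
scores = dea_efficiency_single(energy, grape_yield)
```

    Farms on the frontier score 1.0; a score of 0.75 says the farm could, in principle, produce its output with 75% of its current energy input, which is the kind of excess-use signal the paper targets.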

  17. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  18. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-12-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  19. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  20. Parametric sensitivity of two axis models for turbo-generators; Sensibilidad parametrica de modelos de dos ejes para turbogeneradores

    Energy Technology Data Exchange (ETDEWEB)

    Morales Castorena, Armando

    2003-06-15

    The results of parameter sensitivity studies performed on two-axis equivalent circuits (TAECs) of synchronous machines are presented in this thesis. The circuits consist of inductive and resistive elements. Their connectivity represents the magnetic and electric coupling inside the machine as well as its energy losses. Two equivalent circuits are needed to represent the machine, one for the direct axis (d) and another for the quadrature axis (q), because it is modeled under the two-axis reaction theory of Park. Parameter values have been identified in advance using standstill frequency response (SSFR) tests. This response was calculated using a finite element model of a turbine generator. The parameter identification is achieved by applying an optimization process based on a hybrid (stochastic-deterministic) algorithm. The fitness function is defined as the square of the differences between the magnitudes and phase angles of the frequency response functions of the TAECs and of the turbogenerator. This procedure yields the TAECs that best fit the frequency response of the machine. Thus, the circuits identified are considered good models of the machine and can be applied to digital simulation of its dynamic behavior. The identified TAECs are the basis of the parameter sensitivity studies reported here. These studies consist of making very small variations to parameter values and calculating the new value of the fitness function. The ratio of the change in the fitness function to the change in parameter value is called the sensitivity of the fitness function, or simply the sensitivity function. Its magnitude indicates which parameter has a greater or lesser influence on the fitness function. If the fitness function is very sensitive to a particular parameter, then the rightness of the identified value of that parameter may be in doubt. With this information it is possible to establish the reliability of the identification process and to take corrective actions.
It is
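
    The ratio described above, change in fitness over change in a parameter, is a plain finite-difference sensitivity. A generic sketch, in which a simple quadratic stands in for the frequency-response misfit used in the thesis:

```python
def sensitivity_function(fitness, params, rel_step=1e-4):
    """Finite-difference sensitivity of a fitness function to each
    parameter: Δ(fitness)/Δ(parameter), one parameter perturbed at
    a time (forward differences)."""
    f0 = fitness(params)
    sens = []
    for i, p in enumerate(params):
        dp = rel_step * (abs(p) if p != 0.0 else 1.0)
        pert = list(params)
        pert[i] = p + dp
        sens.append((fitness(pert) - f0) / dp)
    return sens

# Stand-in fitness: a quadratic misfit with a known gradient [2, 20]
# at the point [2, 3].
grad = sensitivity_function(
    lambda p: (p[0] - 1.0) ** 2 + 10.0 * (p[1] - 2.0) ** 2,
    [2.0, 3.0])
```

    A parameter with a large sensitivity value dominates the fit, so its identified value is well constrained; a near-zero sensitivity flags a parameter whose identified value deserves the doubt mentioned above.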

  1. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated

    Directory of Open Access Journals (Sweden)

    Elżbieta Sandurska

    2016-12-01

    Full Text Available Introduction: Application of statistical software typically does not require extensive statistical knowledge, allowing even complex analyses to be performed easily. Consequently, test selection criteria and important assumptions may be easily overlooked or given insufficient consideration. In such cases, the results may well lead to wrong conclusions. Aim: To discuss issues related to assumption violations in the case of Student's t-test and one-way ANOVA, two parametric tests frequently used in the field of sports science, and to recommend solutions. Description of the state of knowledge: Student's t-test and ANOVA are parametric tests, and therefore some of the assumptions that need to be satisfied include normal distribution of the data and homogeneity of variances across groups. If the assumptions are violated, the original design of the test is impaired, and the test may be compromised, giving spurious results. A simple way to normalize the data and to stabilize the variance is to use transformations. If this approach fails, a good alternative to consider is a nonparametric test, such as the Mann-Whitney, Kruskal-Wallis or Wilcoxon signed-rank tests. Summary: Thorough verification of the assumptions of parametric tests allows for correct selection of statistical tools, which is the basis of well-grounded statistical analysis. With a few simple rules, testing patterns in data characteristic of sports science studies comes down to a straightforward procedure.
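
    When the assumptions fail and transformations do not help, the text recommends nonparametric alternatives. The Mann-Whitney U statistic, for instance, can be computed directly from pairwise comparisons; a minimal sketch (a full test would additionally need the null distribution or a normal approximation to turn U into a p-value):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x against sample y:
    the number of pairs (xi, yj) with xi > yj, counting ties as 1/2."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u
```

    A useful sanity check is the identity U(x, y) + U(y, x) = n1·n2, which holds for any two samples, ties included.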

  2. Brain SPECT in mesial temporal lobe epilepsy: comparison between visual analysis and SPM (Statistical Parametric Mapping)

    Energy Technology Data Exchange (ETDEWEB)

    Amorim, Barbara Juarez; Ramos, Celso Dario; Santos, Allan Oliveira dos; Lima, Mariana da Cunha Lopes de; Camargo, Edwaldo Eduardo; Etchebehere, Elba Cristina Sa de Camargo, E-mail: juarezbarbara@hotmail.co [State University of Campinas (UNICAMP), SP (Brazil). School of Medical Sciences. Dept. of Radiology; Min, Li Li; Cendes, Fernando [State University of Campinas (UNICAMP), SP (Brazil). School of Medical Sciences. Dept. of Neurology

    2010-04-15

    Objective: to compare the accuracy of SPM and visual analysis of brain SPECT in patients with mesial temporal lobe epilepsy (MTLE). Method: interictal and ictal SPECT studies of 22 patients with MTLE were performed. Visual analyses were performed on the interictal (VISUAL(inter)) and ictal (VISUAL(ictal/inter)) studies. SPM analysis consisted of comparing the interictal (SPM(inter)) and ictal (SPM(ictal)) SPECTs of each patient to the control group, and of comparing perfusion of the temporal lobes between ictal and interictal studies (SPM(ictal/inter)). Results: for detection of the epileptogenic focus, the sensitivities were as follows: VISUAL(inter)=68%; VISUAL(ictal/inter)=100%; SPM(inter)=45%; SPM(ictal)=64% and SPM(ictal/inter)=77%. SPM was able to detect more areas of hyperperfusion and hypoperfusion. Conclusion: SPM did not improve the sensitivity to detect the epileptogenic focus. However, SPM detected different regions of hypoperfusion and hyperperfusion and is therefore a helpful tool for better understanding the pathophysiology of seizures in MTLE. (author)

  3. Brain SPECT in mesial temporal lobe epilepsy: comparison between visual analysis and SPM (Statistical Parametric Mapping)

    International Nuclear Information System (INIS)

    Amorim, Barbara Juarez; Ramos, Celso Dario; Santos, Allan Oliveira dos; Lima, Mariana da Cunha Lopes de; Camargo, Edwaldo Eduardo; Etchebehere, Elba Cristina Sa de Camargo; Min, Li Li; Cendes, Fernando

    2010-01-01

    Objective: to compare the accuracy of SPM and visual analysis of brain SPECT in patients with mesial temporal lobe epilepsy (MTLE). Method: interictal and ictal SPECT studies of 22 patients with MTLE were performed. Visual analyses were performed on the interictal (VISUAL(inter)) and ictal (VISUAL(ictal/inter)) studies. SPM analysis consisted of comparing the interictal (SPM(inter)) and ictal (SPM(ictal)) SPECTs of each patient to the control group, and of comparing perfusion of the temporal lobes between ictal and interictal studies (SPM(ictal/inter)). Results: for detection of the epileptogenic focus, the sensitivities were as follows: VISUAL(inter)=68%; VISUAL(ictal/inter)=100%; SPM(inter)=45%; SPM(ictal)=64% and SPM(ictal/inter)=77%. SPM was able to detect more areas of hyperperfusion and hypoperfusion. Conclusion: SPM did not improve the sensitivity to detect the epileptogenic focus. However, SPM detected different regions of hypoperfusion and hyperperfusion and is therefore a helpful tool for better understanding the pathophysiology of seizures in MTLE. (author)

  4. Analysis and application of two recursive parametric estimation algorithms for an asynchronous machine

    International Nuclear Information System (INIS)

    Damek, Nawel; Kamoun, Samira

    2011-01-01

    In this communication, two recursive parametric estimation algorithms are analyzed and applied to a squirrel-cage asynchronous machine located at the research Unit of Automatic Control (UCA) at ENIS. The first algorithm, which uses the transfer-matrix mathematical model, is based on the gradient principle. The second algorithm, which uses the state-space mathematical model, is based on the minimization of the estimation error. These algorithms are applied as a key technique to estimate an asynchronous machine with unknown, but constant or time-varying, parameters. Stator voltage and current are used as measured data. The proposed recursive parametric estimation algorithms are validated on experimental data from an asynchronous machine under normal operating conditions at full load. The results show that these algorithms can effectively estimate the machine parameters with reliability.
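
    The gradient-based family of recursive estimators mentioned first can be sketched with an LMS update on a first-order ARX model, y[k] = a·y[k-1] + b·u[k-1]. This is a generic illustration of the algorithm class, not the paper's exact scheme, and the simulated data are noise-free for clarity:

```python
import random

def simulate_arx(a, b, n, seed=3):
    """Generate y[k] = a*y[k-1] + b*u[k-1] for a random input sequence."""
    rng = random.Random(seed)
    u = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    y = [0.0] * n
    for k in range(1, n):
        y[k] = a * y[k - 1] + b * u[k - 1]
    return u, y

def lms_identify(u, y, mu=0.1):
    """Recursive gradient (LMS) estimation of [a, b] from input/output
    data: theta <- theta + mu * prediction_error * regressor."""
    theta = [0.0, 0.0]
    for k in range(1, len(y)):
        phi = [y[k - 1], u[k - 1]]          # regressor vector
        err = y[k] - (theta[0] * phi[0] + theta[1] * phi[1])
        theta[0] += mu * err * phi[0]
        theta[1] += mu * err * phi[1]
    return theta

u, y = simulate_arx(0.5, 1.0, 2000)
a_hat, b_hat = lms_identify(u, y)
```

    Because the update touches only the current regressor, the estimator tracks slowly time-varying parameters as well, which is the property the abstract highlights.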

  5. Analytical Analysis on Nonlinear Parametric Vibration of an Axially Moving String with Fractional Viscoelastic Damping

    Directory of Open Access Journals (Sweden)

    Ying Li

    2017-01-01

    Full Text Available The nonlinear parametric vibration of an axially moving string made of rubber-like materials is studied in this paper. The fractional viscoelastic model is used to describe the damping of the string. Then, a new nonlinear fractional mathematical model governing the transverse motion of the string is derived based on Newton's second law, the Euler beam theory, and the Lagrangian strain. Taking into consideration the fractional calculus law of Riemann-Liouville form, the principal parametric resonance is analytically investigated by applying the direct multiscale method. Numerical results are presented to show the influences of the fractional order, the stiffness constant, the viscosity coefficient, and the axial-speed fluctuation amplitude on the steady-state responses. It is noticeable that the amplitudes and existence intervals of steady-state responses predicted by Kirchhoff's fractional material model are much larger than those predicted by Mote's fractional material model.

  6. ANALYSIS OF FUZZY QUEUES: PARAMETRIC PROGRAMMING APPROACH BASED ON RANDOMNESS - FUZZINESS CONSISTENCY PRINCIPLE

    OpenAIRE

    Dhruba Das; Hemanta K. Baruah

    2015-01-01

    In this article, based on Zadeh's extension principle, we apply the parametric programming approach to construct the membership functions of the performance measures when the interarrival time and the service time are fuzzy numbers, following Baruah's Randomness-Fuzziness Consistency Principle. The Randomness-Fuzziness Consistency Principle leads to defining a normal law of fuzziness using two different laws of randomness. In this article, two fuzzy queues FM...

  7. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
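
    The univariate idea, sweep one parameter and watch whether the optimal policy flips, can be sketched on a toy two-state MDP. The states, rewards and the 0.8 cure probability below are invented stand-ins for a treatment decision, not the paper's case study:

```python
def value_iteration(P, R, gamma=0.9, iters=500):
    """Value iteration for a small MDP. P[a][s][t] are transition
    probabilities, R[a][s] immediate rewards; returns the greedy
    (optimal) action per state."""
    n = len(next(iter(R.values())))
    V = [0.0] * n
    for _ in range(iters):
        V = [max(R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(n))
                 for a in P) for s in range(n)]
    return [max(P, key=lambda a: R[a][s] + gamma *
                sum(P[a][s][t] * V[t] for t in range(n)))
            for s in range(n)]

def toy_policy(cost):
    """States: 0 = healthy, 1 = sick. 'treat' cures with prob 0.8 at the
    given cost; 'wait' is free but leaves the sick state unchanged."""
    P = {"wait": [[1.0, 0.0], [0.0, 1.0]],
         "treat": [[1.0, 0.0], [0.8, 0.2]]}
    R = {"wait": [1.0, 0.0], "treat": [1.0, -cost]}
    return value_iteration(P, R)[1]   # optimal action in the sick state
```

    Sweeping `cost` reveals a threshold at which the recommended action in the sick state flips from "treat" to "wait"; the one-way sensitivity analysis described above asks how close the base-case parameter sits to such a threshold.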

  8. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
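
    For orientation, the first-order indices that PDD computes analytically from expansion coefficients can also be estimated by brute-force Monte Carlo with a pick-freeze scheme. The sketch below estimates them for a toy additive model with uniform inputs; it illustrates the indices themselves, not the PDD method:

```python
import random

def first_order_sobol(model, d, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for a model with d independent U(0,1) inputs: S_i is estimated as
    Cov(f(A), f(B with column i taken from A)) / Var(f(A))."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [model(x) for x in A]
    mu = sum(yA) / n
    var = sum((y - mu) ** 2 for y in yA) / n
    S = []
    for i in range(d):
        cov = 0.0
        for a, b, ya in zip(A, B, yA):
            x = list(b)
            x[i] = a[i]               # freeze coordinate i from sample A
            cov += (model(x) - mu) * (ya - mu)
        S.append(cov / n / var)
    return S

# Toy additive model Y = X1 + 0.1*X2: analytically S1 ≈ 0.990, S2 ≈ 0.010.
S = first_order_sobol(lambda x: x[0] + 0.1 * x[1], 2)
```

    The slow O(1/√n) convergence of such sampling estimators is exactly what motivates spectral approaches like PDD and polynomial chaos, which recover the indices from expansion coefficients instead.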

  9. Parametric trends analysis of the critical heat flux based on artificial neural networks

    International Nuclear Information System (INIS)

    Moon, S.K.; Baek, W.P.; Chang, S.H.

    1996-01-01

    Parametric trends of the critical heat flux (CHF) are analyzed by applying artificial neural networks (ANNs) to a CHF data base for upward flow of water in uniformly heated vertical round tubes. The analyses are performed from three viewpoints, i.e., for fixed inlet conditions, for fixed exit conditions, and based on the local conditions hypothesis. Katto's and Groeneveld et al.'s dimensionless parameters are used to train the ANNs with the experimental CHF data. The trained ANNs predict the CHF better than conventional correlations, showing RMS errors of 8.9%, 13.1% and 19.3% for fixed inlet conditions, fixed exit conditions, and the local conditions hypothesis, respectively. The parametric trends of the CHF obtained from the trained ANNs show general agreement with previous understanding. In addition, this study provides more comprehensive information and indicates interesting points regarding the effects of tube diameter, heated length, and mass flux. It is expected that a better understanding of the parametric trends is feasible with an extended data base. (orig.)
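
The RMS errors quoted in this record can be reproduced conceptually with a generic relative-RMS definition (assumed here; the paper may define its error measure slightly differently). The predicted and measured CHF values below are invented for illustration.

```python
def rms_relative_error(predicted, measured):
    """Root-mean-square of the relative prediction error, as commonly used to
    score CHF correlations (a generic definition, not necessarily the authors')."""
    terms = [((p - m) / m) ** 2 for p, m in zip(predicted, measured)]
    return (sum(terms) / len(terms)) ** 0.5

# hypothetical predicted vs. measured CHF values (kW/m^2), invented data
pred = [1020.0, 940.0, 1210.0, 880.0]
meas = [1000.0, 1000.0, 1150.0, 900.0]
err = rms_relative_error(pred, meas)
print(round(err * 100, 1), "% RMS error")
```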

  10. Cerebral blood flow and related factors in hyperthyroidism patients by SPECT imaging and statistical parametric mapping analysis

    International Nuclear Information System (INIS)

    Xiu Yan; Shi Hongcheng; Liu Wenguan; Chen Xuefen; Gu Yushen; Chen Shuguang; Yu Haojun; Yu Yiping

    2010-01-01

    Objective: To investigate the cerebral blood flow (CBF) perfusion patterns and related factors in hyperthyroidism patients. Methods: Twenty-five patients with hyperthyroidism and twenty-two healthy controls matched for age, sex and education were enrolled. 99mTc-ethylene cysteinate dimer (ECD) SPECT CBF perfusion imaging was performed at rest. Statistical parametric mapping 5.0 software (SPM5) was used with a statistical threshold of P … (FT3, FT4), thyroid autoimmune antibodies: sensitive thyroid stimulating hormone (sTSH), thyroid peroxidase antibody (TPOAb) and TSH receptor antibody (TRAb) by Pearson analysis, and with disease duration by Spearman analysis. Results: rCBF was decreased significantly in the limbic system and frontal lobe, including the parahippocampal gyrus and uncus (posterior entorhinal cortex, posterior parolfactory cortex, parahippocampal cortex, anterior cingulate, right inferior temporal gyrus), left hypothalamus and caudate nucleus (P … FT3 (r=-0.468, -0.417, both P … FT4 (r=-0.4M, -0.418, -0.415, -0.459, all P … FT4 (r=0.419, 0.412, both P<0.05). rCBF in the left insula was negatively correlated with the concentration of sTSH, and that in the right auditory associated cortex was positively correlated with the concentration of sTSH (r=-0.504, 0.429, both P<0.05). rCBF in the left middle temporal gyrus and left angular gyrus was positively correlated with the concentration of TRAb, while that in the right thalamus, right hypothalamus, left anterior nucleus and left ventralis nucleus was negatively correlated with the concentration of TRAb (r=0.750, 0.862, -0.691, -0.835, -0.713, -0.759, all P<0.05). rCBF in the right anterior cingulate, right cuneus, right rectus gyrus and right superior marginal gyrus was positively correlated with the concentration of TPOAb (r=0.696, 0.581, 0.779, 0.683, all P<0.05). rCBF in the postcentral gyrus, temporal gyrus, left superior marginal gyrus and auditory associated cortex was positively correlated with disease duration (r=0.502, 0.457, 0.524, 0.440, all P<0.05). Conclusion: Hypoperfusions in …

  11. Parametric Analysis of a Hover Test Vehicle using Advanced Test Generation and Data Analysis

    Science.gov (United States)

    Gundy-Burlet, Karen; Schumann, Johann; Menzies, Tim; Barrett, Tony

    2009-01-01

    Large complex aerospace systems are generally validated in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. This is due to the large parameter space and the complex, highly coupled nonlinear nature of the different systems that contribute to the performance of the aerospace system. We have addressed the factors deterring such an analysis by applying a combination of technologies to the area of flight envelope assessment. We utilize n-factor (2,3) combinatorial parameter variations to limit the number of cases while still exploring important interactions in the parameter space in a systematic fashion. The data generated are automatically analyzed through a combination of unsupervised learning using a Bayesian multivariate clustering technique (AutoBayes) and supervised learning of critical parameter ranges using the machine-learning tool TAR3, a treatment learner. Covariance analysis with scatter plots and likelihood contours is used to visualize correlations between simulation parameters and simulation results, a task that requires tool support, especially for large and complex models. We present results of simulation experiments for a cold-gas-powered hover test vehicle.
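
The n-factor combinatorial idea in this record can be sketched as a greedy generator of a 2-factor (pairwise) covering test set: far fewer cases than the full factorial, yet every pair of parameter levels appears in some case. The four binary parameters are invented; real flight-envelope parameters would have more factors and levels.

```python
from itertools import combinations, product

def pairwise_cases(levels):
    """Greedy generation of a test set covering all 2-factor level combinations.
    levels: one list of admissible values per parameter."""
    k = len(levels)
    uncovered = {(i, j, a, b)
                 for i, j in combinations(range(k), 2)
                 for a in levels[i] for b in levels[j]}
    cases = []
    while uncovered:
        best, best_gain = None, -1
        for cand in product(*levels):   # exhaustive scoring; fine for small spaces
            gain = sum(1 for (i, j, a, b) in uncovered
                       if cand[i] == a and cand[j] == b)
            if gain > best_gain:
                best, best_gain = cand, gain
        cases.append(best)
        uncovered -= {(i, j, a, b) for (i, j, a, b) in uncovered
                      if best[i] == a and best[j] == b}
    return cases

params = [[0, 1], [0, 1], [0, 1], [0, 1]]   # four two-level factors (invented)
suite = pairwise_cases(params)
print(len(suite), "cases vs", 2 ** 4, "in the full factorial")
```

The saving grows quickly: for many factors, pairwise suites stay near-logarithmic in size while the full factorial explodes exponentially.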

  12. Demonstration sensitivity analysis for RADTRAN III

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Reardon, P.C.

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose to combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability-consequence curves.

  13. Systemization of burnup sensitivity analysis code. 2

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2005-02-01

    Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, from the viewpoint of achieving rationally high-performance cores that improve plant efficiency as well as reliability and safety margins. A distinct improvement in the accuracy of nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because of the heavy burden placed on users by the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, because the computational sequence may change with the item being analyzed or with the purpose, such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or recombined as the occasion demands.

  14. Sensitivity analysis of an environmental model: an application of different analysis methods

    International Nuclear Information System (INIS)

    Campolongo, Francesca; Saltelli, Andrea

    1997-01-01

    A parametric sensitivity analysis (SA) was conducted on a well-known model for the production of a key sulphur-bearing compound from algal biota. The model is of interest because of the climatic relevance of the gas (dimethylsulphide, DMS), an initiator of cloud particles. A screening test at low sample size (the Morris method) is applied first, followed by a computationally intensive variance-based measure. Standardised regression coefficients are also computed. The various SA measures are compared with each other, and the use of the bootstrap is suggested to extract empirical confidence bounds on the SA estimators. For some of the input factors, the investigators' guesses about the parameters' relevance were confirmed; for others, the results shed new light on the system mechanism and on the data parametrisation.
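
The low-cost screening step mentioned in this record can be sketched as follows, in a simplified radial one-at-a-time form of the Morris elementary-effects method (the original uses randomized trajectories); the toy model and its coefficients are invented.

```python
import random

def morris_screen(f, k, r=50, delta=0.5, seed=42):
    """Simplified Morris screening: mean absolute elementary effect per input."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        # random base point, kept so that x[i] + delta stays inside [0, 1]
        x = [rng.random() * (1 - delta) for _ in range(k)]
        y0 = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            effects[i].append(abs((f(xp) - y0) / delta))
    return [sum(e) / r for e in effects]

# invented toy model: x0 strongly influential, x1 weak, x2 inert
g = lambda x: 10 * x[0] + 1 * x[1] + 0 * x[2]
mu_star = morris_screen(g, 3)
print(mu_star)
```

Factors with small mean elementary effects can then be fixed, so the expensive variance-based measures are spent only on the influential ones.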

  15. Systemization of burnup sensitivity analysis code

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2004-02-01

    Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, from the viewpoint of achieving rationally high-performance cores that improve plant efficiency as well as reliability and safety margins. A distinct improvement in the accuracy of nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor core 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, an analysis code for burnup sensitivity, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because of the heavy burden placed on users by the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functionalities in the existing large system. It is not sufficient to unify each computational component, because the computational sequence may change with the item being analyzed or with the purpose, such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or recombined as the occasion demands.

  16. Parametric optimization and range analysis of Organic Rankine Cycle for binary-cycle geothermal plant

    International Nuclear Information System (INIS)

    Wang, Xing; Liu, Xiaomin; Zhang, Chuhua

    2014-01-01

    Highlights: • Optimal level constitution of parameters for the ORC system was obtained. • The order of the system parameters' sensitivity to the performance of the ORC was revealed. • Evaporating temperature had a significant effect on the performance of the ORC system. • The superheater had little effect on the performance of the ORC system. - Abstract: In this study, a thermodynamic model of an Organic Rankine Cycle (ORC) system combined with orthogonal design is proposed. A comprehensive scoring method is adopted to obtain a single index evaluating both the thermodynamic and the economic performance. The optimal level constitution of system parameters, which improves the thermodynamic and economic performance of the ORC system, is obtained by analyzing the results of the orthogonal design. Range analysis based on the orthogonal design is used to determine the sensitivity of the system parameters to the net power output of the ORC system, the thermal efficiency, the SP factor of the radial inflow turbine, the power decrease factor of the pump and the total heat transfer capacity. The results show that the optimal level constitution of system parameters is the working fluid R245fa, a superheating temperature of 10 °C, a pinch temperature difference in the evaporator and condenser of 5 °C, an evaporating temperature of 65 °C, an isentropic efficiency of 0.75 for the pump and an isentropic efficiency of 0.85 for the radial inflow turbine. The order of the system parameters' sensitivity to the comprehensive index of the orthogonal design is: evaporating temperature > isentropic efficiency of the radial inflow turbine > working fluid > pinch temperature difference of the evaporator and condenser > isentropic efficiency of the cycle pump > superheating temperature. This study provides useful references for selecting the main controlled parameters in the optimal design of ORC systems.
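
The range analysis used in this record can be sketched on a small L4(2^3) orthogonal array: for each factor, the range is the spread between the mean responses at its levels, and a larger range means a more sensitive factor. The response values below are invented for illustration.

```python
# L4(2^3) orthogonal array: 4 runs, 3 two-level factors
array = [[0, 0, 0],
         [0, 1, 1],
         [1, 0, 1],
         [1, 1, 0]]
# hypothetical measured responses (e.g. net power output, kW) for the four runs
y = [60.0, 72.0, 65.0, 77.0]

ranges = []
for j in range(3):
    m0 = sum(yi for row, yi in zip(array, y) if row[j] == 0) / 2
    m1 = sum(yi for row, yi in zip(array, y) if row[j] == 1) / 2
    ranges.append(abs(m1 - m0))

order = sorted(range(3), key=lambda j: -ranges[j])   # most sensitive factor first
print(ranges, "most influential factor:", order[0])
```

With these invented responses, factor 1 dominates, factor 0 matters less, and factor 2 has no effect, which is the kind of sensitivity ordering the record reports for the real ORC parameters.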

  17. Mathematical model of combined parametrical analysis of indicator process and thermal loading on the Diesel engine piston

    Directory of Open Access Journals (Sweden)

    G. Lebedeva

    2004-06-01

    Full Text Available In this publication the methodological aspects of a mathematical model for the combined parametric analysis of the indicator process and the thermal loading on the diesel engine piston are considered. A thermodynamic model of the diesel engine cycle is developed. The development is intended for use in research and in the initial stages of design work. Its application to high-revolution diesel engines of the perspective type CHN15/15 made it possible to choose rational variants for the organization of the indicator process and to substantiate the power ranges of application for uncooled and newly developed oil-cooled welded pistons.

  18. Cyberphysical systems for epilepsy and related brain disorders multi-parametric monitoring and analysis for diagnosis and optimal disease management

    CERN Document Server

    Antonopoulos, Christos

    2015-01-01

    This book introduces a new cyberphysical system that combines clinical and basic neuroscience research with advanced data analysis and medical management tools for developing novel applications for the management of epilepsy. The authors describe the algorithms and architectures needed to provide ambulatory, diagnostic and long-term monitoring services through multi-parametric data collection. Readers will see how to achieve in-hospital quality standards while addressing conventional "routine" clinic-based services at reduced cost, with enhanced capability and increased geographical availability. The cyberphysical system described in this book is flexible, can be optimized for each patient, and is demonstrated in several case studies.

  19. Sensitivity analysis of critical experiment with direct perturbation compared to TSUNAMI-3D sensitivity analysis

    International Nuclear Information System (INIS)

    Barber, A. D.; Busch, R.

    2009-01-01

    The goal of this work is to obtain sensitivities from direct uncertainty analysis calculations and to correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment. Small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results of a critical experiment are only known as well as its geometric and material properties. The goal of this correlation is to simplify the uncertainty quantification process in assessing a critical experiment while still considering all of the important parameters. (authors)
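
The direct-perturbation sensitivities this record refers to can be sketched as central-difference relative sensitivity coefficients, (df/f)/(dp/p), evaluated at the nominal parameter value. The power-law response below is an invented stand-in for a real criticality calculation.

```python
def relative_sensitivity(f, p, dp_frac=0.01):
    """Central-difference relative sensitivity (df/f)/(dp/p) at nominal p."""
    dp = dp_frac * p
    return (f(p + dp) - f(p - dp)) / (2 * dp) * p / f(p)

# hypothetical response: a k_eff-like quantity with power-law parameter dependence,
# so the exact relative sensitivity is the exponent (0.3 here)
k = lambda rho: 1.0 * rho ** 0.3
sens = relative_sensitivity(k, 5.0)
print(sens)
```

Summing such coefficients in quadrature, weighted by each parameter's uncertainty, gives the kind of total experimental uncertainty the record describes.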

  20. Parametric Design Optimization Of A Novel Permanent Magnet Coupling Using Finite Element Analysis

    DEFF Research Database (Denmark)

    Högberg, Stig; Mijatovic, Nenad; Holbøll, Joachim

    2014-01-01

    A parametric design optimization routine has been applied to a novel magnetic coupling with improved recyclability. Coupling designs are modeled in a 3-D finite element environment and evaluated by three design objectives: pull-out torque, torque density by magnet mass, and torque density by total mass. Magnet and outer core thicknesses are varied discretely, whereas outer dimensions and air-gap length are kept constant. Comparative trends as a function of pole number and dimensions are depicted. A compromise exists between the design objectives, in which favoring one might reduce the other...

  1. General analysis of group velocity effects in collinear optical parametric amplifiers and generators.

    Science.gov (United States)

    Arisholm, Gunnar

    2007-05-14

    Group velocity mismatch (GVM) is a major concern in the design of optical parametric amplifiers (OPAs) and generators (OPGs) for pulses shorter than a few picoseconds. By simplifying the coupled propagation equations and exploiting their scaling properties, the number of free parameters for a collinear OPA is reduced to a level where the parameter space can be studied systematically by simulations. The resulting set of figures shows the combinations of material parameters and pulse lengths for which high performance can be achieved, and it can serve as a basis for a design.

  2. Sensitivity analysis of the Two Geometry Method

    International Nuclear Information System (INIS)

    Wichers, V.A.

    1993-09-01

    The Two Geometry Method (TGM) was designed specifically for the verification of the uranium enrichment of low-enriched UF6 gas in the presence of uranium deposits on the pipe walls. Complications can arise if the TGM is applied under extreme conditions, such as deposits larger than several times the gas activity, pipe diameters smaller than 40 mm, and pressures below 150 Pa. This report presents a comprehensive sensitivity analysis of the TGM. The impact of the various sources of uncertainty on the performance of the method is discussed. The application to a practical case is based on worst-case conditions with regard to the measurement conditions, and on realistic conditions with respect to the false alarm probability and the non-detection probability. Monte Carlo calculations were used to evaluate the sensitivity to sources of uncertainty that are experimentally inaccessible. (orig.)

  3. Parametric-based thermodynamic analysis of organic Rankine cycle as bottoming cycle for combined-cycle power plant

    International Nuclear Information System (INIS)

    Qureshi, S.; Memon, A.G.; Abbasi, A.F.

    2017-01-01

    In Pakistan, the thermal efficiency of power plants is low because a huge share of fuel energy is dumped into the atmosphere as waste heat. The ORC (Organic Rankine Cycle) has been revealed as one of the promising technologies for recovering waste heat to enhance the thermal efficiency of a power plant. In the current work, an ORC is proposed as a second bottoming cycle for an existing CCPP (Combined Cycle Power Plant). In order to assess the efficiency of the plant, a thermodynamic model is developed in the EES (Engineering Equation Solver) software. The developed model is used for parametric analysis to assess the effects of various operating parameters on the system performance. The analysis of the results shows that the integration of the ORC system with the existing CCPP system enhances the overall power output in the range of 150.5-154.58 MW, with a 0.24-5% enhancement in efficiency depending on the operating conditions. The parametric analysis of the ORC shows that the turbine inlet pressure has a more significant effect on the performance of the system than the other operating parameters. (author)

  4. PARAMETRIC DRAWINGS VS. AUTOLISP

    Directory of Open Access Journals (Sweden)

    PRUNĂ Liviu

    2015-06-01

    Full Text Available In this paper the authors make a critical analysis of the advantages offered by parametric drawing in comparison with AutoLISP programs used for parametric design. By studying and analysing these two working models, the authors have arrived at ideas and conclusions that should be considered when deciding whether to develop software in the AutoLISP language or to establish the basic rules that a drawing must follow in order to construct outlines or blocks that can be used in the design process.

  5. PARAMETRIC DRAWINGS VS. AUTOLISP

    OpenAIRE

    PRUNĂ Liviu; SLONOVSCHI Andrei

    2015-01-01

    In this paper the authors make a critical analysis of the advantages offered by parametric drawing in comparison with AutoLISP programs used for parametric design. By studying and analysing these two working models, the authors have arrived at ideas and conclusions that should be considered when deciding whether to develop software in the AutoLISP language or to establish the basic rules that must be followed...

  6. Robust non-parametric one-sample tests for the analysis of recurrent events.

    Science.gov (United States)

    Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia

    2010-12-30

    One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population. Copyright © 2010 John Wiley & Sons, Ltd.
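
The standardized distance between observed and expected event counts that underlies such one-sample tests can be sketched in a simplified, unweighted form (a Poisson reference rate with a normal approximation; the robust and weighted versions described in the record are more involved). The counts below are invented.

```python
from statistics import NormalDist

def one_sample_event_test(observed, person_time, reference_rate):
    """Normal-approximation test of observed vs expected recurrent-event counts
    under a Poisson reference rate (a simplified, unweighted sketch)."""
    expected = reference_rate * person_time
    u = (observed - expected) / expected ** 0.5   # standardized distance
    p = 2 * (1 - NormalDist().cdf(abs(u)))        # two-sided p-value
    return u, p

# invented example: 30 infections over 100 person-years vs a reference
# rate of 0.2 events per person-year (expected count 20)
u, p = one_sample_event_test(observed=30, person_time=100.0, reference_rate=0.2)
print(u, p)
```

A significant positive statistic would indicate more recurrent events than the standard population predicts, which is the safety signal such tests are designed to detect.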

  7. Nonlinear dynamic analysis of cantilevered piezoelectric energy harvesters under simultaneous parametric and external excitations

    Science.gov (United States)

    Fang, Fei; Xia, Guanghui; Wang, Jianguo

    2018-02-01

    The nonlinear dynamics of cantilevered piezoelectric beams is investigated under simultaneous parametric and external excitations. The beam is composed of a substrate and two piezoelectric layers and assumed as an Euler-Bernoulli model with inextensible deformation. A nonlinear distributed parameter model of cantilevered piezoelectric energy harvesters is proposed using the generalized Hamilton's principle. The proposed model includes geometric and inertia nonlinearity, but neglects the material nonlinearity. Using the Galerkin decomposition method and harmonic balance method, analytical expressions of the frequency-response curves are presented when the first bending mode of the beam plays a dominant role. Using these expressions, we investigate the effects of the damping, load resistance, electromechanical coupling, and excitation amplitude on the frequency-response curves. We also study the difference between the nonlinear lumped-parameter and distributed-parameter model for predicting the performance of the energy harvesting system. Only in the case of parametric excitation, we demonstrate that the energy harvesting system has an initiation excitation threshold below which no energy can be harvested. We also illustrate that the damping and load resistance affect the initiation excitation threshold.

  8. Value at risk (VaR) in uncertainty: Analysis with parametric method and Black & Scholes simulations

    Directory of Open Access Journals (Sweden)

    Humberto Banda Ortiz

    2014-07-01

    Full Text Available VaR is the most accepted risk measure worldwide and the leading reference in any risk management assessment. However, its methodology has important limitations that make it unreliable in contexts of crisis or high uncertainty. For this reason, the aim of this work is to test the accuracy of VaR when it is employed in contexts of volatility, for which we compare VaR outcomes in scenarios of both stability and uncertainty, using the parametric method and a historical simulation based on data generated with the Black & Scholes model. VaR's main objective is the prediction of the highest expected loss for a given portfolio, but even though it is considered a useful tool for risk management under conditions of market stability, we found that it is substantially inaccurate in contexts of crisis or high uncertainty. In addition, we found that the Black & Scholes simulations lead to underestimating the expected losses in comparison with the parametric method, and that those disparities increase substantially in times of crisis. In the first section of this work we present a brief context of risk management in finance. In Section II we present the existing literature on the VaR concept, its methods and applications. In Section III we describe the methodology and assumptions used in this work. Section IV is dedicated to presenting the findings. Finally, in Section V we present our conclusions.
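
The parametric (delta-normal) VaR method discussed in this record can be sketched in a few lines; the return series and portfolio value below are invented, and the normality assumption built into the formula is exactly the limitation the record criticizes for crisis periods.

```python
from statistics import NormalDist, mean, stdev

def parametric_var(returns, value, alpha=0.95):
    """Delta-normal (parametric) one-day VaR, assuming i.i.d. normal returns.
    Returns the loss as a positive number."""
    mu, sigma = mean(returns), stdev(returns)
    z = NormalDist().inv_cdf(1 - alpha)   # about -1.645 at the 95% level
    return -(mu + z * sigma) * value

# hypothetical daily returns of a portfolio worth 1,000,000 (invented data)
rets = [0.001, -0.004, 0.002, -0.001, 0.003, -0.002, 0.0, 0.004, -0.003, 0.001]
var_95 = parametric_var(rets, 1_000_000)
print(round(var_95, 2))
```

In turbulent periods the empirical return distribution grows fat tails, so this normal-quantile estimate understates the losses actually observed, consistent with the record's findings.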

  9. Parametric fMRI analysis of visual encoding in the human medial temporal lobe.

    Science.gov (United States)

    Rombouts, S A; Scheltens, P; Machielson, W C; Barkhof, F; Hoogenraad, F G; Veltman, D J; Valk, J; Witter, M P

    1999-01-01

    A number of functional brain imaging studies indicate that the medial temporal lobe system is crucially involved in encoding new information into memory. However, most studies were based on differences in brain activity between encoding of familiar vs. novel stimuli. To further study the underlying cognitive processes, we applied a parametric design of encoding. Seven healthy subjects were instructed to encode complex color pictures into memory. Stimuli were presented in a parametric fashion at different rates, thus representing different loads of encoding. Functional magnetic resonance imaging (fMRI) was used to assess changes in brain activation. To determine the number of pictures successfully stored into memory, recognition scores were determined afterwards. During encoding, brain activation occurred in the medial temporal lobe, comparable to the results obtained by others. Increasing the encoding load resulted in an increase in the number of successfully stored items. This was reflected in a significant increase in brain activation in the left lingual gyrus, in the left and right parahippocampal gyrus, and in the right inferior frontal gyrus. This study shows that fMRI can detect changes in brain activation during variation of one aspect of higher cognitive tasks. Further, it strongly supports the notion that the human medial temporal lobe is involved in encoding novel visual information into memory.

  10. Parametric instability analysis of truncated conical shells using the Haar wavelet method

    Science.gov (United States)

    Dai, Qiyi; Cao, Qingjie

    2018-05-01

    In this paper, the Haar wavelet method is employed to analyze the parametric instability of truncated conical shells under static and time-dependent periodic axial loads. The present work is based on the Love first-approximation theory for classical thin shells. The displacement field is expressed as a Haar wavelet series in the axial direction and trigonometric functions in the circumferential direction. The partial differential equations are then reduced to a system of coupled Mathieu-type ordinary differential equations describing the dynamic instability behavior of the shell. Using Bolotin's method, the first-order and second-order approximations of the principal instability regions are determined. The correctness of the present method is examined by comparing the results with those in the literature, and very good agreement is observed. The difference between the first-order and second-order approximations of the principal instability regions for tensile and compressive loads is also investigated. Finally, numerical results are presented to bring out the influence of various parameters, such as static load factors, boundary conditions and shell geometrical characteristics, on the domains of parametric instability of conical shells.

  11. Change of diffusion anisotropy in patients with acute cerebral infarction using statistical parametric analysis

    International Nuclear Information System (INIS)

    Morita, Naomi; Harada, Masafumi; Uno, Masaaki; Furutani, Kaori; Nishitani, Hiromu

    2006-01-01

    We conducted statistical parametric comparison of fractional anisotropy (FA) images and quantified FA values to determine whether significant change occurs in the ischemic region. The subjects were 20 patients seen within 24 h after onset of ischemia. For statistical comparison of FA images, a sample FA image was coordinated by the Talairach template, and each FA map was normalized. Statistical comparison was conducted using statistical parametric mapping (SPM) 99. Regions of interest were set in the same region on apparent diffusion coefficient (ADC) and FA maps, the region being consistent with the hyperintense region on diffusion-weighted images (DWIs). The contralateral region was also measured to obtain asymmetry ratios of ADC and FA. Regions with areas of statistical significance on FA images were found only in the white matter of three patients, although the regions were smaller than hyperintense regions on DWIs. The mean ADC and FA ratios were 0.64±0.16 and 0.93±0.09, respectively, and the degree of FA change was less than that of the ADC change. Significant change in diffusion anisotropy was limited to the severely infarcted core of the white matter. We believe statistical comparison of FA maps to be useful for detecting different regions of diffusion anisotropy. (author)

  12. Parametric instabilities in advanced gravitational wave detectors

    International Nuclear Information System (INIS)

    Gras, S; Zhao, C; Blair, D G; Ju, L

    2010-01-01

    As the LIGO interferometric gravitational wave detectors have finished gathering a large observational data set, an intense effort is underway to upgrade these observatories to improve their sensitivity by a factor of ∼10. High circulating power in the arm cavities is required, which leads to the possibility of parametric instability due to three-mode opto-acoustic resonant interactions between the carrier, transverse optical modes and acoustic modes. Here, we present detailed numerical analysis of parametric instability in a configuration that is similar to Advanced LIGO. After examining parametric instability for a single three-mode interaction in detail, we examine instability for the best and worst cases, as determined by the resonance condition of transverse modes in the power and signal recycling cavities. We find that, in the best case, the dual recycling detector is substantially less susceptible to instability than a single cavity, but its susceptibility is dependent on the signal recycling cavity design, and on tuning for narrow band operation. In all cases considered, the interferometer will experience parametric instability at full power operation, but the gain varies from 3 to 1000, and the number of unstable modes varies between 7 and 30 per test mass. The analysis focuses on understanding the detector complexity in relation to opto-acoustic interactions, on providing insights that can enable predictions of the detector response to transient disturbances, and of variations in thermal compensation conditions.

  13. Development of a sensitivity and uncertainty analysis tool in R for parametrization of the APEX model

    Science.gov (United States)

    Hydrologic models are used to simulate the responses of agricultural systems to different inputs and management strategies to identify alternative management practices to cope up with future climate and/or geophysical changes. The Agricultural Policy/Environmental eXtender (APEX) is a model develope...

  14. A STABILITY AND SENSITIVITY ANALYSIS OF PARAMETRIC FUNCTIONS IN A SEDIMENTATION MODEL

    OpenAIRE

    CARLOS D. ACOSTA; RAIMUND BÜRGER; CARLOS E. MEJIA

    2014-01-01

    This article is devoted to the reliable and efficient numerical identification of the parameters that define the flux function and the diffusion coefficient in a strongly degenerate parabolic partial differential equation, which is the basis of a mathematical model for sedimentation-consolidation processes. For this equation, the initial-boundary value problem (IBVP) with zero flux describes the settling of a suspension in a column. The parame...

  15. Sensitivity analysis of reactive ecological dynamics.

    Science.gov (United States)

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
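
    The reactivity index discussed above has a compact matrix characterization: for a continuous-time linearization dx/dt = Ax, it is the largest eigenvalue of the Hermitian part (A + Aᵀ)/2. A minimal sketch with a hypothetical 2×2 community matrix (values are illustrative, not taken from the paper):

```python
import numpy as np

# Reactivity of dx/dt = A x is the largest eigenvalue of the Hermitian part
# H = (A + A^T)/2: if it is positive, some perturbations grow transiently
# even though A itself is asymptotically stable.
A = np.array([[-1.0, 5.0],
              [0.0, -2.0]])        # hypothetical community matrix

eigs_A = np.linalg.eigvals(A)      # both real parts negative: stable equilibrium
H = (A + A.T) / 2.0
reactivity = float(np.linalg.eigvalsh(H).max())

assert np.all(eigs_A.real < 0)     # asymptotically stable...
assert reactivity > 0              # ...yet reactive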

  16. Global sensitivity analysis using polynomial chaos expansions

    International Nuclear Information System (INIS)

    Sudret, Bruno

    2008-01-01

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.
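
    The key computational point, reading Sobol' indices off analytically from PCE coefficients, can be illustrated with a toy expansion: for an orthonormal basis, partial variances are simply sums of squared coefficients. The coefficients below are hypothetical, not from the paper:

```python
# For an orthonormal PCE basis, the output variance is the sum of squared
# coefficients (mean term excluded) and Sobol' indices are ratios of those
# squares. Hypothetical expansion Y = a0 + a1*P(x1) + a2*P(x2) + a12*P(x1)P(x2):
a1, a2, a12 = 3.0, 1.0, 0.5

var_total = a1**2 + a2**2 + a12**2   # = 10.25
S1 = a1**2 / var_total               # first-order index of x1
S2 = a2**2 / var_total               # first-order index of x2
S12 = a12**2 / var_total             # interaction index
ST1 = (a1**2 + a12**2) / var_total   # total index of x1

assert abs(S1 + S2 + S12 - 1.0) < 1e-12
```

    No extra model runs are needed once the coefficients are known, which is exactly why the cost reduces to that of estimating the PCE.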

  17. Global sensitivity analysis using polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Sudret, Bruno [Electricite de France, R and D Division, Site des Renardieres, F 77818 Moret-sur-Loing Cedex (France)], E-mail: bruno.sudret@edf.fr

    2008-07-15

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.

  18. Contributions to sensitivity analysis and generalized discriminant analysis

    International Nuclear Information System (INIS)

    Jacques, J.

    2005-12-01

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how the outputs of the model react to variations of its inputs. Variance-based methods quantify the share of the variance of the model response that is due to each input variable and to each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices lose their interpretability in the presence of input correlation, we propose a multidimensional approach that expresses the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods in a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)

  19. Simple Sensitivity Analysis for Orion GNC

    Science.gov (United States)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of EFT-1 driving factors that the tool found.

  20. Sensitivity analysis of floating offshore wind farms

    International Nuclear Information System (INIS)

    Castro-Santos, Laura; Diaz-Casas, Vicente

    2015-01-01

    Highlights: • Develop a sensitivity analysis of a floating offshore wind farm. • Influence on the life-cycle costs involved in a floating offshore wind farm. • Influence on IRR, NPV, pay-back period, LCOE and cost of power. • Important variables: distance, wind resource, electric tariff, etc. • It helps investors to make decisions in the future. - Abstract: The future of offshore wind energy will be in deep waters. In this context, the main objective of the present paper is to develop a sensitivity analysis of a floating offshore wind farm. It will show how much the output variables can vary when the input variables change. For this purpose two different scenarios will be taken into account: the life-cycle costs involved in a floating offshore wind farm (cost of conception and definition, cost of design and development, cost of manufacturing, cost of installation, cost of exploitation and cost of dismantling) and the most important economic indexes in terms of economic feasibility of a floating offshore wind farm (internal rate of return, net present value, discounted pay-back period, levelized cost of energy and cost of power). Results indicate that the most important variables in economic terms are the number of wind turbines and the distance from farm to shore in the costs’ scenario, and the wind scale parameter and the electric tariff for the economic indexes. This study will help investors to take these variables into account in the development of floating offshore wind farms in the future.

  1. Energy and parametric analysis of solar absorption cooling systems in various Moroccan climates

    Directory of Open Access Journals (Sweden)

    Y. Agrouaz

    2017-03-01

    Full Text Available The aim of this work is to investigate the energetic performance of a solar cooling system using absorption technology under Moroccan climate. The solar fraction and the coefficient of performance of the solar cooling system were evaluated for various climatic conditions. It is found that the system operating in Errachidia shows the best average annual solar fraction (of 30%) and COP (of 0.33) owing to the high solar capabilities of this region. Solar fraction values in other regions varied between 19% and 23%. Moreover, the coefficient of performance values show in the same regions a significant variation from 0.12 to 0.33 over the year. A detailed parametric study was also carried out to evidence the effect of the operating and design parameters on the solar air conditioner performance.
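
    As a minimal sketch of the two indicators, with round energy values chosen only to reproduce the reported Errachidia figures (the numbers themselves are hypothetical):

```python
# Solar fraction SF = solar heat delivered / total heat demand of the generator;
# COP = cooling delivered / heat driving the absorption chiller.
# Illustrative annual energy totals (hypothetical, not measured data):
q_solar = 30.0     # kWh supplied by the solar collectors
q_aux = 70.0       # kWh supplied by the auxiliary heater
q_cooling = 33.0   # kWh of cooling delivered by the chiller

solar_fraction = q_solar / (q_solar + q_aux)      # -> 0.30
cop = q_cooling / (q_solar + q_aux)               # -> 0.33

assert abs(solar_fraction - 0.30) < 1e-12
assert abs(cop - 0.33) < 1e-12
```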

  2. Parametric study of unconstrained high-pressure torsion- Finite element analysis

    International Nuclear Information System (INIS)

    Halloumi, A; Busquet, M; Descartes, S

    2014-01-01

    High-pressure torsion (HPT) experiments have been investigated numerically. An axisymmetric model with twist was developed with commercial finite element software (Abaqus) to study locally the specificity of the stress and strain history within the transformed layers produced during HPT processing. The material's local behaviour law in the plastic domain was modelled. A parametric study highlights the role of the imposed parameters (friction coefficient at the anvil/sample interfaces, imposed pressure) on the stress/strain distribution in the sample bulk for two materials: ultra-high purity iron and steel grade R260. The present modelling provides a tool to investigate and analyse the effect of pressure and friction on the local stress and strain history during the HPT process and to couple with experimental results.

  3. ANALYSIS OF FUZZY QUEUES: PARAMETRIC PROGRAMMING APPROACH BASED ON RANDOMNESS - FUZZINESS CONSISTENCY PRINCIPLE

    Directory of Open Access Journals (Sweden)

    Dhruba Das

    2015-04-01

    Full Text Available In this article, based on Zadeh’s extension principle, we have applied the parametric programming approach to construct the membership functions of the performance measures when the interarrival time and the service time are fuzzy numbers, based on Baruah’s Randomness-Fuzziness Consistency Principle. The Randomness-Fuzziness Consistency Principle leads to defining a normal law of fuzziness using two different laws of randomness. In this article, two fuzzy queues, FM/M/1 and M/FM/1, have been studied and the membership functions of their system characteristics constructed based on the aforesaid principle. The former represents a queue with fuzzy exponential arrivals and exponential service rate while the latter represents a queue with exponential arrival rate and fuzzy exponential service rate.
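
    The parametric-programming idea can be sketched with α-cuts: each α-cut of a fuzzy input is an interval, and evaluating the crisp queueing formula over that interval yields the corresponding cut of the output's membership function. A minimal illustration for the utilization ρ = λ/μ of an FM/M/1-style queue with a hypothetical triangular fuzzy arrival rate (2, 3, 4) and crisp service rate μ = 5 (not the article's actual numbers):

```python
# Alpha-cuts of a triangular fuzzy number (2, 3, 4): [2 + a, 4 - a].
# Because rho = lam / mu is monotone in lam, interval endpoints map to
# endpoints, which is the essence of the parametric programming approach.
mu = 5.0

def lam_cut(alpha):
    """Alpha-cut interval of the fuzzy arrival rate (2, 3, 4)."""
    return 2.0 + alpha, 4.0 - alpha

for alpha in (0.0, 0.5, 1.0):
    lo, hi = lam_cut(alpha)
    rho_lo, rho_hi = lo / mu, hi / mu   # alpha-cut of the fuzzy utilization
    assert rho_lo <= rho_hi < 1.0       # queue remains stable at every cut
```

    At α = 1 the cut collapses to the crisp core λ = 3, giving ρ = 0.6; lower cuts give progressively wider intervals, tracing out the membership function.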

  4. Analysis of Parametric Effects on Efficiency of the Brown Stock Washer in Paper Industry Using MATLAB

    Science.gov (United States)

    Kumar, Deepak; Kumar, Vivek; Singh, V. P.

    2009-07-01

    In the present paper, the effects of cake thickness and time on the efficiency of the brown stock washer of a paper mill are studied using a mathematical model of pulp washing for the species of sodium and lignin ions. The mechanism of the diffusion-dispersion washing of the bed of pulp fibers is mathematically modeled by the basic material balance, and an adsorption isotherm is used to describe the equilibrium between the concentration of the solute in the liquor and the concentration of the solute on the fibers. To study the parametric effect, numerical solutions of the axial domain of the system governed by partial differential equations (transport and isotherm equations) for different boundary conditions are obtained by the "pdepe" solver in MATLAB source code. The effects of both parameters are shown by three-dimensional graphical representation as well as concentration profiles.
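
    A method-of-lines sketch of such a transport equation can clarify the setup. This is a bare advection-dispersion balance without the paper's adsorption isotherm, and all parameter values are illustrative, not the paper's "pdepe" configuration:

```python
import numpy as np

# 1-D advection-dispersion washing equation  c_t = D c_xx - v c_x
# solved with explicit finite differences on the cake thickness.
D, v = 1e-3, 0.01          # dispersion coefficient, superficial velocity (illustrative)
L, nx = 0.02, 51           # cake thickness [m], number of grid points
dx = L / (nx - 1)
dt = 0.2 * dx**2 / D       # time step well inside the diffusive stability limit
c = np.ones(nx)            # initial solute concentration in the cake (normalized)

for _ in range(2000):
    cxx = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    cx = (c - np.roll(c, 1)) / dx        # upwind difference for convection
    c = c + dt * (D * cxx - v * cx)
    c[0] = 0.0                           # clean wash liquor enters at x = 0
    c[-1] = c[-2]                        # zero-gradient outlet

washing_efficiency = 1.0 - float(c.mean())  # fraction of solute displaced
assert 0.0 < washing_efficiency < 1.0
```

    MATLAB's `pdepe` discretizes in space the same way and hands the resulting ODE system to a stiff integrator; the explicit loop above trades that robustness for self-containment.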

  5. SFM-FDTD analysis of triangular-lattice AAA structure: Parametric study of the TEM mode

    Science.gov (United States)

    Hamidi, M.; Chemrouk, C.; Belkhir, A.; Kebci, Z.; Ndao, A.; Lamrous, O.; Baida, F. I.

    2014-05-01

    This theoretical work reports a parametric study of enhanced transmission through an annular aperture array (AAA) structure arranged in a triangular lattice. The effect of the incidence angle, in addition to the inner and outer radii values, on the evolution of the transmission spectra is carried out. To this end, a 3D Finite-Difference Time-Domain code based on the Split Field Method (SFM) is used to calculate the spectral response of the structure for any angle of incidence. In order to work with an orthogonal unit cell, which has the advantage of reducing computation time and memory, special periodic boundary conditions are implemented. This study provides a new modeling of AAA structures useful for producing tunable ultra-compact devices.

  6. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    NARCIS (Netherlands)

    Degeling, Koen; Ijzerman, Maarten J.; Koopman, Miriam; Koffijberg, Hendrik

    2017-01-01

    Background: Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive

  7. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    NARCIS (Netherlands)

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-01-01

    Background Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive

  8. Parametric analysis of the growth of colloidal ZnO nanoparticles synthesized in alcoholic medium

    International Nuclear Information System (INIS)

    Fonseca, A. S.; Figueira, P. A.; Pereira, A. S.; Santos, R. J.; Trindade, T.; Nunes, M. I.

    2017-01-01

    The growth kinetics of nanosized ZnO was studied considering the influence of different parameters (mixing degree, temperature, alcohol chain length, reactant concentration and Zn/OH ratios) on the synthesis reaction and modelling the outputs using typical kinetic growth models, which were then evaluated by means of a sensitivity analysis. The Zn/OH ratio, the temperature and the alcohol chain length were found to be essential parameters to control the growth of ZnO nanoparticles, whereas zinc acetate concentration (for Zn/OH = 0.625) and the stirring during the ageing stage were shown to not have significant influence on the particle size growth. This last operational parameter was investigated for the first time for nanoparticles synthesized in 1-pentanol, and it is of utmost importance for the implementation of continuous industrial processes for mass production of nanosized ZnO and energy savings in the process. Concerning the nanoparticle growth modelling, the results show a different pattern from the more commonly accepted diffusion-limited Ostwald ripening process, i.e. the Lifshitz–Slyozov–Wagner (LSW) model. Indeed, this study shows that oriented attachment occurs during the early stages whereas for the later stages the particle growth is well represented by the LSW model. This conclusion contributes to clarify some controversy found in the literature regarding the kinetic model which better represents the ZnO NPs’ growth in alcoholic medium.
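
    The LSW (diffusion-limited Ostwald ripening) law predicts linear growth of the cubed radius, r³ = r₀³ + kt, so a linear fit of r³ against ageing time is the standard check for the late-stage regime. A sketch on synthetic data generated from the law itself (not measurements from the paper):

```python
import numpy as np

# LSW coarsening: r^3 = r0^3 + k*t, so r^3 vs t should be a straight line.
t = np.array([0.0, 10.0, 20.0, 40.0, 80.0])   # ageing time, min (hypothetical)
r = (4.0**3 + 0.5 * t) ** (1.0 / 3.0)          # radii, nm, generated from the law

k_fit, r0_cubed = np.polyfit(t, r**3, 1)       # slope k and intercept r0^3

assert abs(k_fit - 0.5) < 1e-9                  # recovers the rate constant
assert abs(r0_cubed - 64.0) < 1e-9              # recovers the initial radius cubed
```

    On real data, systematic curvature in this plot at early times is one signature of a different mechanism such as oriented attachment, as the study reports.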

  9. Parametric analysis of the growth of colloidal ZnO nanoparticles synthesized in alcoholic medium

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, A. S. [National Research Centre for the Working Environment (Denmark); Figueira, P. A.; Pereira, A. S. [Universidade de Aveiro, Departamento de Química—CICECO (Portugal); Santos, R. J. [Universidade do Porto, Laboratory of Separation and Reaction Engineering-Laboratory of Catalysis and Materials (LSRE-LCM), Faculdade de Engenharia (Portugal); Trindade, T. [Universidade de Aveiro, Departamento de Química—CICECO (Portugal); Nunes, M. I., E-mail: isanunes@ua.pt [Universidade de Aveiro, Centre for Environmental and Marine Studies (CESAM), Dep. de Ambiente e Ordenamento (Portugal)

    2017-02-15

    The growth kinetics of nanosized ZnO was studied considering the influence of different parameters (mixing degree, temperature, alcohol chain length, reactant concentration and Zn/OH ratios) on the synthesis reaction and modelling the outputs using typical kinetic growth models, which were then evaluated by means of a sensitivity analysis. The Zn/OH ratio, the temperature and the alcohol chain length were found to be essential parameters to control the growth of ZnO nanoparticles, whereas zinc acetate concentration (for Zn/OH = 0.625) and the stirring during the ageing stage were shown to not have significant influence on the particle size growth. This last operational parameter was investigated for the first time for nanoparticles synthesized in 1-pentanol, and it is of utmost importance for the implementation of continuous industrial processes for mass production of nanosized ZnO and energy savings in the process. Concerning the nanoparticle growth modelling, the results show a different pattern from the more commonly accepted diffusion-limited Ostwald ripening process, i.e. the Lifshitz–Slyozov–Wagner (LSW) model. Indeed, this study shows that oriented attachment occurs during the early stages whereas for the later stages the particle growth is well represented by the LSW model. This conclusion contributes to clarify some controversy found in the literature regarding the kinetic model which better represents the ZnO NPs’ growth in alcoholic medium.

  10. Exploratory market structure analysis. Topology-sensitive methodology.

    OpenAIRE

    Mazanec, Josef

    1999-01-01

    Given the recent abundance of brand choice data from scanner panels, market researchers have neglected the measurement and analysis of perceptions. Heterogeneity of perceptions is still a largely unexplored issue in market structure and segmentation studies. Over the last decade various parametric approaches toward modelling segmented perception-preference structures, such as combined MDS and Latent Class procedures, have been introduced. These methods, however, are not tailored for qualitative ...

  11. Sensitivity analysis of a modified energy model

    International Nuclear Information System (INIS)

    Suganthi, L.; Jagadeesan, T.R.

    1997-01-01

    Sensitivity analysis is carried out to validate model formulation. A modified model has been developed to predict the future energy requirement of coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model as compared to a conventional time series model indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. However, in the case of the modified model the percentage variation in error is greatly reduced - for coal it ranges from -0.25% to +0.15%, for oil -0.6% to +0.6% and for electricity it ranges from -10% to +10%. The upper and lower limit consumption levels at 95% confidence is determined. The consumption at varying percentage changes in price and population are analysed. The gap between the modified model predictions at varying percentage changes in price and population over the years from 1990 to 2001 is found to be increasing. This is because of the increasing rate of energy consumption over the years and also the confidence level decreases as the projection is made far into the future. (author)

  12. Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...

  13. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
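
    A local sensitivity coefficient of the kind discussed here is simply a normalized derivative, usually estimated with a small finite-difference perturbation. A sketch on a toy Michaelis-Menten rate (a generic illustration, not a model from the review):

```python
# Normalized local sensitivity S_i = (p_i / y) * dy/dp_i, estimated by a
# small relative perturbation of each parameter in turn.
def rate(vmax, km, s=2.0):
    """Michaelis-Menten rate v = vmax * s / (km + s) at substrate level s."""
    return vmax * s / (km + s)

p0 = {"vmax": 1.0, "km": 0.5}
y0 = rate(**p0)

def local_sensitivity(name, rel_step=1e-6):
    p = dict(p0)
    h = p[name] * rel_step
    p[name] += h
    return (rate(**p) - y0) / h * (p0[name] / y0)

s_vmax = local_sensitivity("vmax")   # ~ +1.0: output scales linearly with vmax
s_km = local_sensitivity("km")       # ~ -0.2: raising Km lowers the rate

assert abs(s_vmax - 1.0) < 1e-4
assert s_km < 0
```

    Global approaches differ precisely in that they vary all parameters over their full ranges instead of perturbing around a single nominal point.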

  14. A new importance measure for sensitivity analysis

    International Nuclear Information System (INIS)

    Liu, Qiao; Homma, Toshimitsu

    2010-01-01

    Uncertainty is an integral part of risk assessment of complex engineering systems, such as nuclear power plants and spacecraft. The aim of sensitivity analysis is to identify the contribution of the uncertainty in model inputs to the uncertainty in the model output. In this study, a new importance measure that characterizes the influence of the entire input distribution on the entire output distribution was proposed. It represents the expected deviation of the cumulative distribution function (CDF) of the model output that would be obtained if one input parameter of interest were known. The applicability of this importance measure was tested with two models, a nonlinear nonmonotonic mathematical model and a risk model. In addition, a comparison of this new importance measure with several other importance measures was carried out and the differences between these measures were explained. (author)
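
    The measure compares the unconditional output CDF with the CDF obtained when one input is fixed, averaged over that input's distribution. A brute-force Monte Carlo sketch on a toy linear model (not one of the paper's test models; the estimator details are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    """Toy model: x1 dominates the output, x2 barely matters."""
    return x1 + 0.1 * x2

n = 20000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = np.sort(model(x1, x2))            # unconditional output sample

def cdf_deviation(fix_first, n_outer=50):
    """Mean |F_Y - F_{Y|Xi}| over a grid, averaged over the fixed input."""
    devs = []
    for xv in rng.standard_normal(n_outer):
        yc = np.sort(model(xv, x2) if fix_first else model(x1, xv))
        grid = np.linspace(y[0], y[-1], 200)
        F = np.searchsorted(y, grid) / n     # unconditional empirical CDF
        Fc = np.searchsorted(yc, grid) / n   # conditional empirical CDF
        devs.append(np.mean(np.abs(F - Fc)))
    return float(np.mean(devs))

d_x1 = cdf_deviation(True)    # fixing the influential input shifts the CDF a lot
d_x2 = cdf_deviation(False)   # fixing the weak input barely moves it
assert d_x1 > d_x2
```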

  15. DEA Sensitivity Analysis for Parallel Production Systems

    Directory of Open Access Journals (Sweden)

    J. Gerami

    2011-06-01

    Full Text Available In this paper, we introduce systems consisting of several production units, each of which includes several subunits working in parallel. Meanwhile, each subunit works independently. The input and output of each production unit are the sums of the inputs and outputs of its subunits, respectively. We consider each of these subunits as an independent decision making unit (DMU) and create the production possibility set (PPS) produced by these DMUs, in which the frontier points are considered as efficient DMUs. Then we introduce models for obtaining the efficiency of the production subunits. Using super-efficiency models, we categorize all efficient subunits into different efficiency classes. We then present the sensitivity analysis and stability problem for efficient subunits, including extreme efficient and non-extreme efficient subunits, assuming simultaneous perturbations in all inputs and outputs of the subunits such that the efficiency of the subunit under evaluation declines while the efficiencies of the other subunits improve.
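
    In the single-input, single-output special case, DEA efficiency reduces to each subunit's output-per-input ratio divided by the best observed ratio, which makes the frontier idea easy to see. The data below are hypothetical; the general multi-input case requires solving a linear program per DMU:

```python
# Ratio-form DEA for one input and one output: the frontier subunit has the
# highest output/input ratio and efficiency 1; all others are scored against it.
inputs = [2.0, 4.0, 3.0, 5.0]    # hypothetical subunit inputs
outputs = [4.0, 6.0, 9.0, 5.0]   # hypothetical subunit outputs

ratios = [o / i for i, o in zip(inputs, outputs)]   # output per unit input
best = max(ratios)
efficiency = [r / best for r in ratios]

assert max(efficiency) == 1.0                # at least one frontier subunit
assert all(0.0 < e <= 1.0 for e in efficiency)
```

    Sensitivity analysis then asks how much the inputs and outputs can be perturbed before an efficient subunit drops off this frontier.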

  16. Sensitivity of SBLOCA analysis to model nodalization

    International Nuclear Information System (INIS)

    Lee, C.; Ito, T.; Abramson, P.B.

    1983-01-01

    The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In analysis of Small Break Loss of Coolant Accidents with RELAP5, it is found that resultant transient behavior is quite sensitive to the selection of nodalization for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, it is found that the predicted distribution of inventory around the primary is significantly affected by nodalization. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery

  17. Subset simulation for structural reliability sensitivity analysis

    International Nuclear Information System (INIS)

    Song Shufang; Lu Zhenzhou; Qiao Hongwei

    2009-01-01

    Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. On the basis of reliability analysis of Subset simulation (Subsim), the RS of the failure probability with respect to the distribution parameter of the basic variable is transformed into a set of RS of conditional failure probabilities with respect to the distribution parameter of the basic variable. By use of the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae of the RS estimator, its variance and its coefficient of variation are derived in detail. The results of the illustrations show high efficiency and high precision of the presented algorithms, and they are suitable for highly nonlinear limit state equations and structural systems with single and multiple failure modes.
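
    A minimal subset-simulation sketch in one dimension, where the failure probability P(X > b) for X ~ N(0, 1) is known exactly, shows the underlying mechanics: intermediate thresholds hold each conditional level near p0, and conditional samples are grown with a Metropolis chain. All settings are illustrative; this is not the paper's multi-dimensional formulation or its sensitivity estimators:

```python
import numpy as np

rng = np.random.default_rng(1)

def subset_sim(b=3.5, n=2000, p0=0.1):
    """Estimate P(X > b), X ~ N(0,1), as a product of conditional probabilities."""
    x = rng.standard_normal(n)
    prob = 1.0
    while True:
        level = np.sort(x)[::-1][int(p0 * n)]   # threshold keeping ~p0 of samples
        if level >= b:
            return prob * float(np.mean(x > b)) # final level estimated directly
        prob *= p0
        seeds = x[x > level]                    # exact samples from X | X > level
        per_seed = int(np.ceil(n / len(seeds)))
        chains = []
        for s in seeds:
            cur = s
            for _ in range(per_seed):
                cand = cur + rng.normal()
                # Metropolis acceptance for the N(0,1) target, restricted to > level
                if cand > level and rng.random() < np.exp((cur**2 - cand**2) / 2):
                    cur = cand
                chains.append(cur)
        x = np.array(chains[:n])

est = subset_sim()
exact = 2.33e-4            # 1 - Phi(3.5), for reference
assert 0.1 * exact < est < 10 * exact
```

    Crude Monte Carlo would need millions of samples to see such a rare event; subset simulation reaches it with a few thousand per level.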

  18. Generation of broadly tunable picosecond mid-infrared laser and sensitive detection of a mid-infrared signal by parametric frequency up-conversion in MgO:LiNbO3 optical parametric amplifiers

    International Nuclear Information System (INIS)

    Zhang Qiu-Lin; Zhang Jing; Qiu Kang-Sheng; Zhang Dong-Xiang; Feng Bao-Hua; Zhang Jing-Yuan

    2012-01-01

    Picosecond optical parametric generation and amplification in the near-infrared region within 1.361–1.656 μm and the mid-infrared region within 2.976–4.875 μm is constructed on the basis of bulk MgO:LiNbO3 crystals pumped at 1.064 μm. The maximum pulse energy reaches 1.3 mJ at 1.464 μm and 0.47 mJ at 3.894 μm, corresponding to a pump-to-idler photon conversion efficiency of 25%. By seeding the hard-to-measure mid-infrared radiation as the idler in the optical parametric amplification and measuring the amplified and frequency up-converted signal in the near-infrared or even visible region, one can measure very weak mid-infrared radiation at very high gain using ordinary detectors that are insensitive to mid-infrared light. A maximum gain factor of about 7 × 10⁷ is achieved at the mid-infrared wavelength of 3.374 μm and the corresponding energy detection limit is as low as about 390 aJ per pulse. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  19. Comprehensive analysis and parametric optimization of a CCP (combined cooling and power) system driven by geothermal source

    International Nuclear Information System (INIS)

    Zhao, Yajing; Wang, Jiangfeng; Cao, Liyan; Wang, Yu

    2016-01-01

    A CCP (combined cooling and power) system, which integrated a flash-binary power generation system with a bottom combined cooling and power subsystem operating through the combination of an organic Rankine cycle and an ejector refrigeration cycle, was developed to utilize geothermal energy. Thermodynamic and exergoeconomic analyses were performed on the system. A performance indicator, namely the average levelized cost per unit of exergy products for the overall system, was developed to assess the exergoeconomic performance of the system. The effects of four key parameters including flash pressure, pinch point temperature difference in the vapor generator, inlet pressure and back pressure of the ORC turbine on the system performance were evaluated through a parametric analysis. Two single-objective optimizations were conducted to reach the maximum exergy efficiency and the minimum average levelized cost per unit of exergy products for the overall system, respectively. The optimization results implied that the most exergoeconomically effective system could not obtain the best system thermodynamic performance and vice versa. An exergy analysis based on the thermodynamic optimization result revealed that the biggest exergy destruction occurred in the vapor generator, and the next two largest exergy destructions were caused by the steam turbine and the flashing device, respectively. - Highlights: • A CCP (combined cooling and power) system driven by geothermal source is developed. • Levelized costs per unit of exergy product is used as the exergoeconomic indicator. • Parametric analyses are performed from thermodynamic and exergoeconomic viewpoints. • The optimal exergoeconomic design cannot obtain the best thermodynamic performance. • Exergy analysis is carried out based on the thermodynamic optimization result.

  20. Multi-Parametric MRI and Texture Analysis to Visualize Spatial Histologic Heterogeneity and Tumor Extent in Glioblastoma.

    Directory of Open Access Journals (Sweden)

    Leland S Hu

    Full Text Available Genetic profiling represents the future of neuro-oncology but suffers from inadequate biopsies in heterogeneous tumors like Glioblastoma (GBM). Contrast-enhanced MRI (CE-MRI) targets the enhancing core (ENH) but yields adequate tumor in only ~60% of cases. Further, CE-MRI poorly localizes infiltrative tumor within surrounding non-enhancing parenchyma, or brain-around-tumor (BAT), despite the importance of characterizing this tumor segment, which universally recurs. In this study, we use multiple texture analysis and machine learning (ML) algorithms to analyze multi-parametric MRI, and produce new images indicating tumor-rich targets in GBM. We recruited primary GBM patients undergoing image-guided biopsies and acquired pre-operative MRI: CE-MRI, Dynamic-Susceptibility-weighted-Contrast-enhanced-MRI, and Diffusion Tensor Imaging. Following image coregistration and region of interest placement at biopsy locations, we compared MRI metrics and regional texture with histologic diagnoses of high- vs low-tumor content (≥80% vs <80% tumor nuclei) for corresponding samples. In a training set, we used three texture analysis algorithms and three ML methods to identify MRI-texture features that optimized model accuracy to distinguish tumor content. We confirmed model accuracy in a separate validation set. We collected 82 biopsies from 18 GBMs throughout ENH and BAT. The MRI-based model achieved 85% cross-validated accuracy to diagnose high- vs low-tumor content in the training set (60 biopsies, 11 patients). The model achieved 81.8% accuracy in the validation set (22 biopsies, 7 patients). Multi-parametric MRI and texture analysis can help characterize and visualize GBM's spatial histologic heterogeneity to identify regional tumor-rich biopsy targets.

  1. The heavy-duty vehicle future in the United States: A parametric analysis of technology and policy tradeoffs

    International Nuclear Information System (INIS)

    Askin, Amanda C.; Barter, Garrett E.; West, Todd H.; Manley, Dawn K.

    2015-01-01

    We present a parametric analysis of factors that can influence advanced fuel and technology deployments in U.S. Class 7–8 trucks through 2050. The analysis focuses on the competition between traditional diesel trucks, natural gas vehicles (NGVs), and ultra-efficient powertrains. Underlying the study is a vehicle choice and stock model of the U.S. heavy-duty vehicle market. The model is segmented by vehicle class, body type, powertrain, fleet size, and operational type. We find that conventional diesel trucks will dominate the market through 2050, but NGVs could have significant market penetration depending on key technological and economic uncertainties. Compressed natural gas trucks conducting urban trips in fleets that can support private infrastructure are economically viable now and will continue to gain market share. Ultra-efficient diesel trucks, exemplified by the U.S. Department of Energy's SuperTruck program, are the preferred alternative in the long haul segment, but could compete with liquefied natural gas (LNG) trucks if the fuel price differential between LNG and diesel increases. However, the greatest impact in reducing petroleum consumption and pollutant emissions comes from investing in efficiency technologies that benefit all powertrains, especially the conventional diesels that comprise the majority of the stock, instead of incentivizing specific alternatives. -- Highlights: •We present a parametric analysis of factors influencing U.S. Class 7–8 trucks through 2050. •Conventional diesels will be more than 70% of U.S. heavy-duty vehicles through 2050. •CNG trucks are well suited to large, urban fleets with private refueling. •Ultra-efficient long haul diesel trucks are preferred over LNG at current fuel prices.

  2. Parametric analysis of a combined dew point evaporative-vapour compression based air conditioning system

    Directory of Open Access Journals (Sweden)

    Shailendra Singh Chauhan

    2016-09-01

    Full Text Available A dew point evaporative-vapour compression based combined air conditioning system for providing good human comfort conditions at a low cost is proposed in this paper. The proposed system has been parametrically analysed for a wide range of ambient temperatures and specific humidities under some reasonable assumptions. The proposed system has also been compared with a conventional vapour compression air conditioner on the basis of the cooling load on the cooling coil, under a 100% fresh-air assumption. The saving in cooling load on the coil was found to be maximum, with a value of 60.93%, at 46 °C and 6 g/kg specific humidity, while it was negative for very high ambient humidity, which indicates that the proposed system is applicable for dry and moderately humid conditions but not for very humid conditions. The system works well, with an average net monthly power saving of 192.31 kW h for hot and dry conditions and 124.38 kW h for hot and moderately humid conditions. Therefore, it could be a better alternative for dry and moderately humid climates, with a payback period of 7.2 years.

  3. Brain SPECT analysis using statistical parametric mapping in patients with posttraumatic stress disorder

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Euy Neyng; Sohn, Hyung Sun; Kim, Sung Hoon; Chung, Soo Kyo; Yang, Dong Won [College of Medicine, The Catholic Univ. of Korea, Seoul (Korea, Republic of)

    2001-07-01

    This study investigated alterations in regional cerebral blood flow (rCBF) in patients with posttraumatic stress disorder (PTSD) using statistical parametric mapping (SPM99). Noninvasive rCBF measurements using {sup 99m}Tc-ethyl cysteinate dimer (ECD) SPECT were performed on 23 patients with PTSD and 21 age-matched normal controls, without re-exposure to accident-related stimuli. The relative rCBF maps of patients with PTSD and controls were compared. In patients with PTSD, significantly increased rCBF was found along the limbic system of the brain. There were a few foci of decreased rCBF in the superior frontal gyrus and the parietal and temporal regions. PTSD is associated with increased rCBF in limbic areas compared with age-matched normal controls. These findings suggest that regions of the limbic brain, which may mediate the response to aversive stimuli in healthy individuals, play an important role in patients suffering from PTSD, and that an ongoing hyperfunction of an 'overlearned survival response', or flashback response, persists in these regions after painful, life-threatening, or horrifying events even without re-exposure to the same traumatic stimulus.

  4. Brain SPECT analysis using statistical parametric mapping in patients with posttraumatic stress disorder

    International Nuclear Information System (INIS)

    Kim, Euy Neyng; Sohn, Hyung Sun; Kim, Sung Hoon; Chung, Soo Kyo; Yang, Dong Won

    2001-01-01

    This study investigated alterations in regional cerebral blood flow (rCBF) in patients with posttraumatic stress disorder (PTSD) using statistical parametric mapping (SPM99). Noninvasive rCBF measurements using 99mTc-ethyl cysteinate dimer (ECD) SPECT were performed on 23 patients with PTSD and 21 age-matched normal controls, without re-exposure to accident-related stimuli. The relative rCBF maps of patients with PTSD and controls were compared. In patients with PTSD, significantly increased rCBF was found along the limbic system of the brain. There were a few foci of decreased rCBF in the superior frontal gyrus and the parietal and temporal regions. PTSD is associated with increased rCBF in limbic areas compared with age-matched normal controls. These findings suggest that regions of the limbic brain, which may mediate the response to aversive stimuli in healthy individuals, play an important role in patients suffering from PTSD, and that an ongoing hyperfunction of an 'overlearned survival response', or flashback response, persists in these regions after painful, life-threatening, or horrifying events even without re-exposure to the same traumatic stimulus.

  5. Parametric analysis of parameters for electrical-load forecasting using artificial neural networks

    Science.gov (United States)

    Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael

    1997-04-01

    Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
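
The parametric input study described above can be sketched with a much simpler stand-in for the neural network. The block below (all numbers invented for illustration) fits a linear model by stochastic gradient descent on synthetic hourly-load data, with and without a day-of-week input, and compares holdout error; an input whose inclusion lowers the error "shows forecasting improvement" in the sense used above.

```python
import random

def make_data(n, rng):
    """Synthetic hourly loads: temperature-driven plus a weekday effect."""
    rows = []
    for _ in range(n):
        temp = rng.uniform(-5, 35)
        dow = rng.randrange(7)                     # 0..6, 0 = Monday
        load = 50 + 1.5 * temp + (10 if dow < 5 else 0) + rng.gauss(0, 1)
        rows.append((temp, dow, load))
    return rows

def features(temp, dow, use_dow):
    # basic reference inputs (bias + scaled temperature), optionally + weekday flag
    return [1.0, temp / 35.0, (1.0 if dow < 5 else 0.0) if use_dow else 0.0]

def fit(rows, use_dow, lr=0.01, epochs=800):
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for temp, dow, load in rows:
            x = features(temp, dow, use_dow)
            err = sum(wi * xi for wi, xi in zip(w, x)) - load
            for i in range(3):
                w[i] -= lr * err * x[i]            # per-sample gradient step
    return w

def mse(rows, w, use_dow):
    return sum((sum(wi * xi for wi, xi in zip(w, features(t, d, use_dow))) - y) ** 2
               for t, d, y in rows) / len(rows)

rng = random.Random(0)
train, holdout = make_data(400, rng), make_data(200, rng)
mse_base = mse(holdout, fit(train, False), False)
mse_dow = mse(holdout, fit(train, True), True)
print(f"holdout MSE, base inputs: {mse_base:.2f}; with day-of-week: {mse_dow:.2f}")
```

In this toy setup the day-of-week flag removes the weekday/weekend variance that the temperature-only model cannot explain, mirroring the improvement the study reports for that input.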

  6. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty

  7. Global sensitivity analysis in wind energy assessment

    Science.gov (United States)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of importance of input variables (first order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are a part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices. The results of the present
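
The first order indices mentioned above can be estimated by Monte Carlo sampling. The sketch below is an illustration rather than the paper's procedure: it estimates first-order Sobol' indices for a toy two-input model using plain pseudo-random sampling (the PRS strategy); for Y = X1 + 2·X2 with independent uniform inputs the analytical indices are 0.2 and 0.8.

```python
import random

def model(x):
    # toy stand-in for the lifetime-energy-production model
    return x[0] + 2.0 * x[1]

def first_order_sobol(f, dim, n=10000, seed=42):
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA, fB = [f(a) for a in A], [f(b) for b in B]
    mean = sum(fA + fB) / (2 * n)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * n - 1)
    S = []
    for i in range(dim):
        # A with column i taken from B: shares only input i with B
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        fABi = [f(x) for x in ABi]
        Vi = sum(fb * (fab - fa) for fa, fb, fab in zip(fA, fB, fABi)) / n
        S.append(Vi / var)
    return S

S = first_order_sobol(model, dim=2)
print("first-order indices:", [round(s, 3) for s in S])  # analytical: 0.2 and 0.8
```

Replacing the pseudo-random draws with Sobol' sequences or LHS, as the study does, typically reduces the sample size needed for a given accuracy.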

  8. Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis

    Science.gov (United States)

    Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi

    To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to decide the frontier intentionally, while traditional DEA and Super-DEA decide the frontier computationally. The features of FA-DEA are as follows: (1) it provides chances to exclude extra-influential DMUs (Decision Making Units) and finds extra-ordinal DMUs, and (2) it includes the function of traditional DEA and Super-DEA so that it is able to deal with sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and its difference from traditional DEA.

  9. Sensitivity analysis of Smith's AMRV model

    International Nuclear Information System (INIS)

    Ho, Chih-Hsiang

    1995-01-01

    Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited through the prior distribution, π(p), based on the geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge, and the model using the prior Beta(8, 2) yields the highest hazard (= 2.97 × 10⁻²). The minimum hazard is produced by the 'three-expert prior' (i.e., values of p are equally likely at 10⁻³, 10⁻², and 10⁻¹). The estimate of the hazard is 1.39 ×, which is only about one order of magnitude smaller than the maximum value. The term 'hazard' is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years.
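
Smith's AMRV model itself is not reproduced here, but the core step, averaging over the prior π(p), can be illustrated. The sketch below numerically integrates p·π(p) for several of the Beta priors listed above, showing how the choice of prior shifts the prior-expected disruption probability; consistent with the abstract, Beta(8, 2) is the most pessimistic of this set.

```python
import math

def beta_pdf(p, r, s):
    # Beta(r, s) density via the gamma-function normalization
    coef = math.gamma(r + s) / (math.gamma(r) * math.gamma(s))
    return coef * p ** (r - 1) * (1 - p) ** (s - 1)

def prior_expectation(r, s, steps=20000):
    # E[p] = ∫ p·π(p) dp by the midpoint rule; the paper averages the
    # hazard over π(p) in the same way
    h = 1.0 / steps
    return sum((k + 0.5) * h * beta_pdf((k + 0.5) * h, r, s)
               for k in range(steps)) * h

for r, s in [(1, 1), (2, 2), (2, 8), (8, 2)]:
    print(f"Beta({r},{s}): E[p] = {prior_expectation(r, s):.3f}")
```

The numerical result matches the closed form E[p] = r/(r+s), so Beta(8, 2) concentrates mass near p = 0.8 while Beta(2, 8) concentrates it near 0.2, which is why the convenience priors and expert priors can give hazard estimates an order of magnitude apart.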

  10. Parametric Study Of Window Frame Geometry

    DEFF Research Database (Denmark)

    Zajas, Jan Jakub; Heiselberg, Per

    2013-01-01

    This paper describes a parametric study on window frame geometry with the goal of designing frames with very good thermal properties. Three different parametric frame models are introduced, described by a number of variables. In the first part of the study, a sensitivity analysis is conducted to determine which of the parameters describing the frame have the highest impact on its thermal performance. Afterwards, an optimization process is conducted on each frame in order to optimize the design with regard to three objectives: minimizing the thermal transmittance, maximizing the net energy gain factor and minimizing the material use. Since the objectives contradict each other, it was found that it is not possible to identify a single solution that satisfies all these goals. Instead, a compromise between the objectives has to be found.
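
When objectives contradict each other as described above, the optimization yields a set of non-dominated compromises rather than a single winner. A minimal sketch of such a Pareto filter, with invented frame designs scored on (thermal transmittance, negated net energy gain, material use), all to be minimized:

```python
def dominates(a, b):
    # a dominates b if it is no worse on every objective and strictly better on one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

# hypothetical frames: (U-value in W/(m2 K), -net energy gain, material use)
designs = [(1.2, -40.0, 3.0),
           (1.0, -35.0, 3.5),
           (1.5, -45.0, 2.0),
           (1.3, -30.0, 4.0)]   # worse than the first design on all three
front = pareto_front(designs)
print(front)
```

The last design is dominated and drops out; the remaining three each win on a different objective, which is exactly the compromise situation the study reports.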

  11. Parametric analysis of an irreversible proton exchange membrane fuel cell/absorption refrigerator hybrid system

    International Nuclear Information System (INIS)

    Yang, Puqing; Zhang, Houcheng

    2015-01-01

    A hybrid system mainly consisting of a PEMFC (proton exchange membrane fuel cell) and an absorption refrigerator is proposed, where the PEMFC directly converts the chemical energy contained in the hydrogen into electrical and thermal energy, and the thermal energy is transferred to drive the bottoming absorption refrigerator for cooling purposes. By considering the existing irreversible losses in the hybrid system, the operating current density region of the PEMFC that permits the absorption refrigerator to exert its function is determined, and analytical expressions for the equivalent power output and efficiency of the hybrid system under different operating conditions are specified. Numerical calculations show that the equivalent maximum power density and the corresponding efficiency of the hybrid system can be increased by 5.3% and 6.8%, respectively, compared to those of the stand-alone PEMFC. Comprehensive parametric analyses are conducted to reveal the effects of the internal irreversibility of the absorption refrigerator, the operating current density, operating temperature and operating pressure of the PEMFC, and some integrated parameters related to the thermodynamic losses on the performance of the hybrid system. The model presented in the paper is more general than previous studies, and the results for some special cases can be directly derived from it. - Highlights: • A CHP system composed of a PEMFC and an absorption refrigerator is proposed. • The current density region that enables the absorption refrigerator to work is determined. • Multiple irreversible losses in the system are analytically characterized. • Maximum power density and corresponding efficiency can be increased by 5.3% and 6.8%. • Effects of some design and operating parameters on the performance are discussed.

  12. Global analysis and parametric dependencies for potential unintended hydrogen-fuel releases

    Energy Technology Data Exchange (ETDEWEB)

    Harstad, Kenneth; Bellan, Josette [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, M/S 125-109, Pasadena, CA 91109-8099 (United States)

    2006-01-01

    Global, simplified analyses of gaseous-hydrogen releases from a high-pressure vessel and liquid-hydrogen pools are conducted for two purposes: (1) establishing order-of-magnitude values of characteristic times and (2) determining parametric dependencies of these characteristic times on the physical properties of the configuration and on the thermophysical properties of hydrogen. According to the ratio of the characteristic release time to the characteristic mixing time, two limiting configurations are identified: (1) a rich cloud exists when this ratio is much smaller than unity, and (2) a jet exists when this ratio is much larger than unity. In all cases, it is found that the characteristic release time is proportional to the total released mass and inversely proportional to a characteristic area. The approximate size, convection velocity, and circulation time of unconfined burning-cloud releases scale with the cloud mass at powers 1/3, 1/6, and 1/6, respectively, multiplied by an appropriately dimensional constant; the influence of cross flow can only be important if its velocity exceeds that of internal convection. It is found that the fireball lifetime is approximately the maximum of the release time and thrice the convection-associated characteristic time. Transition from deflagration to detonation can occur only if the size of unconfined clouds exceeds by a factor of O(10) that of a characteristic detonation cell, which ranges from 0.015 m under stoichiometric conditions to approximately 1 m under extreme rich/lean conditions. For confined vapor pockets, transition occurs only for pocket sizes larger than the cell size. In jets, the release time is inversely proportional to the initial vessel pressure and has a square root dependence on the vessel temperature. Jet velocities are a factor of 10 larger than convective velocities in fireballs and combustion is possible only in the subsonic, downstream region where entrainment may occur.

  13. Wear-Out Sensitivity Analysis Project Abstract

    Science.gov (United States)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, my duties were to take historical data on operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from its intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
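
A minimal sketch of this kind of wear-out sweep, with invented numbers rather than actual ORU data: each unit's life is drawn from a Weibull distribution by inverse-CDF sampling, failures within the mission window are counted against a spares pool, and the shape (wear-out) parameter is swept upward. With the characteristic life set below the mission length, stronger wear-out concentrates failures inside the mission and the probability of sufficiency drops. Renewals (replacing a failed unit with a spare that can itself fail) are deliberately not modeled here.

```python
import math
import random

def weibull_life(shape, scale, rng):
    # inverse-CDF draw: T = scale * (-ln U)^(1/shape)
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

def prob_sufficiency(shape, scale, mission, units, spares, trials=20000, seed=1):
    # fraction of simulated missions in which the spares pool covers
    # every unit that fails before the mission ends
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        failures = sum(weibull_life(shape, scale, rng) < mission
                       for _ in range(units))
        ok += failures <= spares
    return ok / trials

# sweep the wear-out (shape) parameter from intrinsic (~1) upward
results = {shape: prob_sufficiency(shape, scale=4.0, mission=5.0, units=30, spares=25)
           for shape in (1.0, 2.0, 4.0)}
for shape, p in results.items():
    print(f"shape {shape}: P(sufficient) = {p:.3f}")
```

With shape 1 the Weibull reduces to an exponential (no wear-out); raising the shape with the scale held fixed pushes more of the population past its characteristic life during the mission, so the spares pool is exhausted more often.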

  14. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  15. SENSIT: a cross-section and design sensitivity and uncertainty analysis code

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE
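
For the first type of input uncertainty above (covariance matrices for reaction cross sections), the variance propagation reduces, to first order, to the "sandwich rule" var(R) = Sᵀ C S, where S is the sensitivity profile and C the covariance matrix. A sketch with hypothetical 3-group numbers, not SENSIT's actual interface:

```python
def response_variance(S, C):
    # first-order propagation: var(R) = S^T C S
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

# hypothetical 3-group sensitivity profile and cross-section covariance matrix
S = [0.5, -1.2, 0.8]
C = [[0.04, 0.01, 0.00],
     [0.01, 0.09, 0.02],
     [0.00, 0.02, 0.16]]
var_R = response_variance(S, C)
print(f"variance = {var_R:.4f}, standard deviation = {var_R ** 0.5:.4f}")
```

The off-diagonal covariance terms are what distinguish this from a naive sum of squared sensitivities; SENSIT's handling of SED and response-function uncertainties is not shown here.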

  16. Multitarget global sensitivity analysis of n-butanol combustion.

    Science.gov (United States)

    Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T

    2013-05-02

    A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis.

  17. Sensitivity analysis in multi-parameter probabilistic systems

    International Nuclear Information System (INIS)

    Walker, J.R.

    1987-01-01

    Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model

  18. An ESDIRK Method with Sensitivity Analysis Capabilities

    DEFF Research Database (Denmark)

    Kristensen, Morten Rode; Jørgensen, John Bagterp; Thomsen, Per Grove

    2004-01-01

    of the sensitivity equations. A key feature is the reuse of information already computed for the state integration, hereby minimizing the extra effort required for sensitivity integration. Through case studies the new algorithm is compared to an extrapolation method and to the more established BDF based approaches...

  19. Sensitivity Analysis of Fire Dynamics Simulation

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.

    2007-01-01

    (Morris method). The parameters considered are selected among physical parameters and program specific parameters. The influence on the calculation result as well as the CPU time is considered. It is found that the result is highly sensitive to many parameters even though the sensitivity varies...
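
The Morris method named above screens parameters with one-at-a-time "elementary effects". A minimal sketch of the idea on a toy function (not the fire dynamics model), where μ*, the mean absolute elementary effect, separates an influential parameter from a weak one:

```python
import random

def morris_mu_star(f, dim, r=50, levels=4, seed=0):
    # r one-at-a-time trajectories on a `levels`-level grid in [0, 1]^dim;
    # mu* = mean absolute elementary effect per input
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))                    # standard Morris step
    starts = [i / (levels - 1) for i in range(levels // 2)]  # room to add delta
    mu_star = [0.0] * dim
    for _ in range(r):
        x = [rng.choice(starts) for _ in range(dim)]
        y_prev = f(x)
        order = list(range(dim))
        rng.shuffle(order)
        for i in order:                   # perturb one input at a time
            x[i] += delta
            y = f(x)
            mu_star[i] += abs(y - y_prev) / delta
            y_prev = y
    return [m / r for m in mu_star]

# toy response: one dominant input, one weak input, mild interaction
f = lambda x: 10.0 * x[0] + 0.1 * x[1] + x[0] * x[1]
mu = morris_mu_star(f, dim=2)
print("mu*:", [round(m, 2) for m in mu])
```

Each trajectory costs only dim + 1 model runs, which is why the Morris method suits expensive simulations such as fire dynamics codes.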

  20. Analysis of the behavior of orthogonal-core-type push-pull parametric transformer with iron and copper losses. Tetsuson oyobi doson wo koryoshita chokko jishinkei push pull parametric hen'atsuki no dosa kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Tajima, K; Anazawa, Y; Kaga, A [Akita University, Akita (Japan). Mining College; Ichinokura, O [Tohoku University, Sendai (Japan). Faculty of Engineering

    1991-04-30

    This paper reports a precise numerical analysis of the operating characteristics of the push-pull parametric transformer of orthogonal-core type (proposed by the authors in preceding papers), made in consideration of both the iron loss of its magnetic core and the copper loss of its windings. A model of the magnetic circuit in the core is presented, which involves magnetic reluctances representing the saturation characteristics of the core and magnetic inductances representing the effects produced by hysteresis. Use is made of a function that expresses the saturation characteristics by a twenty-first-degree power series of magnetic flux, the coefficient of each term being determined from experimental data on a specified sample of the magnetic core. Furthermore, the circuit simulator SPICE is used to analyze the operating characteristics of the transformer. Comparing the results of the present analysis with experimental results, the following are noted: first, both the output voltages and the winding currents of the transformer under the condition of parametric oscillation are calculated with sufficient accuracy; second, the present analysis is capable of evaluating the power conversion efficiency and input power factor of the transformer, and of providing more accurate values of both voltage and current in the case of maximum output under loading conditions than the analyses presented so far. 8 refs., 11 figs., 2 tabs.

  1. Estimating technical efficiency in the hospital sector with panel data: a comparison of parametric and non-parametric techniques.

    Science.gov (United States)

    Siciliani, Luigi

    2006-01-01

    Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
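
Comparisons like the one above are often summarized by correlating the efficiency scores that different methods assign to the same hospitals; a rank correlation is one common choice. A sketch with invented scores for six hospitals from a parametric model and from DEA (the data and the choice of Spearman's coefficient are illustrative, not the paper's):

```python
def ranks(xs):
    # 1-based ranks; this sketch assumes no tied scores
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# hypothetical efficiency scores for six hospitals
parametric = [0.92, 0.85, 0.78, 0.88, 0.70, 0.95]
dea_crs    = [0.84, 0.80, 0.75, 0.85, 0.72, 1.00]
rho = spearman(parametric, dea_crs)
print(f"Spearman rank correlation: {rho:.3f}")
```

A high rank correlation, as between DEA-CRS and the parametric models in the study, means the two methods would flag largely the same hospitals as inefficient even if their absolute scores differ.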

  2. Sensitivity analysis for unobserved confounding of direct and indirect effects using uncertainty intervals.

    Science.gov (United States)

    Lindmark, Anita; de Luna, Xavier; Eriksson, Marie

    2018-05-10

    To estimate direct and indirect effects of an exposure on an outcome from observed data, strong assumptions about unconfoundedness are required. Since these assumptions cannot be tested using the observed data, a mediation analysis should always be accompanied by a sensitivity analysis of the resulting estimates. In this article, we propose a sensitivity analysis method for parametric estimation of direct and indirect effects when the exposure, mediator, and outcome are all binary. The sensitivity parameters consist of the correlations between the error terms of the exposure, mediator, and outcome models. These correlations are incorporated into the estimation of the model parameters and identification sets are then obtained for the direct and indirect effects for a range of plausible correlation values. We take the sampling variability into account through the construction of uncertainty intervals. The proposed method is able to assess sensitivity to both mediator-outcome confounding and confounding involving the exposure. To illustrate the method, we apply it to a mediation study based on the data from the Swedish Stroke Register (Riksstroke). An R package that implements the proposed method is available. Copyright © 2018 John Wiley & Sons, Ltd.

  3. Superconducting Accelerating Cavity Pressure Sensitivity Analysis

    International Nuclear Information System (INIS)

    Rodnizki, J.; Horvits, Z.; Ben Aliz, Y.; Grin, A.; Weissman, L.

    2014-01-01

    The sensitivity of the cavity was evaluated and found to be fully consistent with the measured values. It was found that the tuning system (the fog structure) makes a significant contribution to the cavity sensitivity. By using ribs, or by modifying the rigidity of the fog, we may reduce the HWR sensitivity. During cool-down and warm-up, the stresses on the HWR must be analyzed to avoid plastic deformation of the HWR, since the yield strength of niobium is an order of magnitude lower at room temperature

  4. Optimal controls of building storage systems using both ice storage and thermal mass – Part II: Parametric analysis

    International Nuclear Information System (INIS)

    Hajiah, Ali; Krarti, Moncef

    2012-01-01

    Highlights: ► A detailed analysis is presented to assess the performance of thermal energy storage (TES) systems. ► Utility rates have been found to be significant in assessing the operation of TES systems. ► Optimal control strategies for TES systems can save up to 40% of the total energy cost of office buildings. - Abstract: This paper presents the results of a series of parametric analyses investigating the factors that affect the effectiveness of simultaneously using building thermal capacitance and an ice storage system to reduce total operating costs (including energy and demand costs) while maintaining adequate occupant comfort conditions in buildings. The analysis is based on a validated model-based simulation environment and covers several parameters, including the optimization cost function, base chiller size, ice storage tank capacity, and weather conditions. It is found that the combined use of building thermal mass and an active thermal energy storage system can save up to 40% of the total energy costs when integrated optimal controls are used to operate commercial buildings.

  5. Derivative based sensitivity analysis of gamma index

    Directory of Open Access Journals (Sweden)

    Biplab Sarkar

    2015-01-01

    Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare between measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare between any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as "pass." Gamma analysis does not account for the gradient of the evaluated curve - it looks at only the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is smooth and would satisfy the pass criteria at all points. The second evaluated profile was generated as a sawtooth test profile (STTP), which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not smooth and would obviously be poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first and second order derivatives of the DDs (δD', δD") between these two curves were derived and used as the
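The underlying gamma computation the abstract builds on can be sketched in a few lines: for every reference point, take the minimum generalized distance over the evaluated curve, combining spatial offset (DTA) and dose difference (DD). This is a toy global-gamma implementation with the 1%/1 mm criteria mentioned above, not the authors' derivative-based method:

```python
import math

def gamma_1d(ref, ev, dta_mm=1.0, dd_frac=0.01):
    """Global 1-D gamma: for each (position, dose) point of the reference
    curve, the minimum generalized distance over all evaluated points.
    Dose differences are normalized to the reference maximum dose."""
    d_max = max(d for _, d in ref)
    out = []
    for x_r, d_r in ref:
        g2 = min(((x_e - x_r) / dta_mm) ** 2 +
                 ((d_e - d_r) / (dd_frac * d_max)) ** 2
                 for x_e, d_e in ev)
        out.append(math.sqrt(g2))
    return out

# A sigmoid penumbra-like profile (positions in mm, dose in %)
profile = [(x * 0.5, 100.0 / (1.0 + math.exp(-x))) for x in range(-10, 11)]
print(max(gamma_1d(profile, profile)))  # identical profiles -> 0.0
```

A point passes when its gamma value is ≤ 1; as the abstract notes, this minimum-distance criterion is blind to the local gradient of the evaluated curve, which motivates the derivative-based extension.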

  6. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    Science.gov (United States)

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected

  7. Cluster analysis of quantitative parametric maps from DCE-MRI: application in evaluating heterogeneity of tumor response to antiangiogenic treatment.

    Science.gov (United States)

    Longo, Dario Livio; Dastrù, Walter; Consolino, Lorena; Espak, Miklos; Arigoni, Maddalena; Cavallo, Federica; Aime, Silvio

    2015-07-01

    The objective of this study was to compare a clustering approach to conventional analysis methods for assessing changes in pharmacokinetic parameters obtained from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) during antiangiogenic treatment in a breast cancer model. BALB/c mice bearing established transplantable her2+ tumors were treated with a DNA-based antiangiogenic vaccine or with an empty plasmid (untreated group). DCE-MRI was carried out by administering a dose of 0.05 mmol/kg of Gadocoletic acid trisodium salt, a Gd-based blood pool contrast agent (CA), at 1T. Changes in pharmacokinetic estimates (K(trans) and vp) over a nine-day interval were compared between treated and untreated groups in a voxel-by-voxel analysis. The tumor response to therapy was assessed by a clustering approach and compared with conventional summary statistics, with sub-region analysis and with histogram analysis. Both the K(trans) and vp estimates, following blood-pool CA injection, showed marked and spatially heterogeneous changes with antiangiogenic treatment. Averaged values for the whole tumor region, as well as from the rim/core sub-region analysis, were unable to assess the antiangiogenic response. Histogram analysis resulted in significant changes only in the vp estimates. The clustering approach depicted marked changes in both the K(trans) and vp estimates, with significant spatial heterogeneity in the vp maps in response to treatment, clustered in three or four sub-regions. This study demonstrated the value of cluster analysis applied to pharmacokinetic DCE-MRI parametric maps for assessing tumor response to antiangiogenic therapy. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. MOVES2010a regional level sensitivity analysis

    Science.gov (United States)

    2012-12-10

    This document discusses the sensitivity of emission rates to various input parameters using the US Environmental Protection Agency's (EPA's) MOVES2010a model at the regional level. Pollutants included in the study are carbon monoxide (CO),...

  9. CADDIS Volume 4. Data Analysis: PECBO Appendix - R Scripts for Non-Parametric Regressions

    Science.gov (United States)

    Script for computing nonparametric regression analysis. Overview of using scripts to infer environmental conditions from biological observations, statistically estimating species-environment relationships, statistical scripts.

  10. Mokken scale analysis : Between the Guttman scale and parametric item response theory

    NARCIS (Netherlands)

    van Schuur, Wijbrandt H.

    2003-01-01

    This article introduces a model of ordinal unidimensional measurement known as Mokken scale analysis. Mokken scaling is based on principles of Item Response Theory (IRT) that originated in the Guttman scale. I compare the Mokken model with both Classical Test Theory (reliability or factor analysis)
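Mokken scaling is commonly summarized by Loevinger's scalability coefficient H, which measures how far a set of items departs from a perfect Guttman scale: H = 1 means no Guttman errors (a respondent passing a harder item while failing an easier one). A minimal illustrative sketch for binary item-score data (not the full Mokken procedure, which also tests monotonicity assumptions):

```python
def loevinger_H(X):
    """Loevinger's H for a respondents-by-items 0/1 matrix:
    1 minus observed Guttman errors over the errors expected
    under marginal independence of the items."""
    n, k = len(X), len(X[0])
    p = [sum(row[j] for row in X) / n for j in range(k)]
    order = sorted(range(k), key=lambda j: -p[j])  # easiest item first
    obs = exp = 0.0
    for a in range(k):
        for b in range(a + 1, k):
            i, j = order[a], order[b]  # i easier than j
            # Guttman error: pass the harder item j, fail the easier item i
            obs += sum(1 for row in X if row[j] == 1 and row[i] == 0)
            exp += n * p[j] * (1 - p[i])
    return 1 - obs / exp

# A perfect Guttman pattern: every respondent passes a prefix of the items
print(loevinger_H([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]]))  # -> 1.0
```

In practice, items with H below roughly 0.3 are usually judged unscalable; parametric IRT models add stronger (parametric) assumptions on top of this ordinal structure.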

  11. Interactive tool that empowers structural understanding and enables FEM analysis in a parametric design environment

    DEFF Research Database (Denmark)

    Christensen, Jesper Thøger; Parigi, Dario; Kirkegaard, Poul Henning

    2014-01-01

    This paper introduces an interactive tool developed to integrate structural analysis into the architectural design environment from the early conceptual design stage. The tool improves the exchange of data between the design environment of Rhino Grasshopper and the FEM analysis of Autodesk Robot Structural Analysis. Furthermore, the tool provides an intuitive setup and visual aids to facilitate the process, enabling students and professionals to quickly analyze and evaluate multiple design variations. The tool has been developed inside the Performance Aided Design course at the Master of Architecture and Design at Aalborg University...

  12. SPECT image analysis using statistical parametric mapping in patients with temporal lobe epilepsy associated with hippocampal sclerosis

    International Nuclear Information System (INIS)

    Shiraki, Junko

    2004-01-01

    The author examined interictal 123I-IMP SPECT images using statistical parametric mapping (SPM) in 19 temporal lobe epilepsy patients whose MRI revealed hippocampal sclerosis. Decreased regional cerebral blood flow (rCBF) was shown in the medial temporal lobe for eight patients, in the lateral temporal lobe for six patients, and in both the medial and lateral temporal lobes for five patients. These patients were classified into two types: a medial type, with decreased rCBF only in the medial temporal area, and a lateral type, with decreased rCBF in other temporal areas as well. Regarding correlations between rCBF and clinical parameters, the age at seizure onset in the lateral type was significantly older (p=0.0098, t-test) than in the medial type. SPM analysis of interictal SPECT in temporal lobe epilepsy clarified the location of decreased rCBF and revealed correlations with clinical characteristics. In addition, SPM analysis of SPECT was useful for understanding the pathophysiology of the epilepsy. (author)

  13. Parametric Analysis of the Exergoeconomic Operation Costs, Environmental and Human Toxicity Indexes of the MF501F3 Gas Turbine

    Directory of Open Access Journals (Sweden)

    Edgar Vicente Torres-González

    2016-08-01

    This work presents an energetic, exergoeconomic, environmental, and toxicity analysis of the simple gas turbine M501F3, based on a parametric analysis of energetic indexes (thermal efficiency, fuel and air flow rates, and specific work output), exergoeconomic indexes (exergetic efficiency and exergoeconomic operation costs), environmental indexes (global warming, smog formation, and acid rain), and human toxicity indexes, taking the compressor pressure ratio and the turbine inlet temperature as the operating parameters. The aim of this paper is to provide an integral, systematic, and powerful diagnostic tool to establish possible operation and maintenance actions to improve the gas turbine's exergoeconomic, environmental, and human toxicity indexes. Despite the continuous changes in the price of natural gas, the compressor, combustion chamber, and turbine always contribute 18.96%, 53.02%, and 28%, respectively, to the gas turbine's exergoeconomic operation costs. The application of this methodology can be extended to other simple gas turbines using the pressure drops and isentropic efficiencies, among others, as the degradation parameters, as well as to other energetic systems, without loss of generality.

  14. Parametric Analysis of Surveillance Quality and Level and Quality of Intent Information and Their Impact on Conflict Detection Performance

    Science.gov (United States)

    Guerreiro, Nelson M.; Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.; Lewis, Timothy A.

    2016-01-01

    A loss-of-separation (LOS) is said to occur when two aircraft are spatially too close to one another. A LOS is the fundamental unsafe event to be avoided in air traffic management, and conflict detection (CD) is the function that attempts to predict these LOS events. In general, the effectiveness of conflict detection relates to the overall safety and performance of an air traffic management concept. An abstract, parametric analysis was conducted to investigate the impact of surveillance quality, level of intent information, and quality of intent information on conflict detection performance. The data collected in this analysis can be used to estimate the conflict detection performance under alternative future scenarios or alternative allocations of the conflict detection function, based on the quality of the surveillance and intent information under those conditions. Alternatively, these data could also be used to estimate the surveillance and intent information quality required to achieve some desired CD performance as part of the design of a new separation assurance system.

  15. Investigation of olfactory function in normal volunteers by Tc-99m ECD Brain SPECT: Analysis using statistical parametric mapping

    International Nuclear Information System (INIS)

    Chung, Y.A.; Kim, S.H.; Park, Y.H.; Lee, S.Y.; Sohn, H.S.; Chung, S.K.

    2002-01-01

    The purpose of this study was to investigate olfactory function, according to the Tc-99m ECD uptake pattern in brain perfusion SPET of normal volunteers, by means of statistical parametric mapping (SPM) analysis. The study population was 8 healthy volunteer subjects (M:F = 6:2, age range: 22-54 years, mean 34 years). We performed baseline brain perfusion SPET using 555 MBq of Tc-99m ECD in a silent dark room. Two hours later, we obtained brain perfusion SPET using 1110 MBq of Tc-99m ECD after odor stimulation with a 3% butanol solution under the same conditions. All SPET images were spatially transformed to standard space, smoothed, and globally normalized. The differences between the baseline and odor-identification SPET images were statistically analyzed using SPM-99 software. The difference between the two sets of brain perfusion SPET was considered significant at a threshold of uncorrected p values less than 0.01. SPM analysis revealed significant hyper-perfusion in both cingulate gyri, the right middle temporal gyrus, the right superior and inferior frontal gyri, the right lingual gyrus, and the right fusiform gyrus on odor-identification SPET. This study shows that brain perfusion SPET can reliably support other diagnostic techniques in the evaluation of olfactory function

  16. Parametric nanomechanical amplification at very high frequency.

    Science.gov (United States)

    Karabalin, R B; Feng, X L; Roukes, M L

    2009-09-01

    Parametric resonance and amplification are important in both fundamental physics and technological applications. Here we report very high frequency (VHF) parametric resonators and mechanical-domain amplifiers based on nanoelectromechanical systems (NEMS). Compound mechanical nanostructures patterned by multilayer, top-down nanofabrication are read out by a novel scheme that parametrically modulates longitudinal stress in doubly clamped beam NEMS resonators. Parametric pumping and signal amplification are demonstrated for VHF resonators up to approximately 130 MHz and provide useful enhancement of both resonance signal amplitude and quality factor. We find that Joule heating and reduced thermal conductance in these nanostructures ultimately impose an upper limit to device performance. We develop a theoretical model to account for both the parametric response and nonequilibrium thermal transport in these composite nanostructures. The results closely conform to our experimental observations, elucidate the frequency and threshold-voltage scaling in parametric VHF NEMS resonators and sensors, and establish the ultimate sensitivity limits of this approach.
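The parametric resonance exploited above can be illustrated numerically: modulating an oscillator's stiffness at twice its natural frequency makes the amplitude grow exponentially. A toy sketch of a generic undamped Mathieu-type oscillator (not a model of the NEMS devices in the abstract):

```python
import math

def peak_amplitude(h, omega0=1.0, x0=1e-3, t_end=100.0, dt=1e-3):
    """Euler-Cromer integration of x'' + omega0^2 (1 + h cos(2 omega0 t)) x = 0.
    Pumping the stiffness at 2*omega0 with depth h drives parametric growth;
    returns the largest |x| seen over the run."""
    n = int(t_end / dt)
    x, v = x0, 0.0
    amp = abs(x)
    for i in range(n):
        t = i * dt
        a = -(omega0 ** 2) * (1.0 + h * math.cos(2.0 * omega0 * t)) * x
        v += a * dt          # semi-implicit (Euler-Cromer) update
        x += v * dt
        amp = max(amp, abs(x))
    return amp

# With the pump on (h = 0.2) the motion grows far beyond the unpumped case
print(peak_amplitude(0.2), peak_amplitude(0.0))
```

In a real device, damping sets a pump threshold below which the motion is merely amplified rather than self-oscillating; the abstract's gain and quality-factor enhancement live in that sub-threshold regime.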

  17. Parametric and Non-Parametric System Modelling

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg

    1999-01-01

    the focus is on combinations of parametric and non-parametric methods of regression. This combination can be in terms of additive models where e.g. one or more non-parametric term is added to a linear regression model. It can also be in terms of conditional parametric models where the coefficients... considered. It is shown that adaptive estimation in conditional parametric models can be performed by combining the well known methods of local polynomial regression and recursive least squares with exponential forgetting. The approach used for estimation in conditional parametric models also highlights how... networks is included. In this paper, neural networks are used for predicting the electricity production of a wind farm. The results are compared with results obtained using an adaptively estimated ARX-model. Finally, two papers on stochastic differential equations are included. In the first paper, among...

  18. NPV Sensitivity Analysis: A Dynamic Excel Approach

    Science.gov (United States)

    Mangiero, George A.; Kraten, Michael

    2017-01-01

    Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…
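The dynamic NPV sensitivity the authors build in Excel can be mimicked in a few lines of code: instead of one static formula, recompute NPV over a grid of discount rates (or any other interdependent variable). The cash flows below are illustrative only:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

cashflows = [-1000.0, 400.0, 400.0, 400.0]
for rate in (0.05, 0.10, 0.15):
    # NPV falls as the discount rate rises
    print(f"rate={rate:.0%}  NPV={npv(rate, cashflows):.2f}")
```

Sweeping two variables at once (e.g. rate and a cash-flow growth factor) yields the same two-way sensitivity table a dynamic spreadsheet would display.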

  19. Sensitivity Analysis for Multidisciplinary Systems (SAMS)

    Science.gov (United States)

    2016-12-01

    [Report excerpt garbled by text extraction: distribution statements ("Approved for public release; distribution is unlimited."), fragments of a Python server/client code listing (zmq), a Boeing example application, and a citation fragment: Cross, D. M., "Local continuum sensitivity method for shape design derivatives using spatial gradient reconstruction."]

  20. Non-parametric analysis of technical efficiency: factors affecting efficiency of West Java rice farms

    Czech Academy of Sciences Publication Activity Database

    Brázdik, František

    -, č. 286 (2006), s. 1-45 ISSN 1211-3298 R&D Projects: GA MŠk LC542 Institutional research plan: CEZ:AV0Z70850503 Keywords : rice farms * data envelopment analysis Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp286.pdf

  1. Extended forward sensitivity analysis of one-dimensional isothermal flow

    International Nuclear Information System (INIS)

    Johnson, M.; Zhao, H.

    2013-01-01

    Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities for the 1-D fluid flow equations typical of those found in system-level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify the numerical error arising from the time discretization is a unique and important feature of this method. By knowing the sensitivity of the time step relative to the other physical parameters, the simulation can be run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification, at much lower computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)

  2. Parametric design and off-design analysis of organic Rankine cycle (ORC) system

    International Nuclear Information System (INIS)

    Song, Jian; Gu, Chun-wei; Ren, Xiaodong

    2016-01-01

    Highlights: • A one-dimensional analysis method for an ORC system is proposed. • The system performance under both design and off-design conditions is analyzed. • The working fluid selection is based on both design and off-design performance. • The system parameter determination is based on both design and off-design performance. - Abstract: A one-dimensional analysis method is proposed for the organic Rankine cycle (ORC) system in this paper. The method contains two main parts: a one-dimensional aerodynamic analysis model of the radial-inflow turbine and a performance prediction model of the heat exchanger. Based on the present method, an ORC system for industrial waste heat recovery is designed and analyzed. The net power output of the ORC system is 534 kW, and the thermal efficiency reaches 13.5%. System performance under off-design conditions is also simulated. The results show that the inlet temperatures of the heat source and the cooling water have a significant influence on the system. With increasing heat source inlet temperature, the mass flow rate of the working fluid, the net power output, and the heat utilization ratio of the ORC system increase. The system thermal efficiency, meanwhile, decreases with increasing cooling water inlet temperature. In order to maintain the condensation pressure at a moderate value, the heat source inlet temperature considered in this analysis should be kept within the range of 443.15–468.15 K, while the optimal temperature range of the cooling water is between 283.15 K and 303.15 K.

  3. Whole brain analysis of postmortem density changes of grey and white matter on computed tomography by statistical parametric mapping

    Energy Technology Data Exchange (ETDEWEB)

    Nishiyama, Yuichi; Mori, Hiroshi; Katsube, Takashi; Kitagaki, Hajime [Shimane University Faculty of Medicine, Department of Radiology, Izumo-shi, Shimane (Japan); Kanayama, Hidekazu; Tada, Keiji; Yamamoto, Yasushi [Shimane University Hospital, Department of Radiology, Izumo-shi, Shimane (Japan); Takeshita, Haruo [Shimane University Faculty of Medicine, Department of Legal Medicine, Izumo-shi, Shimane (Japan); Kawakami, Kazunori [Fujifilm RI Pharma, Co., Ltd., Tokyo (Japan)

    2017-06-15

    This study examined the usefulness of statistical parametric mapping (SPM) for investigating postmortem changes on brain computed tomography (CT). This retrospective study included 128 patients (23 - 100 years old) without cerebral abnormalities who underwent unenhanced brain CT before and after death. The antemortem CT (AMCT) scans and postmortem CT (PMCT) scans were spatially normalized using our original brain CT template, and postmortem changes of CT values (in Hounsfield units; HU) were analysed by the SPM technique. Compared with AMCT scans, 58.6 % and 98.4 % of PMCT scans showed loss of the cerebral sulci and an unclear grey matter (GM)-white matter (WM) interface, respectively. SPM analysis revealed a significant decrease in cortical GM density within 70 min after death on PMCT scans, suggesting cytotoxic brain oedema. Furthermore, there was a significant increase in the density of the WM, lenticular nucleus and thalamus more than 120 min after death. The SPM technique demonstrated typical postmortem changes on brain CT scans, and revealed that the unclear GM-WM interface on early PMCT scans is caused by a rapid decrease in cortical GM density combined with a delayed increase in WM density. SPM may be useful for assessment of whole brain postmortem changes. (orig.)

  4. Performance Analysis of a Hybrid Raman Optical Parametric Amplifier in the O- and E-Bands for CWDM PONs

    Directory of Open Access Journals (Sweden)

    Sasanthi Peiris

    2014-12-01

    Full Text Available We describe a hybrid Raman-optical parametric amplifier (HROPA operating at the O- and E-bands and designed for coarse wavelength division multiplexed (CWDM passive optical networks (PONs. We present the mathematical model and simulation results for the optimization of this HROPA design. Our analysis shows that separating the two amplification processes allows for optimization of each one separately, e.g., proper selection of pump optical powers and wavelengths to achieve maximum gain bandwidth and low gain ripple. Furthermore, we show that the proper design of optical filters incorporated in the HROPA architecture can suppress idlers generated during the OPA process, as well as other crosstalk that leaks through the passive optical components. The design approach enables error free performance for all nine wavelengths within the low half of the CWDM band, assigned to upstream traffic in a CWDM PON architecture, for all possible transmitter wavelength misalignments (±6 nm from the center wavelength of the channel band. We show that the HROPA can achieve error-free performance with a 170-nm gain bandwidth (e.g., 1264 nm–1436 nm, a gain of >20 dB and a gain ripple of <4 dB.

  6. Multi-Parametric MRI and Texture Analysis to Visualize Spatial Histologic Heterogeneity and Tumor Extent in Glioblastoma.

    Science.gov (United States)

    Hu, Leland S; Ning, Shuluo; Eschbacher, Jennifer M; Gaw, Nathan; Dueck, Amylou C; Smith, Kris A; Nakaji, Peter; Plasencia, Jonathan; Ranjbar, Sara; Price, Stephen J; Tran, Nhan; Loftus, Joseph; Jenkins, Robert; O'Neill, Brian P; Elmquist, William; Baxter, Leslie C; Gao, Fei; Frakes, David; Karis, John P; Zwart, Christine; Swanson, Kristin R; Sarkaria, Jann; Wu, Teresa; Mitchell, J Ross; Li, Jing

    2015-01-01

    Genetic profiling represents the future of neuro-oncology but suffers from inadequate biopsies in heterogeneous tumors like Glioblastoma (GBM). Contrast-enhanced MRI (CE-MRI) targets the enhancing core (ENH) but yields adequate tumor in only ~60% of cases. Further, CE-MRI poorly localizes infiltrative tumor within the surrounding non-enhancing parenchyma, or brain-around-tumor (BAT), despite the importance of characterizing this tumor segment, which universally recurs. In this study, we use multiple texture analysis and machine learning (ML) algorithms to analyze multi-parametric MRI and produce new images indicating tumor-rich targets in GBM. We recruited primary GBM patients undergoing image-guided biopsies and acquired pre-operative MRI: CE-MRI, Dynamic-Susceptibility-weighted-Contrast-enhanced-MRI, and Diffusion Tensor Imaging. Following image coregistration and region-of-interest placement at biopsy locations, we compared MRI metrics and regional texture with histologic diagnoses of high- vs. low-tumor content (≥80% vs. lower tumor content) and mapped spatial histologic heterogeneity to identify regional tumor-rich biopsy targets.

  7. Single photon emission computed tomography and statistical parametric mapping analysis in cirrhotic patients with and without minimal hepatic encephalopathy

    International Nuclear Information System (INIS)

    Nakagawa, Yuri; Matsumura, Kaname; Iwasa, Motoh; Kaito, Masahiko; Adachi, Yukihiko; Takeda, Kan

    2004-01-01

    The early diagnosis and treatment of cognitive impairment in cirrhotic patients is needed to improve the patients' daily living. In this study, alterations of regional cerebral blood flow (rCBF) were evaluated in cirrhotic patients using statistical parametric mapping (SPM). The relationships between rCBF and neuropsychological tests, severity of disease and biochemical data were also assessed. 99mTc-ethyl cysteinate dimer single photon emission computed tomography was performed in 20 patients with non-alcoholic liver cirrhosis without overt hepatic encephalopathy (HE) and in 20 age-matched healthy subjects. Neuropsychological tests were performed in 16 patients; of these 7 had minimal HE. Regional CBF images were also analyzed in these groups using SPM. On SPM analysis, cirrhotic patients showed regions of significant hypoperfusion in the superior and middle frontal gyri, and inferior parietal lobules compared with the control group. These areas included parts of the premotor and parietal associated areas of the cortex. Among the cirrhotic patients, those with minimal HE had regions of significant hypoperfusion in the cingulate gyri bilaterally as compared with those without minimal HE. Abnormal function in the above regions may account for the relatively selective neuropsychological deficits in the cognitive status of patients with cirrhosis. These findings may be important in the identification and management of cirrhotic patients with minimal HE. (author)

  8. Investigation of olfactory function in normal volunteers and patients with anosmia : analysis of brain perfusion SPECTs using statistical parametric mapping

    International Nuclear Information System (INIS)

    Chung, Y. A.; Kim, S. H.; Sohn, H. S.; Chung, S. K.

    2002-01-01

    The purpose of this study was to investigate olfactory function with Tc-99m ECD brain perfusion SPECT using statistical parametric mapping (SPM) analysis in normal volunteers and patients with anosmia. The study populations were 8 matched healthy volunteers and 16 matched patients with anosmia. We obtained baseline and post-stimulation (3% butanol) brain perfusion SPECTs in a silent, dark room. We analyzed all SPECTs using SPM. Differences between the two sets of brain perfusion SPECTs were compared with a t-test. Voxels with a p-value of less than 0.01 were considered significantly different. We demonstrated increased perfusion in both cingulate gyri, the right middle temporal gyrus, right superior and inferior frontal gyri, right lingual gyrus and right fusiform gyrus on post-stimulation brain SPECT in normal volunteers, and demonstrated decreased perfusion in both cingulate gyri, the right middle temporal gyrus, right rectal gyrus and both superior and inferior frontal gyri in 10 of the patients with anosmia. No significant hypoperfusion area was observed in the other 6 patients with anosmia. Baseline and post-stimulation brain perfusion SPECTs can be helpful in the evaluation of olfactory function and useful in the diagnosis of anosmia.
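
    At its core, the voxel-wise SPM comparison above reduces to a large set of univariate tests thresholded at a chosen significance level. The sketch below illustrates only that core idea on simulated arrays (not the study's SPECT data; a real SPM analysis additionally involves spatial normalisation, smoothing and multiple-comparison handling):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 8, 200

# Simulated perfusion values: one scan set at baseline, one post-stimulation
baseline = rng.normal(100.0, 5.0, size=(n_subjects, n_voxels))
post = rng.normal(100.0, 5.0, size=(n_subjects, n_voxels))
post[:, 0] += 20.0  # voxel 0 carries a genuine stimulation response

# Voxel-wise two-sample t-test, thresholded at p < 0.01 as in the study
t, p = stats.ttest_ind(post, baseline, axis=0)
activated = np.flatnonzero((p < 0.01) & (t > 0))
```

    In practice the surviving voxels are then mapped back onto anatomical space to name the implicated gyri.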

  9. Investigation of olfactory function in normal volunteers and patients with anosmia : analysis of brain perfusion SPECTs using statistical parametric mapping

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Y. A.; Kim, S. H.; Sohn, H. S.; Chung, S. K. [Catholic University College of Medicine, Seoul (Korea, Republic of)

    2002-07-01

    The purpose of this study was to investigate olfactory function with Tc-99m ECD brain perfusion SPECT using statistical parametric mapping (SPM) analysis in normal volunteers and patients with anosmia. The study populations were 8 matched healthy volunteers and 16 matched patients with anosmia. We obtained baseline and post-stimulation (3% butanol) brain perfusion SPECTs in a silent, dark room. We analyzed all SPECTs using SPM. Differences between the two sets of brain perfusion SPECTs were compared with a t-test. Voxels with a p-value of less than 0.01 were considered significantly different. We demonstrated increased perfusion in both cingulate gyri, the right middle temporal gyrus, right superior and inferior frontal gyri, right lingual gyrus and right fusiform gyrus on post-stimulation brain SPECT in normal volunteers, and demonstrated decreased perfusion in both cingulate gyri, the right middle temporal gyrus, right rectal gyrus and both superior and inferior frontal gyri in 10 of the patients with anosmia. No significant hypoperfusion area was observed in the other 6 patients with anosmia. Baseline and post-stimulation brain perfusion SPECTs can be helpful in the evaluation of olfactory function and useful in the diagnosis of anosmia.

  10. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    Science.gov (United States)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This rests on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.

  11. The role of sensitivity analysis in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Hirschberg, S.; Knochenhauer, M.

    1987-01-01

    The paper describes several items suitable for close examination by means of sensitivity analysis when performing a level 1 PSA. Sensitivity analyses are performed with respect to: (1) boundary conditions, (2) operator actions, and (3) the treatment of common cause failures (CCFs). The items of main interest are identified continuously in the course of performing a PSA, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PSA study (ASEA-ATOM BWR 75). It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (orig./HP)

  12. Automated sensitivity analysis using the GRESS language

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.; Wright, R.Q.

    1986-04-01

    An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies
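
    The derivative-augmentation idea behind GRESS (source-to-source computer calculus) can be illustrated generically with forward-mode dual numbers, which carry a value and its derivative through a calculation together. This is a sketch of the general technique only, not of GRESS's actual FORTRAN precompiler output; the `model` function is illustrative:

```python
class Dual:
    """Forward-mode AD value: carries f and df/dx together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def model(x):
    # Stand-in for code a precompiler would instrument: y = 3x^2 + 2x + 1
    return 3 * x * x + 2 * x + 1

seed = Dual(2.0, 1.0)   # evaluate at x = 2, seeding dx/dx = 1
y = model(seed)         # y.val = 17.0, y.der = dy/dx = 14.0
```

    A source-transformation tool like GRESS achieves the same effect by rewriting the program text rather than overloading operators, which is what makes it applicable to legacy FORTRAN codes such as SWENT.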

  13. Preliminary analysis of K-DEMO thermal hydraulic system using MELCOR; Parametric study of hydrogen explosion

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Sung Bo; Lim, Soo Min; Bang, In Cheol [UNIST, Ulsan (Korea, Republic of)

    2016-10-15

    K-DEMO (Korean fusion demonstration reactor) is a future reactor for commercializing fusion power generation. The design of K-DEMO is similar to that of ITER, but its fusion energy generation is much larger because ITER is an experimental reactor. For this reason, K-DEMO sustains more fusion reactions with a larger amount of tritium. Higher fusion power means more neutron generation, which can irradiate the structures around the fusion plasma. A fusion reactor can release many kinds of radioactive material in an accident. Because of this hazard, a preliminary safety analysis is mandatory before construction. Concern over fusion/fission reactor accidents has grown since the Fukushima accident, a severe accident triggered by an unexpected disaster. To model the primary heat transfer system, this study refers to MARS-KS thermal hydraulic analyses. Lee et al. and Kim et al. conducted thermal hydraulic analyses using MARS-KS and multiple-module simulation to deal with the phenomena of first wall corrosion for each plasma pulse. This study shows the relationship between the vacuum vessel rupture area and source term leakage after a hydrogen explosion. For a conservative analysis, first wall heating is not terminated, because the heating inside the vacuum vessel (VV) increases the pressure inside the VV. The pressurizer, steam generator and turbine are assumed undamaged. Following the ITER guideline, 6.69 kg of tritiated water (HTO) and 1 ton of dust are modeled. The entire system of K-DEMO is smaller than that of ITER; for this reason, a large amount of aerosol is released into the environment even though safety systems such as the DS are maintained. This result shows that K-DEMO should employ substantially more extensive safety systems.

  14. Non-parametric trend analysis of the aridity index for three large arid and semi-arid basins in Iran

    Science.gov (United States)

    Ahani, Hossien; Kherad, Mehrzad; Kousari, Mohammad Reza; van Roosmalen, Lieke; Aryanfar, Ramin; Hosseini, Seyyed Mashaallah

    2013-05-01

    Currently, an important scientific challenge that researchers are facing is to gain a better understanding of climate change at the regional scale, which can be especially challenging in an area with low and highly variable precipitation amounts such as Iran. Trend analysis of the medium-term change using ground station observations of meteorological variables can enhance our knowledge of the dominant processes in an area and contribute to the analysis of future climate projections. Generally, studies focus on the long-term variability of temperature and precipitation and to a lesser extent on other important parameters such as moisture indices. In this study the recent 50-year trends (1955-2005) of precipitation (P), potential evapotranspiration (PET), and aridity index (AI) at the monthly time scale were studied at 14 synoptic stations in three large Iranian basins using the Mann-Kendall non-parametric test. Additionally, an analysis of the monthly, seasonal and annual trend of each parameter was performed. Results showed no significant trends in the monthly time series. However, PET showed significant, mostly decreasing trends, for the seasonal values, which resulted in a significant negative trend in annual PET at five stations. Significant negative trends in seasonal P values were only found at a number of stations in spring and summer and no station showed significant negative trends in annual P. Due to the varied positive and negative trends in annual P and to a lesser extent PET, almost as many stations with negative as positive trends in annual AI were found, indicating that both drying and wetting trends occurred in Iran. Overall, the northern part of the study area showed an increasing trend in annual AI, which meant that the region became wetter, while the south showed decreasing trends in AI.
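
    The Mann-Kendall test applied above is straightforward to implement. A minimal sketch (tie correction omitted; the series below is an illustrative upward-trending aridity index, not the study's station data):

```python
import math

def mann_kendall(x):
    """Two-sided Mann-Kendall trend test (no tie correction)."""
    n = len(x)
    # S statistic: sum of sign(x_j - x_i) over all pairs with j > i
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    # Continuity-corrected normal approximation for the Z score
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # Two-sided p-value from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return s, z, p

# Illustrative 20-year annual series with a steady upward trend
ai = [0.21 + 0.01 * t for t in range(20)]
s, z, p = mann_kendall(ai)   # strong positive trend, tiny p-value
```

    A positive Z with small p indicates a wetting (increasing AI) trend; a negative Z, a drying one.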

  15. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  16. Parametric study on single shot peening by dimensional analysis method incorporated with finite element method

    Science.gov (United States)

    Wu, Xian-Qian; Wang, Xi; Wei, Yan-Peng; Song, Hong-Wei; Huang, Chen-Guang

    2012-06-01

    Shot peening is a widely used surface treatment method by generating compressive residual stress near the surface of metallic materials to increase fatigue life and resistance to corrosion fatigue, cracking, etc. Compressive residual stress and dent profile are important factors to evaluate the effectiveness of shot peening process. In this paper, the influence of dimensionless parameters on maximum compressive residual stress and maximum depth of the dent were investigated. Firstly, dimensionless relations of processing parameters that affect the maximum compressive residual stress and the maximum depth of the dent were deduced by dimensional analysis method. Secondly, the influence of each dimensionless parameter on dimensionless variables was investigated by the finite element method. Furthermore, related empirical formulas were given for each dimensionless parameter based on the simulation results. Finally, comparison was made and good agreement was found between the simulation results and the empirical formula, which shows that a useful approach is provided in this paper for analyzing the influence of each individual parameter.

  17. Analysis of a continuous-variable quadripartite cluster state from a single optical parametric oscillator

    International Nuclear Information System (INIS)

    Midgley, S. L. W.; Olsen, M. K.; Bradley, A. S.; Pfister, O.

    2010-01-01

    We examine the feasibility of generating continuous-variable multipartite entanglement in an intracavity concurrent downconversion scheme that has been proposed for the generation of cluster states by Menicucci et al. [Phys. Rev. Lett. 101, 130501 (2008)]. By calculating optimized versions of the van Loock-Furusawa correlations we demonstrate genuine quadripartite entanglement and investigate the degree of entanglement present. Above the oscillation threshold the basic cluster state geometry under consideration suffers from phase diffusion. We alleviate this problem by incorporating a small injected signal into our analysis. Finally, we investigate squeezed joint operators. While the squeezed joint operators approach zero in the undepleted regime, we find that this is not the case when we consider the full interaction Hamiltonian and the presence of a cavity. In fact, we find that the decay of these operators is minimal in a cavity, and even depletion alone inhibits cluster state formation.

  18. Parametric Analysis of PWR Spent Fuel Depletion Parameters for Long-Term-Disposal Criticality Safety

    International Nuclear Information System (INIS)

    DeHart, M.D.

    1999-01-01

    Utilization of burnup credit in criticality safety analysis for long-term disposal of spent nuclear fuel allows improved design efficiency and reduced cost due to the large mass of fissile material that will be present in the repository. Burnup-credit calculations are based on depletion calculations that provide a conservative estimate of spent fuel contents (in terms of criticality potential), followed by criticality calculations to assess the value of the effective neutron multiplication factor (k_eff) for a spent fuel cask or a fuel configuration under a variety of probabilistically derived events. In order to ensure that the depletion calculation is conservative, it is necessary to both qualify and quantify the assumptions that can be made in depletion models.

  19. Evaluation of endogenous contamination using hair as biomonitor by K0 parametric neutron activation analysis

    International Nuclear Information System (INIS)

    Menezes, Maria Angela de B.C.; Neves, Otaviano F.; Batista, Jose R.; Maia, Elene Cristina P.

    2000-01-01

    The work environment is an important source of pollutant exposure for human beings. The main goal of this paper is to survey exposures to metals related to occupational diseases. Hair samples, used as biomonitors, were donated by galvanizing-factory workers in Belo Horizonte. The samples for a Comparative Group were collected from individuals not exposed to a specific environment. The k0-neutron activation analysis method was applied for the elemental determination. The Comparative Group presented no significant difference compared to the literature, whereas the very high values exhibited by the Workers' Group suggest endogenous contamination. The GBW09101 'Human Hair' reference material (Shanghai Institute of Nuclear Research, China) was also evaluated, presenting good agreement with certified and information values. The elements Ag, Al, As, Au, Br, Cl, Co, Cr, Cu, Fe, Hf, Hg, K, Mn, Na, Sb, Sc, Ta, Ti, V and Zn were determined. (author)

  20. Modeling and parametric analysis of hollow fiber membrane system for carbon capture from multicomponent flue gas

    KAUST Repository

    Khalilpour, Rajab

    2011-08-12

    The modeling and optimal design/operation of gas membranes for postcombustion carbon capture (PCC) is presented. A systematic methodology is presented for the analysis of membrane systems considering a multicomponent flue gas with CO2 as the target component. Common simplifying assumptions are avoided, namely representing the multicomponent flue gas by a CO2/N2 binary mixture or treating the co-/countercurrent flow pattern of the hollow-fiber membrane system as mixed flow. Optimal regions of flue gas pressures and membrane area were found within which a technoeconomical process system design could be carried out. High selectivity was found to not necessarily have a notable impact on PCC membrane performance; rather, a medium selectivity combined with medium or high permeance could be more advantageous. © 2011 American Institute of Chemical Engineers (AIChE).
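
    The binary, perfectly-mixed ("mixed flow") permeation model is precisely the simplification this study avoids; writing it down makes the simplification concrete. The sketch below solves the standard perfect-mixing permeate-composition balance for an illustrative 13% CO2 flue gas; the selectivity and pressure-ratio values are assumptions for demonstration, not the paper's design point:

```python
def permeate_fraction(x, alpha, phi, tol=1e-12):
    """Solve y/(1-y) = alpha*(x - phi*y) / ((1-x) - phi*(1-y)) by bisection.

    x: CO2 mole fraction on the feed side; y: on the permeate side;
    alpha: CO2/N2 selectivity; phi: permeate-to-feed pressure ratio.
    """
    def f(y):
        # Residual of the cross-multiplied balance; f changes sign at the root
        return y * ((1 - x) - phi * (1 - y)) - alpha * (1 - y) * (x - phi * y)

    lo, hi = 0.0, 1.0   # f(0) < 0 and f(1) > 0 for physical inputs
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative numbers: 13% CO2 flue gas, selectivity 50, pressure ratio 0.1
y = permeate_fraction(x=0.13, alpha=50.0, phi=0.1)
```

    Even in this crude model one can see the paper's qualitative point: raising selectivity enriches the permeate with diminishing returns once the pressure ratio, not the membrane, limits separation.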

  1. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    Science.gov (United States)

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  2. A critique of non-parametric efficiency analysis in energy economics studies

    International Nuclear Information System (INIS)

    Chen, Chien-Ming

    2013-01-01

    The paper reexamines non-additive environmental efficiency models with weakly-disposable undesirable outputs appeared in the literature of energy economics. These efficiency models are used in numerous studies published in this journal and other energy-related outlets. Recent studies, however, have found key limitations of the weak-disposability assumption in its application to environmental efficiency analysis. It is found that efficiency scores obtained from non-additive efficiency models can be non-monotonic in pollution quantities under the weak-disposability assumption — which is against common intuition and the principle of environmental economics. In this paper, I present taxonomy of efficiency models found in the energy economics literature and illustrate the above limitations and discuss implications of monotonicity from a practical viewpoint. Finally, I review the formulations for a variable returns-to-scale technology with weakly-disposable undesirable outputs, which has been misused in a number of papers in the energy economics literature. An application to evaluating the energy efficiencies of 23 European Union states is presented to illustrate the problem. - Highlights: • Review different environmental efficiency model used in energy economics studies • Highlight limitations of these environmental efficiency models • These limitations have not been recognized in the existing energy economics literature. • Data from 23 European Union states are used to illustrate the methodological consequences

  3. Power line conductor icing prevention by the Joule effect : parametric analysis and energy requirements

    Energy Technology Data Exchange (ETDEWEB)

    Peter, Z.; Farzaneh, M.; Kiss, L.I. [Quebec Univ., Chicoutimi, PQ (Canada). Industrial Chair on Atmospheric Icing of Power Network Equipment

    2005-07-01

    A mathematical model to calculate the minimum current intensity needed to prevent potentially damaging ice accretion on power line conductors was presented. The influence of atmospheric parameters such as wind speed, air temperature and liquid water content was considered. Energy analysis was developed for an aluminum and steel reinforced conductor with circular cylindrical wire and concentric layers. Atmospheric parameters and the duration of the freezing conditions were considered with reference to the Joule effect. The model was then compared with experiments and simulations performed at an icing wind tunnel and in a climate room. It was determined that the equivalent thermal conductivity of the conductor should be assessed to identify the temperature distribution in the power line conductor. The radial component of the thermal conductivity was estimated on the basis of experiments performed in the wind tunnel, which provided a good estimation of the equivalent thermal conductivity and overall heat transfer coefficient around the stranded conductor. Experimental results were compared with values obtained from theoretically equivalent conductivity models. It was observed that the convective heat transfer coefficients around stranded conductors were higher than around smooth cylinders, and that the mathematical calculations slightly overestimated the wind tunnel measurements due to difficulties in estimating the wetted surface and the overall convection heat transfer coefficient around a stranded conductor. The typical range for the equivalent thermal conductivity of stranded conductors was also presented. 13 refs., 1 tab., 11 figs.

  4. Parametric time series analysis of geoelectrical signals: an application to earthquake forecasting in Southern Italy

    Directory of Open Access Journals (Sweden)

    V. Tramutoli

    1996-06-01

    An autoregressive model was selected to describe geoelectrical time series. An objective technique was subsequently applied to analyze and discriminate values above (or below) an a priori fixed threshold possibly related to seismic events. A complete check of the model and the main guidelines to estimate the occurrence probability of extreme events are reported. A first application of the proposed technique is discussed through the analysis of the experimental data recorded by an automatic station located in Tito, a small town on the Apennine chain in Southern Italy. This region was hit by the November 1980 Irpinia-Basilicata earthquake and is one of the most seismically active areas of the Mediterranean region. After a preliminary filtering procedure to reduce the influence of external parameters (i.e., meteo-climatic effects), it was demonstrated that the geoelectrical residual time series are well described by a second-order autoregressive model. Our findings outline a statistical methodology to evaluate the efficiency of electrical seismic precursors.
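
    A second-order autoregressive model of the kind fitted here can be estimated by ordinary least squares: regress each value on its two predecessors. A minimal sketch on synthetic data (the coefficients and series are illustrative, not the Tito station residuals):

```python
import numpy as np

rng = np.random.default_rng(42)
a1, a2 = 0.6, -0.3          # illustrative true AR(2) coefficients
n = 2000

# Simulate x_t = a1*x_{t-1} + a2*x_{t-2} + eps_t with white Gaussian noise
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal()

# Least-squares fit: regress x_t on (x_{t-1}, x_{t-2})
X = np.column_stack([x[1:-1], x[:-2]])   # columns: lag-1, lag-2
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a1_hat, a2_hat = coef
```

    With the model in hand, the residuals can be checked for whiteness, and exceedances of an a priori threshold can be assigned occurrence probabilities, as the abstract describes.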

  5. Parametric analysis and design of a screw extruder for slightly non-Newtonian (pseudoplastic) materials

    Directory of Open Access Journals (Sweden)

    J.I. Orisaleye

    2018-04-01

    Extruders have found application in the food, polymer and pharmaceutical industries. The rheological characteristics of materials are important in the specification of design parameters of screw extruders. Biopolymers, which consist of proteins, nucleic acids and polysaccharides, are shear-thinning (pseudoplastic) within normal operating ranges. However, analytical models to predict and design screw extruders for non-Newtonian pseudoplastic materials are rare. In this study, an analytical model suitable for designing a screw extruder for slightly non-Newtonian materials was developed. The model was used to predict the performance of the screw extruder while processing materials with power law indices slightly deviating from unity (the Newtonian case). Using non-dimensional analysis, the effects of design and operational parameters were investigated. Expressions to determine the optimum channel depth and helix angle were also derived. The model is capable of predicting the performance of the screw extruder within the range of power law indices considered (1/2 ⩽ n ⩽ 1). The power law index influences the choice of the optimum channel depth and helix angle of the screw extruder. Keywords: Screw extruder, Slightly non-Newtonian, Shear-thinning, Pseudoplastic, Biopolymer, Power law

  6. Parametric analysis of the thermodynamic properties for a medium with strong interaction between particles

    International Nuclear Information System (INIS)

    Dubovitskii, V.A.; Pavlov, G.A.; Krasnikov, Yu.G.

    1996-01-01

    Thermodynamic analysis of media with strong interparticle (Coulomb) interaction is presented. A method for constructing isotherms is proposed for a medium described by a closed multicomponent thermodynamic model. The method is based on choosing an appropriate nondegenerate frame of reference in the extended space of thermodynamic variables and provides efficient thermodynamic calculations in a wide range of parameters, for an investigation of phase transitions of the first kind, and for determining both the number of phases and coexistence curves. A number of approximate thermodynamic models of hydrogen plasma are discussed. The approximation corresponding to the n^{5/2} law, in which the effects of particle attraction and repulsion are taken into account qualitatively, is studied. This approximation allows studies of thermodynamic properties of a substance for a wide range of parameters. In this approximation, for hydrogen at a constant temperature, various properties of the degree of ionization are revealed. In addition, the parameters of the second critical point are found under conditions corresponding to the Jovian interior

  7. Proposing a framework for airline service quality evaluation using Type-2 Fuzzy TOPSIS and non-parametric analysis

    Directory of Open Access Journals (Sweden)

    Navid Haghighat

    2017-12-01

    This paper focuses on evaluating airline service quality from the perspective of passengers. Although much research on airline service quality evaluation has been performed worldwide, little has yet been conducted in Iran. In this study, a framework for measuring airline service quality in Iran is proposed. After reviewing airline service quality criteria, the SSQAI model was selected because of its comprehensiveness in covering airline service quality dimensions. The SSQAI questionnaire items were redesigned to fit Iranian airlines' requirements and the environmental circumstances of Iran's economic and cultural context. This study draws on fuzzy decision-making theory, considering the possible fuzzy subjective judgment of the evaluators during airline service quality evaluation. Fuzzy TOPSIS has been applied for ranking airline service quality performances. Three major Iranian airlines with the largest passenger volumes in domestic and foreign flights were chosen for evaluation in this research. Results demonstrated that Mahan airline achieved the best service quality performance rank in gaining passengers' satisfaction through the delivery of high-quality services, among the three major Iranian airlines. IranAir and Aseman airlines placed second and third, respectively, according to the passengers' evaluations. Statistical analysis was used to analyze passenger responses. Because the data were not normally distributed, non-parametric tests were applied. To demonstrate airline ranks in each criterion separately, the Friedman test was performed. Variance analysis and the Tukey test were applied to study the influence of increasing age and educational level of passengers on the degree of their satisfaction with airline service quality. Results showed that age has no significant relation to passenger satisfaction with airlines; however, an increasing educational level demonstrated a negative impact on satisfaction.
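
    The ranking step of classical (type-1) TOPSIS can be sketched in a few lines; the type-2 fuzzy variant used in the study layers fuzzy-number arithmetic and type reduction on top of the same closeness-coefficient idea. The decision matrix and weights below are illustrative, not the study's survey data:

```python
import numpy as np

# Rows: alternatives (e.g. three airlines); columns: benefit-type criteria
scores = np.array([
    [9.0, 8.0, 9.0, 8.5],   # alternative 0 dominates on every criterion
    [7.0, 6.5, 8.0, 7.0],
    [6.0, 7.0, 6.5, 6.0],
])
weights = np.array([0.4, 0.3, 0.2, 0.1])

# 1) Vector-normalize each criterion column, 2) apply criterion weights
norm = scores / np.linalg.norm(scores, axis=0)
v = norm * weights

# 3) Ideal and anti-ideal solutions (all criteria benefit-type here)
ideal, anti = v.max(axis=0), v.min(axis=0)

# 4) Euclidean distances to both, then closeness coefficient in [0, 1]
d_pos = np.linalg.norm(v - ideal, axis=1)
d_neg = np.linalg.norm(v - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
ranking = np.argsort(-closeness)   # best alternative first
```

    An alternative that is best on every criterion coincides with the ideal solution and gets closeness 1; the Friedman and Tukey analyses in the study then operate on the per-criterion scores separately.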

  8. Elastic full-waveform inversion and parametrization analysis applied to walk-away vertical seismic profile data for unconventional (heavy oil) reservoir characterization

    Science.gov (United States)

    Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu

    2018-06-01

    Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter trade-off, arising from the simultaneous variations of different physical parameters, which increases the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parametrization and acquisition arrangement. An appropriate choice of model parametrization is important to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parametrizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) data for unconventional heavy oil reservoir characterization. Six model parametrizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I′_P) and velocity-impedance-II (α″, β″ and I′_S). We begin analysing the interparameter trade-off by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. We discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter trade-offs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter trade-offs for various model parametrizations. Density profiles are most strongly influenced by the interparameter contaminations, with the severity depending on the model parametrization.

  9. Parametric and quantitative analysis of MR renographic curves for assessing the functional behaviour of the kidney

    Energy Technology Data Exchange (ETDEWEB)

    Michoux, N.; Montet, X.; Pechere, A.; Ivancevic, M.K.; Martin, P.-Y.; Keller, A.; Didier, D.; Terrier, F.; Vallee, J.-P

    2005-04-01

    The aim of this study was to refine the description of renal function based on MR images and transit-time curve analysis in a normal population and in a population with renal failure, using the quantitative up-slope model. Thirty patients referred for a kidney MR exam were divided into a first population with well-functioning kidneys and a second population with renal failure from ischaemic kidney disease. The perfusion sequence consisted of an intravenous injection of Gd-DTPA and a fast GRE sequence T1-TFE with 90° magnetisation preparation (Intera 1.5 T MR System, Philips Medical System). To convert the signal intensity into 1/T1, which is proportional to the contrast media concentration, a flow-corrected calibration procedure was used. Following segmentation of regions of interest in the cortex and medulla of the kidney and in the abdominal aorta, outflow curves were obtained and filtered to remove high-frequency fluctuations. The up-slope model was then applied. Significant reductions of the cortical perfusion (Q_c = 0.057±0.030 ml/(s·100 g) to Q_c = 0.030±0.017 ml/(s·100 g), P<0.013), of the medullary perfusion (Q_m = 0.023±0.018 ml/(s·100 g) to Q_m = 0.011±0.006 ml/(s·100 g), P<0.046) and of the accumulation of contrast media in the medulla (Q_a = 0.005±0.003 ml/(s·100 g) to Q_a = 0.0009±0.0008 ml/(s·100 g), P<0.001) were found in the presence of renal failure. High correlations were found between the creatinine level and the accumulation Q_a in the medulla (r² = 0.72, P<0.05), and between the perfusion ratio Q_c/Q_m and the accumulation Q_a in the medulla (r² = 0.81, P<0.05). No significant difference in times to peak was found between the two populations, despite a trend for T_a, the time to the end of the increasing contrast accumulation period in the medulla, to arrive later in renal failure. Advances in MR signal calibration with the building of

  10. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling, though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (the Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below those that actually ensure convergence of ranking and screening.
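    The bootstrap convergence check described above can be sketched in a few lines. The model and the correlation-based index below are toy stand-ins (the study itself uses hydrological models and methods such as the Elementary Effect Test), chosen only to show how resampling the same model evaluations yields confidence intervals for the indices and a ranking-stability measure at no extra model cost:

    ```python
    import random
    import statistics

    random.seed(0)

    def model(x1, x2):
        # toy stand-in for an expensive hydrological simulator
        return 4.0 * x1 + x2

    def indices(sample):
        # crude first-order index: squared correlation of each input with the output
        ys = [model(x1, x2) for x1, x2 in sample]
        return [statistics.correlation([s[k] for s in sample], ys) ** 2
                for k in (0, 1)]

    sample = [(random.random(), random.random()) for _ in range(2000)]

    # bootstrap: resample the sample with replacement and recompute the indices
    boot = [indices([random.choice(sample) for _ in sample]) for _ in range(200)]

    for k in (0, 1):
        vals = sorted(b[k] for b in boot)
        print(f"S{k + 1} ~ {vals[100]:.2f}, 95% CI width {vals[194] - vals[4]:.3f}")

    # ranking has converged if x1 outranks x2 in (nearly) every bootstrap replicate
    rank_stable = sum(b[0] > b[1] for b in boot) / len(boot)
    ```

    Narrow confidence intervals and a stable ranking across replicates would justify stopping the sampling; wide intervals call for a larger sample.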

  11. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.
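    GRESS performs source-level transformation of FORTRAN code; a minimal way to convey the idea of "computer calculus" in Python is forward-mode automatic differentiation with dual numbers. This is a hand-rolled illustration of the concept, not the GRESS mechanism:

    ```python
    class Dual:
        """Forward-mode AD value: carries f and df/dx through the computation."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der

        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__

        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            # product rule: (fg)' = f'g + fg'
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def model(k):
        # toy response y = k^2 + 3k, standing in for an iterated simulation code
        return k * k + 3 * k

    seed = Dual(2.0, 1.0)   # seed dk/dk = 1 to propagate the derivative
    y = model(seed)
    print(y.val, y.der)     # value k^2 + 3k = 10.0, derivative 2k + 3 = 7.0
    ```

    The sensitivity arrives alongside the value with no finite-difference step-size tuning, which is the advantage the abstract attributes to the compiler-generated derivatives.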

  12. Parametric exergy analysis of a tubular Solid Oxide Fuel Cell (SOFC) stack through finite-volume model

    International Nuclear Information System (INIS)

    Calise, F.; Ferruzzi, G.; Vanoli, L.

    2009-01-01

    This paper presents a very detailed local exergy analysis of a tubular Solid Oxide Fuel Cell (SOFC) stack. In particular, a complete parametric analysis has been carried out in order to assess the effects of the synthesis/design parameters on the local irreversibilities in the components of the stack. A finite-volume axial-symmetric model of the tubular internal-reforming SOFC stack under investigation has been used. The stack consists of SOFC tubes, a tube-in-tube pre-reformer and a tube-and-shell catalytic burner. The model takes into account the effects of heat/mass transfer and chemical/electrochemical reactions. The model allows one not only to predict the performance of a SOFC stack once a series of design and operating parameters are fixed, but also to investigate the source and localization of inefficiency. To this end, an exergy analysis was implemented. The SOFC tube, the pre-reformer and the catalytic burner are discretized along their longitudinal axes. Detailed models of the kinetics of the reforming, catalytic combustion and electrochemical reactions are implemented. Pressure drops, convection heat transfer and overvoltages are calculated on the basis of the work previously developed by the authors. The heat transfer model includes the contribution of thermal radiation, improving the models previously used by the authors. Radiative heat transfer is calculated on the basis of slice-to-slice configuration factors and the corresponding radiosities. On the basis of this thermochemical model, an exergy analysis has been carried out in order to localize the sources and magnitudes of irreversibilities along the components of the stack. In addition, the main synthesis/design variables were varied in order to assess their effect on the exergy destruction within the component to which each parameter directly refers (the 'endogenous' contribution) and on the exergy destruction of all remaining components (the 'exogenous' contribution). Then, this analysis...

  13. HOMOGENEOUS UGRIZ PHOTOMETRY FOR ACS VIRGO CLUSTER SURVEY GALAXIES: A NON-PARAMETRIC ANALYSIS FROM SDSS IMAGING

    International Nuclear Information System (INIS)

    Chen, Chin-Wei; Cote, Patrick; Ferrarese, Laura; West, Andrew A.; Peng, Eric W.

    2010-01-01

    We present photometric and structural parameters for 100 ACS Virgo Cluster Survey (ACSVCS) galaxies based on homogeneous, multi-wavelength (ugriz), wide-field SDSS (DR5) imaging. These early-type galaxies, which trace out the red sequence in the Virgo Cluster, span a factor of nearly ∼10^3 in g-band luminosity. We describe an automated pipeline that generates background-subtracted mosaic images, masks field sources and measures mean shapes, total magnitudes, effective radii, and effective surface brightnesses using a model-independent approach. A parametric analysis of the surface brightness profiles is also carried out to obtain Sersic-based structural parameters and mean galaxy colors. We compare the galaxy parameters to those in the literature, including those from the ACSVCS, finding good agreement in most cases, although the sizes of the brightest, and most extended, galaxies are found to be most uncertain and model dependent. Our photometry provides an external measurement of the random errors on total magnitudes from the widely used Virgo Cluster Catalog, which we estimate to be σ(B_T) ∼ 0.13 mag for the brightest galaxies, rising to ∼0.3 mag for galaxies at the faint end of our sample (B_T ∼ 16). The distribution of axial ratios of low-mass ("dwarf") galaxies bears a strong resemblance to the one observed for the higher-mass ("giant") galaxies. The global structural parameters for the full galaxy sample (profile shape, effective radius, and mean surface brightness) are found to vary smoothly and systematically as a function of luminosity, with unmistakable evidence for changes in structural homology along the red sequence. As noted in previous studies, the ugriz galaxy colors show a nonlinear but smooth variation over a ∼7 mag range in absolute magnitude, with an enhanced scatter for the faintest systems that is likely the signature of their more diverse star formation histories.

  14. Sensitivity Analysis Based on Markovian Integration by Parts Formula

    Directory of Open Access Journals (Sweden)

    Yongsheng Hang

    2017-10-01

    Full Text Available Sensitivity analysis is widely applied in financial risk management and engineering; it describes the variation in model outputs brought about by changes in the parameters. Since the integration-by-parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivities and give closed-form expressions for two commonly used continuous-time Markovian models. By comparison, we conclude that our approach outperforms the existing technique for computing sensitivities of Markovian models.

  15. Advanced Fuel Cycle Economic Sensitivity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    David Shropshire; Kent Williams; J.D. Smith; Brent Boore

    2006-12-01

    A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear power cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles, including: once-through, thermal with fast recycle, continuous fast recycle, and thermal recycle.

  16. The role of sensitivity analysis in assessing uncertainty

    International Nuclear Information System (INIS)

    Crick, M.J.; Hill, M.D.

    1987-01-01

    Outside the specialist world of those carrying out performance assessments, considerable confusion has arisen about the meanings of sensitivity analysis and uncertainty analysis. In this paper we attempt to reduce this confusion. We then go on to review approaches to sensitivity analysis within the context of assessing uncertainty, and to outline the types of test available to identify sensitive parameters, together with their advantages and disadvantages. The views expressed in this paper are those of the authors; they have not been formally endorsed by the National Radiological Protection Board and should not be interpreted as Board advice.

  17. Analysis of Sensitivity Experiments - An Expanded Primer

    Science.gov (United States)

    2017-03-08

    conducted with this purpose in mind. Due diligence must be paid to the structure of the dosage levels and to the number of trials. The chosen data...analysis. System reliability is of paramount importance for protecting both the investment of funding and human life. Failing to accurately estimate...

  18. Sensitivity analysis of hybrid thermoelastic techniques

    Science.gov (United States)

    W.A. Samad; J.M. Considine

    2017-01-01

    Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...

  19. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with 'direct' and 'adjoint' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.

  20. Robust Stability Clearance of Flight Control Law Based on Global Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Liuli Ou

    2014-01-01

    Full Text Available To validate the robust stability of the flight control system of a hypersonic flight vehicle, which suffers from a large number of parametric uncertainties, a new clearance framework based on structural singular value (μ) theory and global uncertainty sensitivity analysis (SA) is proposed. In this framework, SA serves as a preprocessing step for the uncertain model to be analysed, helping engineers determine which uncertainties affect the stability of the closed-loop system only slightly. By ignoring these unimportant uncertainties, the calculation of μ can be simplified. Instead of analysing the effect of uncertainties on μ, which involves solving optimisation problems repeatedly, a simpler stability analysis function that represents the effect of uncertainties on the closed-loop poles is proposed. Based on this stability analysis function, Sobol's method, the most widely used global SA method, is extended and applied to the new clearance framework, owing to its suitability for systems with strong nonlinearity and input factors that vary over large intervals or follow random distributions. In this method, the sensitivity indices can be estimated conveniently via Monte Carlo simulation. An example is given to illustrate the efficiency of the proposed method.
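    A variance-based step of the kind Sobol's method performs can be sketched with the standard pick-and-freeze Monte Carlo estimator for first-order indices. The response function below is a hypothetical stand-in for the stability analysis function, chosen so that the first factor dominates:

    ```python
    import random

    random.seed(1)

    def f(x):
        # toy "stability margin": strongly driven by x1, weakly by x2
        return 5.0 * x[0] + 1.0 * x[1] + 0.5 * x[0] * x[1]

    d, n = 2, 20000
    A = [[random.random() for _ in range(d)] for _ in range(n)]
    B = [[random.random() for _ in range(d)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n

    S = []
    for i in range(d):
        # A_B^(i): matrix A with column i taken from B (pick-and-freeze design)
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(yb * (yabi - ya)
                     for ya, yb, yabi in zip(fA, fB, fABi)) / n / var)

    print(S)  # S1 dominates S2
    ```

    A factor whose index falls below a screening threshold would then be frozen at its nominal value before the μ computation.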

  1. Sensitivity analysis of the relationship between disease occurrence and distance from a putative source of pollution

    Directory of Open Access Journals (Sweden)

    Emanuela Dreassi

    2008-05-01

    Full Text Available The relation between disease risk and a point source of pollution is usually investigated using distance from the source as a proxy of exposure. The analysis may be based on case-control data or on aggregated data. The definition of the function relating risk of disease and distance is critical, both in a classical and in a Bayesian framework, because the likelihood is usually very flat, even with large amounts of data. In this paper we investigate how the specification of the function relating risk of disease with distance from the source, and of the prior distributions on the parameters of the function, affects the results when case-control data and Bayesian methods are used. We consider different popular parametric models for the risk-distance function in a Bayesian approach, comparing estimates with those derived by maximum likelihood. As an example we have analyzed the relationship between a putative source of environmental pollution (an asbestos cement plant) and the occurrence of pleural malignant mesothelioma in the area of Casale Monferrato (Italy) in 1987-1993. Risk of pleural malignant mesothelioma turns out to be strongly related to distance from the asbestos cement plant. However, as the models appeared to be sensitive to modeling choices, we suggest that any analysis of disease risk around a putative source should be integrated with a careful sensitivity analysis and possibly with prior knowledge. The choice of prior distribution is extremely important and should be based on epidemiological considerations.

  2. Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics

    DEFF Research Database (Denmark)

    Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter

    2014-01-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...

  3. Global and Local Sensitivity Analysis Methods for a Physical System

    Science.gov (United States)

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…
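    The classic contrast between the two approaches can be shown on f(x) = x^2 over [-1, 1]: the local derivative at the nominal point x0 = 0 suggests no sensitivity at all, while a global (variance-based) view reports substantial output variability. A toy sketch:

    ```python
    import random

    random.seed(2)

    f = lambda x: x * x

    # local view: centred finite difference at the nominal point x0 = 0
    h, x0 = 1e-6, 0.0
    local = (f(x0 + h) - f(x0 - h)) / (2 * h)   # ~0: f looks insensitive locally

    # global view: output variance when x sweeps its whole range
    xs = [random.uniform(-1, 1) for _ in range(100000)]
    ys = [f(x) for x in xs]
    m = sum(ys) / len(ys)
    global_var = sum((y - m) ** 2 for y in ys) / len(ys)   # Var(x^2) = 4/45

    print(local, global_var)
    ```

    The local estimate is (essentially) zero while the global variance is about 0.089, which is exactly the kind of disagreement that motivates choosing the analysis scope carefully.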

  4. Adjoint sensitivity analysis of high frequency structures with Matlab

    CERN Document Server

    Bakr, Mohamed; Demir, Veysel

    2017-01-01

    This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.

  5. Power of non-parametric linkage analysis in mapping genes contributing to human longevity in long-lived sib-pairs

    DEFF Research Database (Denmark)

    Tan, Qihua; Zhao, J H; Iachine, I

    2004-01-01

    This report investigates the power issue in applying the non-parametric linkage analysis of affected sib-pairs (ASP) [Kruglyak and Lander, 1995: Am J Hum Genet 57:439-454] to localize genes that contribute to human longevity using long-lived sib-pairs. Data were simulated by introducing a recently...... developed statistical model for measuring marker-longevity associations [Yashin et al., 1999: Am J Hum Genet 65:1178-1193], enabling direct power comparison between linkage and association approaches. The non-parametric linkage (NPL) scores estimated in the region harboring the causal allele are evaluated...... in case of a dominant effect. Although the power issue may depend heavily on the true genetic nature in maintaining survival, our study suggests that results from small-scale sib-pair investigations should be referred with caution, given the complexity of human longevity....

  6. On Parametric (and Non-Parametric) Variation

    Directory of Open Access Journals (Sweden)

    Neil Smith

    2009-11-01

    Full Text Available This article raises the issue of the correct characterization of ‘Parametric Variation’ in syntax and phonology. After specifying their theoretical commitments, the authors outline the relevant parts of the Principles–and–Parameters framework, and draw a three-way distinction among Universal Principles, Parameters, and Accidents. The core of the contribution then consists of an attempt to provide identity criteria for parametric, as opposed to non-parametric, variation. Parametric choices must be antecedently known, and it is suggested that they must also satisfy seven individually necessary and jointly sufficient criteria. These are that they be cognitively represented, systematic, dependent on the input, deterministic, discrete, mutually exclusive, and irreversible.

  7. Dispersion sensitivity analysis & consistency improvement of APFSDS

    Directory of Open Access Journals (Sweden)

    Sangeeta Sharma Panda

    2017-08-01

    In-bore balloting motion simulation shows that a reduction in residual spin of about 5% results in a drastic 56% reduction in first maximum yaw. A correlation between first maximum yaw and residual spin is observed. The results of the data analysis are used in design modifications of the existing ammunition. A number of designs were evaluated numerically before five designs were frozen for further study. These designs were critically assessed in terms of their comparative performance during the in-bore travel and external ballistics phases. The results were validated by free-flight trials of the finalised design.

  8. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    Science.gov (United States)

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.

  9. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling.

    Science.gov (United States)

    Núñez, M; Robie, T; Vlachos, D G

    2017-10-28

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
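    The likelihood ratio (score function) method mentioned above can be illustrated on a single exponential event time, a drastic simplification of a KMC reaction network. The score of Exp(k) is d log p/dk = 1/k - t, so the derivative of an expectation is estimated from samples of the unperturbed process alone, with no rerun at perturbed rate constants:

    ```python
    import random

    random.seed(3)

    k = 2.0                       # rate constant of one exponential event time
    n = 200000
    ts = [random.expovariate(k) for _ in range(n)]

    # likelihood ratio estimator of d E[T] / dk:
    #   d/dk E[T] = E[T * d log p(T; k)/dk] = E[T * (1/k - T)]
    lr = sum(t * (1.0 / k - t) for t in ts) / n

    print(lr)                     # analytic value is -1/k**2 = -0.25
    ```

    In a real KMC setting the same idea applies per elementary step, which is why the abstract stresses that stiffness (rarely fired steps) must be tamed before the estimator's variance becomes manageable.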

  10. Sensitivity analysis of the RESRAD, a dose assessment code

    International Nuclear Information System (INIS)

    Yu, C.; Cheng, J.J.; Zielen, A.J.

    1991-01-01

    The RESRAD code is a pathway analysis code that is designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters, such as soil properties and food ingestion rates, in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) a built-in graphics package that shows parameter sensitivities while the RESRAD code is operational.
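    Method (2), direct perturbation, reduces to re-running the model with each input nudged by a small fraction. The dose function below is a hypothetical stand-in for a RESRAD pathway calculation, used only to show the normalized sensitivity coefficient (relative output change per relative input change):

    ```python
    def dose(params):
        # hypothetical toy pathway calculation, not the RESRAD formula
        return params["ingestion"] * params["concentration"] / params["density"]

    base = {"ingestion": 100.0, "concentration": 2.0, "density": 1.6}
    y0 = dose(base)

    # normalized sensitivity coefficient: (dY/Y) / (dX/X) from a 1% perturbation
    sens = {}
    for name in base:
        p = dict(base)
        p[name] *= 1.01
        sens[name] = (dose(p) - y0) / y0 / 0.01

    for name, s in sens.items():
        print(f"{name}: {s:+.2f}")
    ```

    Coefficients near +1 or -1 flag inputs whose relative errors pass straight through to the dose, which is the ranking such a perturbation study produces.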

  11. A sensitivity analysis approach to optical parameters of scintillation detectors

    International Nuclear Information System (INIS)

    Ghal-Eh, N.; Koohi-Fayegh, R.

    2008-01-01

    In this study, an extended version of the Monte Carlo light transport code PHOTRACK has been used for a sensitivity analysis to estimate the importance of different wavelength-dependent parameters in the modelling of the light collection process in scintillators.

  12. sensitivity analysis on flexible road pavement life cycle cost model

    African Journals Online (AJOL)

    user

    of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study .... organizations and specific projects needs based. Life-cycle ... developed and completed urban road infrastructure corridor ...

  13. Sobol’ sensitivity analysis for stressor impacts on honeybee colonies

    Science.gov (United States)

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...

  14. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what if-or sensitivity analysis in simulation.This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as

  15. Sensitivity analysis of numerical solutions for environmental fluid problems

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Motoyama, Yasunori

    2003-01-01

    In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. Once a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate the results for other numerical parameters, such as more finely resolved solutions. Furthermore, we can estimate the exact solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)
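    Estimating the error of a reference solution without extra fine-grid parameter studies is in the spirit of Richardson extrapolation; the following is a minimal sketch of that classical idea (not the authors' method) on trapezoidal integration, whose leading error is O(h^2):

    ```python
    import math

    def trap(f, a, b, n):
        # composite trapezoidal rule with n panels
        h = (b - a) / n
        return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

    # the same "reference case" at two resolutions
    coarse = trap(math.sin, 0.0, math.pi, 50)
    fine = trap(math.sin, 0.0, math.pi, 100)

    # with error ~ C*h^2, halving h cuts the error by 4, so
    # err_est approximates the remaining error of the fine solution
    err_est = (fine - coarse) / (2 ** 2 - 1)
    improved = fine + err_est            # extrapolated, sharper solution

    exact = 2.0                          # integral of sin on [0, pi]
    print(fine, err_est, improved)
    ```

    The error estimate both quantifies the reliability of the fine solution and, added back, yields a solution of higher order, which is the flavor of "estimating the exact solution" described in the abstract.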

  16. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization

    Science.gov (United States)

    Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.

    2014-01-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544

  17. Risk and sensitivity analysis in relation to external events

    International Nuclear Information System (INIS)

    Alzbutas, R.; Urbonas, R.; Augutis, J.

    2001-01-01

    This paper presents a risk and sensitivity analysis of external event impacts on safe operation in general, and in particular on the Ignalina Nuclear Power Plant safety systems. The analysis is based on deterministic and probabilistic assumptions and on assessment of the external hazards. Real statistical data are used, as well as initial external event simulation. Preliminary screening criteria are applied. The analysis of external event impact on safe NPP operation, assessment of event occurrence, sensitivity analysis, and recommendations for safety improvements are performed for the investigated external hazards. Events such as aircraft crash, extreme rains and winds, forest fire and flying turbine parts are analysed. Models are developed and probabilities are calculated. As an example of sensitivity analysis, the model of aircraft impact is presented. The sensitivity analysis takes into account the uncertainty features raised by an external event and its model. Even in cases where the external events analysis shows rather limited danger, the sensitivity analysis can identify the causes with the highest influence. These possible future variations can be significant for the safety level and for risk-based decisions. Calculations show that external events cannot significantly influence the safety level of Ignalina NPP operation; however, the occurrence and propagation of such events can be sufficiently uncertain. (author)

  18. High sensitivity analysis of atmospheric gas elements

    International Nuclear Information System (INIS)

    Miwa, Shiro; Nomachi, Ichiro; Kitajima, Hideo

    2006-01-01

    We have investigated the detection limits of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit at a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 x 10^17 atoms/cm^3 for H, 3 x 10^16 atoms/cm^3 for C and 2 x 10^16 atoms/cm^3 for O could be achieved in silicon with a sputtering rate of 2 nm/s after a primary ion bombardment of 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it.

  19. High sensitivity analysis of atmospheric gas elements

    Energy Technology Data Exchange (ETDEWEB)

    Miwa, Shiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan)]. E-mail: Shiro.Miwa@jp.sony.com; Nomachi, Ichiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan); Kitajima, Hideo [Nanotechnos Corp., 5-4-30 Nishihashimoto, Sagamihara 229-1131 (Japan)

    2006-07-30

    We have investigated the detection limits of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit at a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 x 10^17 atoms/cm^3 for H, 3 x 10^16 atoms/cm^3 for C and 2 x 10^16 atoms/cm^3 for O could be achieved in silicon with a sputtering rate of 2 nm/s after a primary ion bombardment of 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it.

  20. Sensitivity Analysis of BLISK Airfoil Wear †

    Directory of Open Access Journals (Sweden)

    Andreas Kellersmann

    2018-05-01

    Full Text Available The decreasing performance of jet engines during operation is a major concern for airlines and maintenance companies. Among other effects, the erosion of high-pressure compressor (HPC) blades is a critical one; it leads to changed aerodynamic behavior and therefore to a change in performance. The maintenance of BLISKs (blade-integrated disks) is especially challenging because the blade arrangement cannot be changed and individual blades cannot be replaced. Thus, coupled deteriorated blades have a complex aerodynamic behavior which can have a stronger influence on compressor performance than in a conventional HPC. To ensure effective maintenance of BLISKs, the impact of coupled misshaped blades is the key factor. The present study addresses these effects on the aerodynamic performance of a first-stage BLISK of a high-pressure compressor. A design of experiments (DoE) is therefore performed to identify the geometric properties that lead to a reduction in performance. It is shown that the effect of coupled variances depends on the operating point. Based on the DoE analysis, the thickness-related parameters, the stagger angle, and the maximum profile camber as coupled parameters are identified as the most important parameters for all operating points.
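    A two-level DoE like the one described reduces, in coded units, to averaging the response over the high and low settings of each factor. The response function below is a hypothetical surrogate for the BLISK performance calculation, used only to show how main effects are extracted from a 2^3 full factorial:

    ```python
    from itertools import product

    def performance(thickness, stagger, camber):
        # hypothetical response surface standing in for the aerodynamic result
        return (1.0 - 0.08 * thickness - 0.05 * stagger
                - 0.01 * camber - 0.02 * thickness * stagger)

    runs = list(product((-1, 1), repeat=3))     # 2^3 full factorial, coded units
    ys = [performance(*r) for r in runs]

    effects = {}
    for j, name in enumerate(("thickness", "stagger", "camber")):
        hi = [y for r, y in zip(runs, ys) if r[j] == 1]
        lo = [y for r, y in zip(runs, ys) if r[j] == -1]
        effects[name] = sum(hi) / 4 - sum(lo) / 4   # mean(high) - mean(low)

    print(effects)   # thickness has the largest magnitude effect
    ```

    Ranking the factors by |effect| reproduces the kind of conclusion the abstract draws: thickness-related parameters dominate, camber matters least.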

  1. Sensitivity analysis for matched pair analysis of binary data: From worst case to average case analysis.

    Science.gov (United States)

    Hasegawa, Raiden; Small, Dylan

    2017-12-01

    In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.
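
The worst-case calibration for binary matched-pair data can be illustrated with a small sketch. Under a sensitivity parameter gamma bounding the within-pair odds of treatment, the chance that a discordant pair is "treated-positive" is at most gamma/(1+gamma), giving an upper bound on the one-sided McNemar p-value. The counts below are hypothetical:

```python
from scipy.stats import binom

def mcnemar_sensitivity_pvalue(t_pos, n_discordant, gamma):
    """Worst-case one-sided p-value for a matched-pair binary outcome.

    t_pos        : discordant pairs in which the treated unit had the event
    n_discordant : total number of discordant pairs
    gamma        : sensitivity parameter (gamma = 1 -> randomization inference)

    Under unmeasured bias of at most gamma in every pair (the worst case),
    the chance a discordant pair is 'treated-positive' is at most
    gamma / (1 + gamma), so the p-value is a binomial upper tail.
    """
    p_upper = gamma / (1.0 + gamma)
    return float(binom.sf(t_pos - 1, n_discordant, p_upper))  # P(T >= t_pos)

p_randomized = mcnemar_sensitivity_pvalue(15, 20, 1.0)  # usual exact sign test
p_gamma2 = mcnemar_sensitivity_pvalue(15, 20, 2.0)      # allows a twofold bias
print(p_randomized, p_gamma2)
```

The average-case calibration proposed in the paper replaces the per-pair bound with a bound on the average bias across pairs, which is less conservative; that refinement is not reproduced here.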

  2. PARAMETRIC MODEL OF LUMBAR VERTEBRA

    Directory of Open Access Journals (Sweden)

    CAPPETTI Nicola

    2010-11-01

    Full Text Available The present work proposes the realization of a parametric/variational CAD model of a normotype lumbar vertebra, which could be used to improve the effectiveness of current imaging techniques by augmenting the information available for orthopaedic and traumatological diagnosis. In addition, it could be used for static and dynamic ergonomic analysis of the lumbar region and vertebral column.

  3. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

    Full Text Available As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering owing to its accurate computation of forces. It is currently being extended to the probabilistic domain. For the assessment of uncertainty effects in system optimization and identification, the probabilistic sensitivity analysis of IFM was further investigated in this study. A set of stochastic sensitivity analysis formulations of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to the existing program since the models of stochastic finite element and stochastic design sensitivity are almost identical.
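
The perturbation method underlying such stochastic formulations propagates input uncertainty through first-order derivatives. A minimal sketch of the idea, on a scalar toy response rather than the IFM equations, compares the first-order (delta-method) variance with a direct Monte Carlo reference:

```python
import numpy as np

# First-order perturbation (delta-method) estimate of response variance
# versus direct Monte Carlo, for a scalar toy response f of one random input.
f = lambda x: x**2 + 3.0 * x
mu, sigma = 2.0, 0.05

# perturbation method: Var[f(X)] ~ (df/dx at mu)^2 * Var[X]
var_pert = (2.0 * mu + 3.0) ** 2 * sigma**2

# direct Monte Carlo reference
rng = np.random.default_rng(4)
var_mc = float(np.var(f(rng.normal(mu, sigma, 200_000))))
print(var_pert, var_mc)  # close agreement when the input uncertainty is small
```

The agreement degrades as the input coefficient of variation grows, which is exactly when Monte Carlo or reliability-based checks, as used in the paper, become necessary.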

  4. The EVEREST project: sensitivity analysis of geological disposal systems

    International Nuclear Information System (INIS)

    Marivoet, Jan; Wemaere, Isabelle; Escalier des Orres, Pierre; Baudoin, Patrick; Certes, Catherine; Levassor, Andre; Prij, Jan; Martens, Karl-Heinz; Roehlig, Klaus

    1997-01-01

    The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models and uncertainty in the considered scenarios. Deterministic as well as stochastic calculational approaches have been applied for the sensitivity analyses. For the analysis of the sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied.

  5. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
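
The smoothing-based approach can be sketched with a hand-rolled local linear (LOESS-like) smoother: the share of output variance explained by the one-dimensional smooth of y on each input serves as a nonparametric sensitivity measure. This is a simplified stand-in for the stepwise procedures of the paper, on a synthetic test function:

```python
import numpy as np

def local_linear_smooth(x, y, bandwidth=0.2):
    """LOESS-like local linear smooth of y on x with a Gaussian kernel."""
    fit = np.empty_like(y)
    for i, x0 in enumerate(x):
        sw = np.exp(-0.25 * ((x - x0) / bandwidth) ** 2)  # sqrt of kernel weight
        X = np.column_stack([np.ones_like(x), x - x0])
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        fit[i] = beta[0]  # local intercept = fitted value at x0
    return fit

def smooth_sensitivity(x, y):
    """Share of Var(y) explained by the one-dimensional smooth of y on x."""
    return float(np.var(local_linear_smooth(x, y)) / np.var(y))

rng = np.random.default_rng(0)
n = 400
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = np.sin(np.pi * x1) + 0.1 * x2 + rng.normal(0, 0.1, n)  # nonlinear in x1

s1, s2 = smooth_sensitivity(x1, y), smooth_sensitivity(x2, y)
print(s1, s2)  # the nonlinear, non-monotone input x1 dominates
```

Note that a linear or rank regression of y on x1 would report near-zero sensitivity here, since sin(pi*x1) is non-monotone on [-1, 1]; the smooth recovers it, which is the paper's central point.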

  6. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  7. Carbon dioxide capture processes: Simulation, design and sensitivity analysis

    DEFF Research Database (Denmark)

    Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul

    2012-01-01

    Carbon dioxide is the main greenhouse gas and its major source is combustion of fossil fuels for power generation. The objective of this study is to carry out the steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First... equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to change in the design variables including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis on the steady state... performance of the process to the L/G ratio to the absorber, CO2 lean solvent loadings, and stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made.

  8. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters mostly affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on the mutual information which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on the Latin Hypercube Sampling (LHS of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and the mutual information analysis methods are illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables are contributing significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
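
The LHS-plus-PRCC recipe described above can be sketched in a few lines of NumPy/SciPy. The SWMM model itself is replaced here by a hypothetical monotone test function with one inert input:

```python
import numpy as np
from scipy import stats

def latin_hypercube(n, k, rng):
    """LHS on [0, 1]^k: one point in each of n equal strata per dimension."""
    strata = np.tile(np.arange(n, dtype=float), (k, 1))
    return (rng.permuted(strata, axis=1).T + rng.random((n, k))) / n

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    R = np.column_stack([stats.rankdata(c) for c in X.T])
    ry = stats.rankdata(y)
    coeffs = []
    for j in range(R.shape[1]):
        others = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        # residualize the j-th ranked input and the ranked output on the rest
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        coeffs.append(float(stats.pearsonr(res_x, res_y)[0]))
    return coeffs

rng = np.random.default_rng(1)
X = latin_hypercube(1000, 3, rng)
y = np.exp(2 * X[:, 0]) + 0.5 * X[:, 1]  # monotone, nonlinear; x3 is inert
p_vals = prcc(X, y)
print(p_vals)  # large for x1 and x2, near zero for the inert x3
```

As the abstract notes, PRCC only detects monotone dependence; the same LHS sample could be reused for a mutual-information estimate to pick up non-monotone effects.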

  9. Voxel-based analysis of cerebral glucose metabolism in AD and non-AD degenerative dementia using statistical parametric mapping

    International Nuclear Information System (INIS)

    Li Zugui; Gao Shuo; Zhang Benshu; Ma Aijun; Cai Li; Li Dacheng; Li Yansheng; Liu Lei

    2008-01-01

    Objective: It is known that Alzheimer's disease (AD) and non-AD degenerative dementia have some clinical features in common. The aim of this study was to investigate the specific patterns of regional cerebral glucose metabolism of AD and non-AD degenerative dementia patients, using a voxel-based 18 F-fluorodeoxyglucose (FDG) PET study. Methods: Twenty-three AD patients and 24 non-AD degenerative dementia patients, including 9 Parkinson's disease with dementia (PDD), 7 frontal-temporal dementia (FTD) and 8 dementia with Lewy bodies (DLB) patients, and 40 normal controls (NC) were included in the study. To evaluate the relative cerebral metabolic rate of glucose (rCMRglc), 18 F-FDG PET imaging was performed in all subjects. Subsequently, statistical comparison of PET data with NC was performed using statistical parametric mapping (SPM). Results: The AD-associated FDG imaging pattern typically presented as focal cortical hypometabolism in bilateral parietotemporal association cortices and (or) the frontal lobe and the posterior cingulate gyrus. As compared with the comparative NC, the FTD group demonstrated significant regional reductions in rCMRglc in bilateral frontal and parietal lobes, the cingulate gyri, insulae, left precuneus, and the subcortical structures (including right putamen, right medial dorsal nucleus and ventral anterior nucleus). The PDD group showed regional reductions in rCMRglc in bilateral frontal cortices, parietotemporal association cortices, and the subcortical structures (including left caudate, right putamen, the dorsomedial thalamus, lateral posterior nucleus, and pulvinar). By the voxel-by-voxel comparison between the DLB group and NC group, regional reductions in rCMRglc included bilateral occipital cortices, precuneuses, frontal and parietal lobes, left anterior cingulate gyrus, right superior temporal cortex, and the subcortical structures including putamen, caudate, lateral posterior nucleus, and pulvinar. Conclusions: The rCMRglc was found to be different

  10. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised with respect to model inputs.

    In this contribution, the potential of variational methods for distributed catchment-scale hydrology is demonstrated. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
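
The SVD-based view of the parameter-to-output map can be illustrated without an adjoint code: for a small toy model, a finite-difference Jacobian (far less efficient than the adjoint method, but simple) exposes a redundant parameter direction through a near-zero singular value. The model below is a hypothetical stand-in, not the flash flood model:

```python
import numpy as np

def jacobian_fd(model, p, eps=1e-6):
    """Forward finite-difference Jacobian of model outputs w.r.t. parameters."""
    y0 = model(p)
    J = np.empty((y0.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += eps
        J[:, j] = (model(dp) - y0) / eps
    return J

# toy response at 5 time points, driven by 4 parameters; p[2] and p[3]
# enter only through their sum, i.e. one parameter direction is redundant
t = np.linspace(0.1, 1.0, 5)
def model(p):
    return p[0] * np.exp(-t) + p[1] * t + (p[2] + p[3]) * t**2

J = jacobian_fd(model, np.array([1.0, 0.5, 0.2, 0.3]))
s = np.linalg.svd(J, compute_uv=False)
print(s / s[0])  # the smallest singular value collapses: effectively 3-D
```

The leading singular vectors span the identifiable directions, which is the basis of the SVD parametrization the abstract recommends combining with regularization.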

  11. Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.

    Science.gov (United States)

    Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun

    2017-12-01

    Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate the allergen sensitization characteristics according to gender. Multiple allergen simultaneous test (MAST) is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, 39 items were grouped into 8 clusters. Each cluster had characteristic features. When compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, reflecting allergen similarity or co-exposure. Only the fungus cluster allergens tend to sensitize the female group more frequently than the male group.

  12. Sensitivity analysis of the noise-induced oscillatory multistability in Higgins model of glycolysis

    Science.gov (United States)

    Ryashko, Lev

    2018-03-01

    A phenomenon of the noise-induced oscillatory multistability in glycolysis is studied. As a basic deterministic skeleton, we consider the two-dimensional Higgins model. The noise-induced generation of mixed-mode stochastic oscillations is studied in various parametric zones. Probabilistic mechanisms of the stochastic excitability of equilibria and noise-induced splitting of randomly forced cycles are analysed by the stochastic sensitivity function technique. A parametric zone of supersensitive Canard-type cycles is localized and studied in detail. It is shown that the generation of mixed-mode stochastic oscillations is accompanied by the noise-induced transitions from order to chaos.

  13. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    International Nuclear Information System (INIS)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae

    2016-01-01

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different shapes of nuclear fuel. Criticality analysis was performed using MCNP5. MCNP5 is a well-known Monte Carlo code for criticality analysis and a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron or coupled neutron / photon / electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. In the sensitivity analysis for simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but for intervals greater than 8 mm the trend reverses and the criticality decreases as the interval grows. As a result, no single trend common to all cases could be established. A criticality sensitivity analysis is therefore always required whenever the subject to be analyzed changes

  14. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae [NESS, Daejeon (Korea, Republic of)

    2016-10-15

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different shapes of nuclear fuel. Criticality analysis was performed using MCNP5. MCNP5 is a well-known Monte Carlo code for criticality analysis and a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron or coupled neutron / photon / electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. In the sensitivity analysis for simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but for intervals greater than 8 mm the trend reverses and the criticality decreases as the interval grows. As a result, no single trend common to all cases could be established. A criticality sensitivity analysis is therefore always required whenever the subject to be analyzed changes.

  15. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  16. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU time which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
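
The variance-based Sobol' indices at the core of such studies can be estimated with a standard pick-freeze (Saltelli-type) scheme. The metamodeling step is omitted here, and the indices are computed directly on the Ishigami function, a common sensitivity-analysis benchmark:

```python
import numpy as np

def sobol_indices(model, k, n, rng):
    """First-order and total Sobol' indices by a pick-freeze (Saltelli) scheme."""
    A, B = rng.random((n, k)), rng.random((n, k))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    first, total = [], []
    for j in range(k):
        ABj = A.copy()
        ABj[:, j] = B[:, j]           # replace only the j-th column of A
        yABj = model(ABj)
        first.append(float(np.mean(yB * (yABj - yA)) / var))        # Saltelli estimator
        total.append(float(0.5 * np.mean((yA - yABj) ** 2) / var))  # Jansen estimator
    return first, total

def ishigami(X):
    """Ishigami function, a standard sensitivity-analysis test case."""
    x = np.pi * (2.0 * X - 1.0)
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2 + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(2)
S, ST = sobol_indices(ishigami, 3, 100_000, rng)
print(S, ST)  # x3 has a near-zero first-order index but a nonzero total index
```

The gap between a total index and its first-order counterpart measures interaction effects; for the functional inputs of the paper, only the total index is derived, via the dispersion model.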

  17. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. The new direct method of calculating the first-order sensitivity coefficients using sparse matrix technology for chemical kinetics is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate a model equation and its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines of the model equation, the sensitivity coefficient equations, and their Jacobian analytical expressions are generated automatically from a chemical mechanism. The kinetic representation for the model equation and its sensitivity coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28 and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
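
The direct method of integrating a model equation together with its auxiliary sensitivity-coefficient equations can be sketched on a one-reaction toy problem (first-order decay), using a modern stiff-capable integrator in place of the Gear package:

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order decay y' = -k*y with its sensitivity s = dy/dk integrated
# alongside via the coupled auxiliary equation s' = (df/dy)*s + df/dk = -k*s - y.
def rhs(t, z, k):
    y, s = z
    return [-k * y, -k * s - y]

k, y0 = 0.7, 2.0
sol = solve_ivp(rhs, (0.0, 3.0), [y0, 0.0], args=(k,), rtol=1e-10, atol=1e-12)
y_end, s_end = sol.y[:, -1]

# analytic check: y(t) = y0*exp(-k*t), so dy/dk at t = 3 is -3*y0*exp(-3k)
print(y_end, s_end)
```

For a full mechanism, the (df/dy) factor is the model Jacobian, which is exactly where the sparse-matrix triangularization described in the abstract pays off.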

  18. Sensitivity analysis of the nuclear data for MYRRHA reactor modelling

    International Nuclear Information System (INIS)

    Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan

    2014-01-01

    A global sensitivity analysis of the effective neutron multiplication factor k_eff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of contributions of individual evaluations to the k_eff sensitivity allowed establishing a priority list of nuclides for which uncertainties on nuclear data must be improved. Detailed sensitivity analysis has been performed for two nuclides from this list, 56Fe and 238Pu. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on k_eff, the reaction cross-sections and multiplicities in one evaluation have been substituted by the corresponding data from other evaluations. (authors)

  19. Parametric Room Acoustic Workflows

    DEFF Research Database (Denmark)

    Parigi, Dario; Svidt, Kjeld; Molin, Erik

    2017-01-01

    The paper investigates and assesses different room acoustics software and the opportunities they offer to engage in parametric acoustics workflows and to influence architectural designs. The first step consists in the testing and benchmarking of different tools on the basis of accuracy, speed... and interoperability with Grasshopper 3d. The focus will be placed on the benchmarking of three different acoustic analysis tools based on raytracing. To compare the accuracy and speed of the acoustic evaluation across different tools, a homogeneous set of acoustic parameters is chosen. The room acoustics parameters... included in the set are reverberation time (EDT, RT30), clarity (C50), loudness (G), and definition (D50). Scenarios are discussed for determining at different design stages the most suitable acoustic tool. Those scenarios are characterized by the use of less accurate but fast evaluation tools to be used...

  20. Deterministic Local Sensitivity Analysis of Augmented Systems - I: Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan G.; Ionescu-Bujor, Mihaela

    2005-01-01

    This work provides the theoretical foundation for the modular implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for large-scale simulation systems. The implementation of the ASAP commences with a selected code module and then proceeds by augmenting the size of the adjoint sensitivity system, module by module, until the entire system is completed. Notably, the adjoint sensitivity system for the augmented system can often be solved by using the same numerical methods used for solving the original, nonaugmented adjoint system, particularly when the matrix representation of the adjoint operator for the augmented system can be inverted by partitioning

  1. The identification of model effective dimensions using global sensitivity analysis

    International Nuclear Information System (INIS)

    Kucherenko, Sergei; Feil, Balazs; Shah, Nilay; Mauntz, Wolfgang

    2011-01-01

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.
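
The two notions of effective dimension can be made concrete on an additive test function, where the Sobol' indices are analytic. Since the function is additive, every total index equals its first-order index and the superposition-sense dimension is 1, while the truncation-sense dimension depends on how fast the variance contributions decay (the weights below are illustrative):

```python
import numpy as np

# Additive test function f(x) = sum_j c_j * x_j on [0, 1]^10 with decaying
# weights; its Sobol' indices are analytic: S_j = c_j^2 / sum(c^2),
# because Var[x_j] is the same for every input and cancels.
c = 2.0 ** -np.arange(10)
S = c**2 / np.sum(c**2)

# effective dimension in the truncation sense at the 99% variance level:
# the smallest d such that the first d inputs carry 99% of the variance
d_trunc = int(np.argmax(np.cumsum(S) >= 0.99)) + 1
print(d_trunc)  # only the first few inputs matter for QMC efficiency
```

A low truncation dimension of this kind is what the abstract identifies as the predictor of QMC efficiency.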

  2. The identification of model effective dimensions using global sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kucherenko, Sergei, E-mail: s.kucherenko@ic.ac.u [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Feil, Balazs [Department of Process Engineering, University of Pannonia, Veszprem (Hungary); Shah, Nilay [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Mauntz, Wolfgang [Lehrstuhl fuer Anlagensteuerungstechnik, Fachbereich Chemietechnik, Universitaet Dortmund (Germany)

    2011-04-15

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.

  3. Application of Sensitivity Analysis in Design of Sustainable Buildings

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik; Rasmussen, Henrik

    2009-01-01

    satisfies the design objectives and criteria. In the design of sustainable buildings, it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions. Sensitivity analyses make it possible to identify...... possible to influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for design of an office building in Denmark....

  4. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
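
The SRRC computation can be sketched directly: rank-transform inputs and output, standardize, and read the sensitivities off a least-squares fit. The test model below is a hypothetical stand-in, not the IMM:

```python
import numpy as np
from scipy.stats import rankdata

def srrc(X, y):
    """Standardized rank regression coefficients of y on the columns of X."""
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    Rz = (R - R.mean(axis=0)) / R.std(axis=0)   # standardize ranked inputs
    rz = (ry - ry.mean()) / ry.std()            # standardize ranked output
    beta, *_ = np.linalg.lstsq(Rz, rz, rcond=None)
    return beta

rng = np.random.default_rng(3)
X = rng.random((2000, 3))
y = np.exp(3 * X[:, 0]) + 5 * X[:, 1]  # monotone, nonlinear; third input inert
b = srrc(X, y)
print(np.abs(b))  # ranks the inputs by influence on the output
```

A tornado plot of this analysis would simply be the coefficients sorted by absolute value; PRCC on the same sample would give the same qualitative ranking.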

  5. Sensitivity analysis of network DEA illustrated in branch banking

    OpenAIRE

    N. Avkiran

    2010-01-01

    Users of data envelopment analysis (DEA) often presume efficiency estimates to be robust. While traditional DEA has been exposed to various sensitivity studies, network DEA (NDEA) has so far escaped similar scrutiny. Thus, there is a need to investigate the sensitivity of NDEA, further compounded by the recent attention it has been receiving in the literature. NDEA captures the underlying performance information found in a firm's interacting divisions or sub-processes that would otherwise remain ...

  6. Sensitivity analysis of periodic errors in heterodyne interferometry

    International Nuclear Information System (INIS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-01-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors

  7. Sensitivity analysis of periodic errors in heterodyne interferometry

    Science.gov (United States)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.

  8. MOVES sensitivity analysis update : Transportation Research Board Summer Meeting 2012 : ADC-20 Air Quality Committee

    Science.gov (United States)

    2012-01-01

    OVERVIEW OF PRESENTATION : Evaluation Parameters : EPA's Sensitivity Analysis : Comparison to Baseline Case : MOVES Sensitivity Run Specification : MOVES Sensitivity Input Parameters : Results : Uses of Study

  9. Sensitivity analysis of the reactor safety study. Final report

    International Nuclear Information System (INIS)

    Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.

    1979-01-01

    The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light water nuclear reactors. In order to give further insights into this study, a sensitivity analysis has been performed to determine the significant contributors to risk for both the PWR and BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were release category probabilities, core melt probability, and the risk parameters of early fatalities, latent cancers, and total property damage. The latter three are adequate for describing all public risks identified in the RSS. The results indicate reductions of public risk by less than a factor of two for factor reductions in system or generic failure probabilities as high as one hundred. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates

  10. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  11. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    Directory of Open Access Journals (Sweden)

    Harry R. Millwater

    2006-01-01

    A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or surface damage) can be considered. The derivation is developed for Monte Carlo sampling such that the existing failure samples are used and the sensitivities are obtained with minimal additional computational time. Variance estimates and confidence bounds of the sensitivity estimates are developed. The methodology is demonstrated and verified using a multizone probabilistic fatigue analysis of a gas turbine compressor disk, considering stress scatter, crack growth propagation scatter, and initial crack size as random variables.
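Reusing existing Monte Carlo failure samples to obtain distribution-parameter sensitivities can be sketched with the score-function (likelihood-ratio) identity, here for the mean of a single normal random variable. This is a generic illustration with assumed toy values, not the paper's multizone derivation.

```python
import math
import random

def prob_failure_and_dmu(limit_state, mu, sigma, n=200_000, seed=1):
    """Monte Carlo failure probability and its sensitivity to the mean of
    a normal random variable, reusing the same samples.

    The score-function identity
        d/dmu E[1{fail}] = E[1{fail} * (x - mu) / sigma^2]
    means the sensitivity is accumulated over the existing failure samples
    with no extra model evaluations."""
    rng = random.Random(seed)
    fails = 0
    score_sum = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        if limit_state(x):          # failure indicator
            fails += 1
            score_sum += (x - mu) / sigma**2
    return fails / n, score_sum / n
```

For a standard normal input and the limit state x > 1, the sensitivity estimate converges to the normal density at 1, about 0.242, which is the analytic derivative of the failure probability with respect to the mean.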

  12. Application of sensitivity analysis for optimized piping support design

    International Nuclear Information System (INIS)

    Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.

    1993-01-01

    The objective of this study was to see if recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems which use non-linear supports, and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM, and a sensitivity analysis was carried out. Then optimization of the design of the piping system supports was investigated, selecting the support location and yield load of the non-linear supports (bi-linear model) as the main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated that sensitivity factors are useful in the planning stage of support design. (author)

  13. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
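The Monte Carlo propagation step described above can be sketched as follows. The model and the lognormal parameter uncertainties are invented for illustration; they are not PATHWAY's actual parameters.

```python
import math
import random
import statistics

def propagate(n=50_000, seed=7):
    """Toy Monte Carlo uncertainty propagation for a radionuclide
    concentration model C = deposition * feed-to-milk transfer * exp(-lam*t),
    with lognormal uncertainty on the first two parameters
    (illustrative values only)."""
    rng = random.Random(seed)
    lam = math.log(2) / 8.02       # I-131 decay constant, 1/days
    t = 3.0                        # days between deposition and consumption
    out = []
    for _ in range(n):
        deposition = rng.lognormvariate(math.log(1.0), 0.5)   # kBq/m^2
        transfer = rng.lognormvariate(math.log(0.01), 0.3)    # d/L
        out.append(deposition * transfer * math.exp(-lam * t))
    out.sort()
    q = statistics.quantiles(out, n=20)   # 19 cut points: 5th..95th pct
    return q[0], statistics.median(out), q[-1]
```

The returned 5th/50th/95th percentiles summarize the propagated uncertainty; the sampled parameter values could equally feed a rank-correlation sensitivity analysis, as in the abstract.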

  14. Sensitivity analysis for missing data in regulatory submissions.

    Science.gov (United States)

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  15. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    Science.gov (United States)

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol’, to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
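A minimal pick-freeze estimator of first-order Sobol’ indices, in the spirit of the variance-based decomposition the abstract describes. The toy setup (independent uniform inputs, a cheap analytic model) is an assumption for illustration, not the VarroaPop analysis itself.

```python
import random

def sobol_first_order(f, d, n=50_000, seed=3):
    """First-order Sobol' indices by the pick-freeze (Saltelli-style)
    estimator, for a model f taking a list of d inputs uniform on [0, 1]."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [f(a) for a in A]
    yB = [f(b) for b in B]
    m = sum(yA) / n
    var = sum((y - m) ** 2 for y in yA) / (n - 1)
    indices = []
    for i in range(d):
        # A with column i taken from B: isolates the effect of input i
        yABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s = sum(yb * (yabi - ya)
                for yb, yabi, ya in zip(yB, yABi, yA)) / n
        indices.append(s / var)
    return indices
```

For the additive test model y = 4*x1 + 2*x2 the analytic first-order indices are 0.8 and 0.2, so the estimator can be checked against a known answer.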

  16. CATDAT : A Program for Parametric and Nonparametric Categorical Data Analysis : User's Manual Version 1.0, 1998-1999 Progress Report.

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, James T.

    1999-12-01

    Natural resource professionals are increasingly required to develop rigorous statistical models that relate environmental data to categorical responses. Recent advances in the statistical and computing sciences have led to the development of sophisticated methods for parametric and nonparametric analysis of data with categorical responses. The statistical software package CATDAT was designed to make some of these relatively new and powerful techniques available to scientists. The CATDAT statistical package includes 4 analytical techniques: generalized logit modeling; binary classification tree; extended K-nearest neighbor classification; and modular neural network.
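Of the four techniques listed, the K-nearest-neighbor classifier is the simplest to sketch. The following is a generic textbook version, not CATDAT's extended variant.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Minimal k-nearest-neighbor categorical classifier: majority vote
    among the k training points closest to the query (squared Euclidean
    distance). `train` is a list of (feature_tuple, label) pairs."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(train, key=lambda fl: dist(fl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

The categorical response is whatever label wins the vote; ties are broken by Counter's insertion order, which a production implementation would handle explicitly.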

  17. Parametric analysis of neutron streaming through major penetrations in the 0.914 m TFTR test cell floor

    International Nuclear Information System (INIS)

    Ku, L.P.; Liew, S.L.; Kolibal, J.G.

    1985-09-01

    Neutron streaming through penetrations in the 0.914 m TFTR test cell floor has two distinct features: (1) the oblique angle of incidence; and (2) the high order of anisotropy in the angular distribution for incident neutrons with energies > 10 keV. The effects of these features on the neutron streaming into the TFTR basement were studied parametrically for isolated penetrations. Variations with respect to the source energies, angular distributions, and sizes of the penetrations were made. The results form a data base from which the spatial distribution of the neutron flux in the basement due to multiple penetrations may be evaluated

  18. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Estimates of poverty and inequality are often based on the application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows one to derive variance estimates of the results of sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to wide confidence intervals.

  19. Sensitivity analysis of water consumption in an office building

    Science.gov (United States)

    Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan

    2018-02-01

    This article deals with a sensitivity analysis of real water consumption in an office building. During a long-term study, a reduction of pressure in the building's water connection was simulated. A sensitivity analysis of uneven water demand during working time was conducted at various provided pressures and various time step durations. Correlations between the maximal coefficients of water demand variation during working time and the provided pressure were suggested. The influence of the provided pressure in the water connection on the mean coefficients of water demand variation was pointed out, both for the working hours of all days together and separately for days with identical working hours.

  20. Probabilistic and sensitivity analysis of Botlek Bridge structures

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2017-01-01

    This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge in the world. The bridge system consists of six reinforced concrete pylons and two steel decks, each weighing 4000 tons, connected through ropes with counterweights. The paper focuses on the probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results had high importance for the practical application and design of the bridge. The model and resistance uncertainties were taken into account in the LHS simulation method.

  1. Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities

    Directory of Open Access Journals (Sweden)

    Thi Thanh Huyen Nguyen

    2015-11-01

    The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher education institutes with PhD training programs) in Vietnam, applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, using the replacement as well as aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency of these universities to the sets of variables. The findings also show the impact of variables on their efficiency and its “sustainability”.

  2. Seismic analysis of steam generator and parameter sensitivity studies

    International Nuclear Information System (INIS)

    Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun

    2013-01-01

    Background: The steam generator (SG) serves as the primary means for removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG, comprising the moisture separator assembly and the tube bundle assembly, is created herein. The seismic analysis is performed with the RCS pipe and the Reactor Pressure Vessel (RPV). Results: The seismic stress results of the SG are obtained. In addition, the parameter sensitivities of the seismic analysis results are studied, such as the effect of another SG, supports, anti-vibration bars (AVBs), and so on. Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, and should be a focus of future research and design of new NPP SG types. (authors)

  3. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    Science.gov (United States)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates to realize future satellite communications and space observations since they are lightweight, low-cost and small-packaged-volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors, and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved, including errors in membrane thickness, errors in elastic modulus of membrane, boundary deviations and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation and the interaction between error sources. Analyses are parametrically carried out with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on the significance ranking of error sources. The research reveals that RMS (Root Mean Square) shape error is a random quantity with an exponential probability distribution and features great dispersion; with the increase of F/D and D, both the mean value and the standard deviation of shape errors are increased; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect, with a much higher weight than the others; pressure variation ranks second; errors in thickness and elastic modulus of the membrane rank last, with sensitivities very close to that of pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors, and allowable values of error sources are proposed from the perspective of reliability.
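The Latin hypercube sampling used here can be sketched as follows. This is the standard construction on the unit hypercube; mapping the samples onto the physical ranges of the four error sources is omitted.

```python
import random

def latin_hypercube(n, d, seed=11):
    """Latin hypercube sample of n points in d dimensions on [0, 1)^d:
    each dimension is split into n equal strata and each stratum is hit
    exactly once, via an independent random permutation per dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        strata = list(range(n))
        rng.shuffle(strata)                      # random stratum order
        cols.append([(s + rng.random()) / n for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]
```

Every one of the n strata in each dimension contains exactly one point, which gives better space-filling than plain random sampling for the same number of model runs.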

  4. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems

  5. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab

  6. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.

  7. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost-effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed

  8. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    Science.gov (United States)

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  9. Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis

    DEFF Research Database (Denmark)

    Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen

    2017-01-01

    Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol’s), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult ... in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers to focus their attention on the most important design parameters when exploring ...

  10. Sensitization trajectories in childhood revealed by using a cluster analysis

    DEFF Research Database (Denmark)

    Schoos, Ann-Marie M.; Chawes, Bo L.; Melen, Erik

    2017-01-01

    BACKGROUND: Assessment of sensitization at a single time point during childhood provides limited clinical information. We hypothesized that sensitization develops as specific patterns with respect to age at debut, development over time, and involved allergens and that such patterns might be more biologically and clinically relevant. OBJECTIVE: We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. METHODS: We investigated 398 children from the at-risk Copenhagen Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent ...

  11. Robust and Efficient Parametric Face Alignment

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2011-01-01

    We propose a correlation-based approach to parametric object alignment particularly suitable for face analysis applications which require efficiency and robustness against occlusions and illumination changes. Our algorithm registers two images by iteratively maximizing their correlation coefficient

  12. Probabilistic sensitivity analysis of system availability using Gaussian processes

    International Nuclear Information System (INIS)

    Daneshkhah, Alireza; Bedford, Tim

    2013-01-01

    The availability of a system under a given failure/repair process is a function of time which can be determined through a set of integral equations and usually calculated numerically. We focus here on the issue of carrying out sensitivity analysis of availability to determine the influence of the input parameters. The main purpose is to study the sensitivity of the system availability with respect to changes in the main parameters. In the simplest case, in which the failure/repair process is (continuous-time/discrete-state) Markovian, explicit formulae are well known. Unfortunately, in more general cases availability is often a complicated function of the parameters without closed form solution. Thus, the computation of sensitivity measures would be time-consuming or even infeasible. In this paper, we show how Sobol and other related sensitivity measures can be cheaply computed to measure how changes in the model inputs (failure/repair times) influence the outputs (availability measure). We use a Bayesian framework, called the Bayesian analysis of computer code output (BACCO), which is based on using the Gaussian process as an emulator (i.e., an approximation) of complex models/functions. This approach allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than other methods. The emulator-based sensitivity measure is used to examine the influence of the failure and repair densities' parameters on the system availability. We discuss how to apply the methods practically in the reliability context, considering in particular the selection of parameters and prior distributions and how we can ensure these may be considered independent (one of the key assumptions of the Sobol approach). The method is illustrated on several examples, and we discuss the further implications of the technique for reliability and maintenance analysis

  13. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
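    The effect of input correlations on output uncertainty can be sketched with the generic first-order (delta-method) propagation formula Var(y) ≈ gᵀΣg, where g is the gradient of the model at the input means and Σ the input covariance matrix. This is a standard approximation, not the paper's specific analytic method; the numbers below are an invented example.

    ```python
    def propagated_variance(grad, cov):
        """First-order variance of y = f(x) at the input means:
        Var(y) ≈ sum_ij g_i * Sigma_ij * g_j (delta method)."""
        n = len(grad)
        return sum(grad[i] * cov[i][j] * grad[j]
                   for i in range(n) for j in range(n))

    # y = 2*x1 + 3*x2 with unit variances and correlation rho = 0.5:
    # Var = 4 + 9 + 2*2*3*0.5 = 19, versus 13 for independent inputs,
    # so the correlation term alone contributes 6 to the output variance.
    v_corr = propagated_variance([2.0, 3.0], [[1.0, 0.5], [0.5, 1.0]])
    v_ind = propagated_variance([2.0, 3.0], [[1.0, 0.0], [0.0, 1.0]])
    ```

    Comparing the two values is the simplest version of the paper's question: whether the correlation terms are large enough to matter in practice.
    
    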

  14. Sensitivity Analysis Applied in Design of Low Energy Office Building

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik

    2008-01-01

    satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to develop alternative design solutions more efficiently or to reach optimized design solutions. A sensitivity analysis makes it possible...

  15. Application of Sensitivity Analysis in Design of Sustainable Buildings

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik; Hesselholt, Allan Tind

    2007-01-01

    satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to develop alternative design solutions more efficiently or to reach optimized design solutions. A sensitivity analysis makes it possible...

  16. Sensitivity analysis of physiochemical interaction model: which pair ...

    African Journals Online (AJOL)

    ... of two model parameters at a time on the solution trajectory of physiochemical interaction over a time interval. Our aim is to use this powerful mathematical technique to select the important pair of parameters of this physical process which is cost-effective. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 ...

  17. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  18. Sensitivity analysis for contagion effects in social networks

    Science.gov (United States)

    VanderWeele, Tyler J.

    2014-01-01

    Analyses of social network data have suggested that obesity, smoking, happiness and loneliness all travel through social networks. Individuals exert “contagion effects” on one another through social ties and association. These analyses have come under critique because of the possibility that homophily from unmeasured factors may explain these statistical associations and because similar findings can be obtained when the same methodology is applied to height, acne and headaches, for which the conclusion of contagion effects seems somewhat less plausible. We use sensitivity analysis techniques to assess the extent to which supposed contagion effects for obesity, smoking, happiness and loneliness might be explained away by homophily or confounding and the extent to which the critique using analysis of data on height, acne and headaches is relevant. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so. Supposed effects for height, acne and headaches are all easily explained away by latent homophily and confounding. The methodology that has been employed in past studies for contagion effects in social networks, when used in conjunction with sensitivity analysis, may prove useful in establishing social influence for various behaviors and states. The sensitivity analysis approach can be used to address the critique of latent homophily as a possible explanation of associations interpreted as contagion effects. PMID:25580037
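    The quantitative core of such sensitivity analyses can be sketched with a bounding factor for unmeasured confounding (here the Ding–VanderWeele form; the exact formula used in this particular paper may differ). It answers: how strongly would latent homophily have to be associated with both tie formation and the outcome to fully explain an observed association?

    ```python
    def bias_factor(rr_eu, rr_ud):
        """Maximum risk ratio explainable by an unmeasured binary confounder
        with exposure-confounder association rr_eu and confounder-outcome
        association rr_ud (Ding-VanderWeele bounding factor)."""
        return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

    # An observed contagion 'effect' of RR = 1.5 could be explained away by
    # latent homophily only if both associations reach about 2.37,
    # since 2.37 * 2.37 / (2.37 + 2.37 - 1) ≈ 1.5.
    b = bias_factor(2.37, 2.37)
    ```

    The stronger the confounding needed, the more robust the claimed contagion effect; the abstract's finding is that obesity and smoking cessation require implausibly strong homophily, while height, acne and headaches do not.
    
    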

  19. Sensitivity Analysis of a Horizontal Earth Electrode under Impulse ...

    African Journals Online (AJOL)

    This paper presents the sensitivity analysis of an earthing conductor under the influence of impulse current arising from a lightning stroke. The approach is based on the 2nd order finite difference time domain (FDTD). The earthing conductor is regarded as a lossy transmission line where it is divided into series connected ...

  20. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
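    The article's point about non-linear models can be illustrated on the product model y = x1·x2 with independent normal inputs: the first-order GUM propagation misses the u1²u2² term that a full variance computation captures. The means and uncertainties below are an invented example, not from the article.

    ```python
    import random

    def gum_first_order_std(m1, u1, m2, u2):
        """First-order law of propagation of uncertainty (GUM) for y = x1*x2
        with independent inputs: u_y^2 = (m2*u1)^2 + (m1*u2)^2."""
        return ((m2 * u1) ** 2 + (m1 * u2) ** 2) ** 0.5

    def monte_carlo_std(m1, u1, m2, u2, n=200000, seed=2):
        """Brute-force propagation: sample the inputs, take the sample std."""
        rng = random.Random(seed)
        ys = [rng.gauss(m1, u1) * rng.gauss(m2, u2) for _ in range(n)]
        mean = sum(ys) / n
        return (sum((y - mean) ** 2 for y in ys) / n) ** 0.5

    # With m1 = m2 = 1 and u1 = u2 = 0.5 the exact std is
    # sqrt(m2^2*u1^2 + m1^2*u2^2 + u1^2*u2^2) = 0.75, while the first-order
    # GUM value is sqrt(0.5) ≈ 0.707: the cross term only matters when the
    # relative uncertainties are large, i.e. when the model is 'non-linear
    # in the neighbourhood of the best estimates'.
    u_gum = gum_first_order_std(1.0, 0.5, 1.0, 0.5)
    u_mc = monte_carlo_std(1.0, 0.5, 1.0, 0.5)
    ```
    
    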

  1. Sensitivity analysis of the Ohio phosphorus risk index

    Science.gov (United States)

    The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...

  2. Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations

    DEFF Research Database (Denmark)

    Kamran, Faisal; Andersen, Peter E.

    2015-01-01

    profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical...

  3. Omitted Variable Sensitivity Analysis with the Annotated Love Plot

    Science.gov (United States)

    Hansen, Ben B.; Fredrickson, Mark M.

    2014-01-01

    The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…

  4. Weighting-Based Sensitivity Analysis in Causal Mediation Studies

    Science.gov (United States)

    Hong, Guanglei; Qin, Xu; Yang, Fan

    2018-01-01

    Through a sensitivity analysis, the analyst attempts to determine whether a conclusion of causal inference could be easily reversed by a plausible violation of an identification assumption. Analytic conclusions that are harder to alter by such a violation are expected to add a higher value to scientific knowledge about causality. This article…

  5. Sensitivity analysis of railpad parameters on vertical railway track dynamics

    NARCIS (Netherlands)

    Oregui Echeverria-Berreyarza, M.; Nunez Vicencio, Alfredo; Dollevoet, R.P.B.J.; Li, Z.

    2016-01-01

    This paper presents a sensitivity analysis of railpad parameters on vertical railway track dynamics, incorporating the nonlinear behavior of the fastening (i.e., downward forces compress the railpad whereas upward forces are resisted by the clamps). For this purpose, solid railpads, rail-railpad

  6. Methods for global sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, Evelyne A.; Bokkers, Eddy; Heijungs, Reinout; Boer, de Imke J.M.

    2017-01-01

    Purpose: Input parameters required to quantify environmental impact in life cycle assessment (LCA) can be uncertain due to e.g. temporal variability or unknowns about the true value of emission factors. Uncertainty of environmental impact can be analysed by means of a global sensitivity analysis to

  7. Sensitivity analysis on ultimate strength of aluminium stiffened panels

    DEFF Research Database (Denmark)

    Rigo, P.; Sarghiuta, R.; Estefen, S.

    2003-01-01

    This paper presents the results of an extensive sensitivity analysis carried out by the Committee III.1 "Ultimate Strength" of ISSC'2003 in the framework of a benchmark on the ultimate strength of aluminium stiffened panels. Previously, different benchmarks were presented by ISSC committees on ul...

  8. Sensitivity and specificity of coherence and phase synchronization analysis

    International Nuclear Information System (INIS)

    Winterhalder, Matthias; Schelter, Bjoern; Kurths, Juergen; Schulze-Bonhage, Andreas; Timmer, Jens

    2006-01-01

    In this Letter, we show that coherence and phase synchronization analysis are sensitive but not specific in detecting the correct class of underlying dynamics. We propose procedures to increase specificity and demonstrate the power of the approach by application to paradigmatic dynamic model systems

  9. Sensitivity Analysis of Structures by Virtual Distortion Method

    DEFF Research Database (Denmark)

    Gierlinski, J.T.; Holnicki-Szulc, J.; Sørensen, John Dalsgaard

    1991-01-01

    are used in structural optimization, see Haftka [4]. The recently developed Virtual Distortion Method (VDM) is a numerical technique which offers an efficient approach to calculation of the sensitivity derivatives. This method has been originally applied to structural remodelling and collapse analysis, see...

  10. Design tradeoff studies and sensitivity analysis. Appendix B

    Energy Technology Data Exchange (ETDEWEB)

    1979-05-25

    The results of the design trade-off studies and the sensitivity analysis of Phase I of the Near Term Hybrid Vehicle (NTHV) Program are presented. The effects of variations in the design of the vehicle body, propulsion systems, and other components on vehicle power, weight, cost, and fuel economy and an optimized hybrid vehicle design are discussed. (LCL)

  11. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.

  12. Parametric Covariance Model for Horizon-Based Optical Navigation

    Science.gov (United States)

    Hikes, Jacob; Liounis, Andrew J.; Christian, John A.

    2016-01-01

    This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.

  13. Sensitivity analysis of LOFT L2-5 test calculations

    International Nuclear Information System (INIS)

    Prosek, Andrej

    2014-01-01

    The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for the sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates a large break loss of coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods-Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power etc. The FFTBM-SM was used to assess the influence of the input parameters on the calculated results. The only difference between FFTBM-SM and the original FFTBM is that in FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so-called edge is the difference between the first and last data point of one period of the signal) in calculating the average amplitude. It is very important to eliminate this unphysical contribution to the average amplitude, which is used as a figure of merit for the influence of an input parameter on the output parameters. The idea is to use the reference calculation as the 'experimental signal', the 'sensitivity run' as the 'calculated signal', and the average amplitude as a figure of merit for sensitivity instead of for code accuracy. The larger the average amplitude, the larger the influence of the varied input parameter. The results show that with FFTBM-SM the analyst can get a good picture of the contribution of a parameter variation to the results. They show when the input parameters are influential and how large this influence is. FFTBM-SM could also be used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be
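    A minimal sketch of the FFTBM figure of merit described above: the average amplitude of the Fourier transform of the difference between a reference run and a sensitivity run, normalized by the reference spectrum, with optional signal mirroring to remove the edge effect. This is a simplified illustration (naive DFT, no frequency weighting), not the full FFTBM-SM implementation.

    ```python
    import cmath

    def dft_amplitudes(x):
        """Magnitudes of the discrete Fourier transform (naive O(n^2) DFT)."""
        n = len(x)
        return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n))) / n for k in range(n)]

    def average_amplitude(reference, variant, mirror=True):
        """FFTBM figure of merit:
        AA = sum |F(variant - reference)| / sum |F(reference)|.
        With mirror=True the signals are symmetrized first (FFTBM-SM),
        removing the spurious 'edge' contribution that appears when a
        signal's first and last points differ."""
        diff = [v - r for r, v in zip(reference, variant)]
        if mirror:
            diff = diff + diff[::-1]
            reference = reference + reference[::-1]
        return sum(dft_amplitudes(diff)) / sum(dft_amplitudes(reference))

    # A sensitivity run that deviates more from the reference yields a larger
    # AA, i.e. the varied input parameter is more influential.
    ref = [float(t) for t in range(32)]
    aa_small = average_amplitude(ref, [x * 1.01 for x in ref])
    aa_large = average_amplitude(ref, [x * 1.10 for x in ref])
    ```

    For a purely scaled variant the measure reduces to the scale factor minus one, which makes the ranking behaviour easy to verify.
    
    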

  14. Motivations of parametric studies

    International Nuclear Information System (INIS)

    Birac, C.

    1988-01-01

    The paper concerns the motivations of parametric studies in connection with the Programme for the Inspection of Steel Components PISC II. The objective of the PISC II exercise is to evaluate the effectiveness of current and advanced NDT techniques for inspection of reactor pressure vessel components. The parametric studies were initiated to determine the influence of some parameters on defect detection and dimensioning, and to increase the technical bases of the Round Robin Tests. A description is given of the content of the parametric studies including:- the effect of the defects' characteristics, the effect of equipment characteristics, the effect of cladding, and possible use of electromagnetic techniques. (U.K.)

  15. Sensitivity/uncertainty analysis of a borehole scenario comparing Latin Hypercube Sampling and deterministic sensitivity approaches

    International Nuclear Information System (INIS)

    Harper, W.V.; Gupta, S.K.

    1983-10-01

    A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of the deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited-parameter capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
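    Latin Hypercube Sampling, the statistical approach named above, stratifies each parameter's range into n equal intervals and samples each interval exactly once, pairing the strata randomly across parameters. A minimal sketch on the unit hypercube (the eight-parameter count matches the borehole model; the mapping to physical ranges is omitted):

    ```python
    import random

    def latin_hypercube(n_samples, n_params, seed=0):
        """Latin Hypercube Sample on [0,1]^d: each parameter's range is split
        into n_samples equal strata, each stratum is sampled exactly once,
        and the strata are randomly paired across parameters."""
        rng = random.Random(seed)
        columns = []
        for _ in range(n_params):
            # one random point inside each of the n strata, then shuffle
            column = [(i + rng.random()) / n_samples for i in range(n_samples)]
            rng.shuffle(column)
            columns.append(column)
        # transpose so each row is one sample point
        return list(zip(*columns))

    # e.g. 10 model runs covering all 8 borehole parameters
    pts = latin_hypercube(10, 8)
    ```

    Stratification is what lets LHS cover each parameter's full range with far fewer runs than simple random sampling, which is why it "should work well for codes with a moderate number of parameters".
    
    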

  16. Sensitivity and uncertainty analysis of NET/ITER shielding blankets

    International Nuclear Information System (INIS)

    Hogenbirk, A.; Gruppelaar, H.; Verschuur, K.A.

    1990-09-01

    Results are presented of sensitivity and uncertainty calculations based upon the European fusion file (EFF-1). The effect of uncertainties in Fe, Cr and Ni cross sections on the nuclear heating in the coils of a NET/ITER shielding blanket has been studied. The analysis has been performed for the total cross section as well as partial cross sections. The correct expression for the sensitivity profile was used, including the gain term. The resulting uncertainty in the nuclear heating lies between 10 and 20 per cent. (author). 18 refs.; 2 figs.; 2 tabs

  17. Sensitivity analysis of critical experiments with evaluated nuclear data libraries

    International Nuclear Information System (INIS)

    Fujiwara, D.; Kosaka, S.

    2008-01-01

    Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for k-eff were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from ENDF/B-VI.8, ENDF/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed k-eff discrepancies between libraries were decomposed to specify the sources of difference in the nuclear data libraries using a sensitivity analysis technique. The obtained sensitivity profiles are also utilized to estimate the adequacy of cold critical experiments to the boiling water reactor under hot operating conditions. (authors)

  18. Assessment of bioethanol yield by S. cerevisiae grown on oil palm residues: Monte Carlo simulation and sensitivity analysis.

    Science.gov (United States)

    Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah

    2015-01-01

    Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae, with the addition of palm oil mill effluent (POME) as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained with the glucose present in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was then assessed using Monte Carlo simulation (stochastic variables) to determine probability distributions due to fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. A sensitivity analysis was also done to evaluate the impact of each kinetic parameter on the fermentation performance. It was found that bioethanol fermentation depends highly on the growth of the tested yeast. Copyright © 2014 Elsevier Ltd. All rights reserved.
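    The Monte Carlo step described above can be sketched generically: draw the kinetic parameters from assumed distributions, push each draw through the model, and read off a percentile band for the yield. The toy yield relationship and the parameter distributions below are invented placeholders for illustration, not the paper's kinetic model or its fitted values.

    ```python
    import random

    def simulate_yield(mu_max, ks, s0=20.0):
        """Hypothetical stand-in for the fermentation model: yield increases
        with the maximum specific growth rate mu_max and decreases with the
        half-saturation constant ks (illustrative only)."""
        return 0.51 * mu_max * s0 / (ks + s0)

    def monte_carlo_yield(n=50000, seed=3):
        """Sample kinetic parameters from assumed distributions and collect
        the distribution of predicted yields (the stochastic-variable idea)."""
        rng = random.Random(seed)
        ys = []
        for _ in range(n):
            mu_max = rng.gauss(1.0, 0.05)  # assumed mean/sd, not from the paper
            ks = rng.gauss(2.0, 0.3)       # assumed mean/sd, not from the paper
            ys.append(simulate_yield(mu_max, ks))
        ys.sort()
        return ys[int(0.05 * n)], ys[int(0.95 * n)]  # 5th-95th percentile band

    lo, hi = monte_carlo_yield()
    ```

    The width of the resulting band is the fermentation analogue of the 0.423-0.501 g/g range reported in the abstract; a sensitivity analysis then asks which parameter's variation drives most of that width.
    
    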

  19. Parametric CFD Analysis to Study the Influence of Fin Geometry on the Performance of a Fin and Tube Heat Exchanger

    DEFF Research Database (Denmark)

    Singh, Shobhana; Sørensen, Kim; Condra, Thomas Joseph

    2016-01-01

    Heat transfer and pressure loss characteristics of a fin and tube heat exchanger are numerically investigated based on parametric fin geometry. The cross-flow type heat exchanger with circular tubes and rectangular fin profile is selected as a reference design. The fin geometry is varied using...... a design aspect ratio as a variable parameter in a range of 0.1-1.0 to predict the impact on overall performance of the heat exchanger. In this paper, geometric profiles with a constant thickness of fin base are studied. Three-dimensional, steady state CFD model is developed using commercially available...... are determined. The best performed geometric fin profile based on the higher heat transfer and lower pressure loss is predicted. The study provides insights into the impact of fin geometry on the heat transfer performance which help escalate the understanding of heat exchanger designing and manufacturing...

  20. Parametric analysis of the operation of nocturnal radiative cooling panels coupled with in room PCM ceiling panels

    DEFF Research Database (Denmark)

    Bourdakis, Eleftherios; Kazanci, Ongun Berk; Péan, T.Q.

    2017-01-01

    The scope of this parametric simulation study was to identify the optimal combination of set-points for different parameters of a radiant PCM ceiling panel cooling system that will result in the best indoor thermal environment with the least possible energy use. The results showed that for each......03:00 and get activated when the temperature in the storage tank was below 21°C, 69.8°F, activate the heat pump no earlier than 05:00 and get activated when the temperature in the storage tank was below 15°C, 59°F, and lastly have a temperature difference between the output of the solar panels