WorldWideScience

Sample records for applied sensitivity analysis

  1. Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities

    Directory of Open Access Journals (Sweden)

    Thi Thanh Huyen Nguyen

    2015-11-01

    Full Text Available The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher education institutes with PhD training programs) in Vietnam, applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, obtained by replacing as well as aggregating/disaggregating variables. The measurement results allow us to examine the sensitivity of these universities' efficiency scores to the choice of variable sets. The findings also show the impact of the variables on their efficiency and its “sustainability”.
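
    The envelopment form of the input-oriented CCR model used in DEA studies like this one can be solved as one linear program per university (decision-making unit). A minimal sketch with invented data, not the study's 30 universities or its eight variable sets:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k: min theta subject to
    X.T @ lam <= theta * X[k],  Y.T @ lam >= Y[k],  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # decision vector [theta, lam_1..lam_n]
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])  # input constraints
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # output constraints
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.x[0]

# Three hypothetical DMUs, two inputs (e.g. staff, budget), one output (e.g. graduates)
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0]])
scores = [ccr_efficiency(X, Y, k) for k in range(len(X))]  # third DMU is dominated
```

    Rerunning such a program with variables replaced or aggregated, as the study does, shows directly how efficiency scores move with the specification.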

  2. Sensitivity Analysis Applied in Design of Low Energy Office Building

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik

    2008-01-01

    Building performance can be expressed by different indicators such as primary energy use, environmental load and/or indoor environmental quality, and a building performance simulation can provide the decision maker with a quantitative measure of the extent to which an integrated design solution satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to develop alternative design solutions more efficiently, or to reach optimized design solutions. A sensitivity analysis makes it possible to identify the most important parameters in relation to building performance and to focus design and optimization of sustainable buildings on these fewer, but most important, parameters. The sensitivity analyses will typically be performed at a reasonably early stage of the building design process, where...

  3. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    Science.gov (United States)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  4. Sensitivity analysis applied to the construction of radial basis function networks.

    Science.gov (United States)

    Shi, D; Yeung, D S; Gao, J

    2005-09-01

    Conventionally, a radial basis function (RBF) network is constructed by obtaining the cluster centers of the basis functions through maximum likelihood learning. This paper proposes a novel learning algorithm for the construction of RBF networks using sensitivity analysis. In training, the number of hidden neurons and the centers of their radial basis functions are determined by maximizing the output's sensitivity to the training data. In classification, the minimal number of such hidden neurons with the maximal sensitivity will be the most generalizable to unknown data. Our experimental results show that the proposed sensitivity-based RBF classifier outperforms conventional RBF networks and is as accurate as the support vector machine (SVM). Hence, sensitivity analysis is a promising alternative approach to the construction of RBF networks.
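
    The paper's sensitivity-based center selection is specific to its algorithm, but the underlying RBF machinery is standard: Gaussian basis functions around chosen centers with a linear readout fitted by least squares. A generic sketch with a fixed grid of centers and illustrative 1-D data (not the authors' method):

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix: one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (200, 1))           # toy 1-D inputs
y = np.sin(3.0 * X[:, 0])                      # target function
centers = np.linspace(-1.0, 1.0, 10)[:, None]  # fixed grid of 10 centers
Phi = rbf_design(X, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # linear readout by least squares
rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))    # training fit
```

    The construction problem the paper addresses is precisely how many columns (hidden neurons) such a design matrix should have and where their centers should sit.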

  5. Global sensitivity analysis applied to drying models for one or a population of granules

    DEFF Research Database (Denmark)

    Mortier, Severine Therese F. C.; Gernaey, Krist; De Beer, Thomas

    2014-01-01

    The development of mechanistic models for pharmaceutical processes is of increasing importance due to a noticeable shift toward continuous production in the industry. Sensitivity analysis is a powerful tool during the model building process. A global sensitivity analysis (GSA), exploring sensitivity in a broad parameter space, is performed to detect the most sensitive factors in two models, that is, one for drying of a single granule and one for the drying of a population of granules [using a population balance model (PBM)], the latter extended by including the gas velocity as an extra input compared to our earlier work. beta(2) was found to be the most important factor for the single particle model, which is useful information when performing model calibration. For the PBM model, the granule radius and gas temperature were found to be most sensitive. The former indicates that granulator...

  6. Sensitivity analysis

    Science.gov (United States)

    Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs).

  7. Parameters sensitivity analysis for a crop growth model applied to winter wheat in the Huanghuaihai Plain in China

    Science.gov (United States)

    Liu, M.; He, B.; Lü, A.; Zhou, L.; Wu, J.

    2014-06-01

    Parameter sensitivity analysis is a crucial step in effective model calibration. It quantitatively apportions the variation of model output to different sources of variation, and identifies how "sensitive" a model is to changes in the values of model parameters. By calibrating only the parameters to which model outputs are sensitive, parameter estimation becomes more efficient. Due to uncertainties associated with yield estimates in a regional assessment, field-based models that perform well at the field scale are not accurate enough at the regional scale. Conducting parameter sensitivity analysis at the regional scale, and analyzing the differences in parameter sensitivity between stations, makes model calibration and validation in different sub-regions more efficient and benefits application of the model at the regional scale. Through simulating 2000 × 22 samples for 10 stations in the Huanghuaihai Plain, this study found that TB (optimal temperature), HI (normal harvest index), WA (potential radiation use efficiency), BN2 (normal fraction of N in crop biomass at mid-season) and RWPC1 (fraction of root weight at emergence) are more sensitive than other parameters. Parameters that determine nutrient supply and LAI development have higher global sensitivity indices than first-order indices. For spatial application, soil diversity is crucial because soil is responsible for the differences in crop parameter sensitivity indices between sites.

  8. Emulation and Sobol' sensitivity analysis of an atmospheric dispersion model applied to the Fukushima nuclear accident

    Science.gov (United States)

    Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne

    2016-04-01

    Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
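
    The Sobol' first-order indices the authors estimate through their Gaussian-process emulator can be illustrated with the standard pick-freeze Monte Carlo estimator on a cheap stand-in model. Here a linear toy function with known exact indices (0.8 and 0.2) replaces the expensive dispersion code; the emulation step itself is not shown:

```python
import numpy as np

def sobol_first_order(f, d, n=200_000, seed=0):
    """Pick-freeze estimate of first-order Sobol' indices for f on [0, 1]^d."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.0, 1.0, (n, d))
    B = rng.uniform(0.0, 1.0, (n, d))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]                   # "freeze" coordinate i at A's values
        S[i] = np.mean(fA * (f(ABi) - fB)) / var
    return S

# Linear stand-in model: Var = 5/12, exact indices S1 = 0.8, S2 = 0.2
f = lambda x: 2.0 * x[:, 0] + x[:, 1]
S = sobol_first_order(f, 2)
```

    The estimator needs (d + 2) × n model runs, which is exactly why the authors substitute an emulator for Polyphemus/Polair3D.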

  9. Applied analysis

    CERN Document Server

    Lanczos, Cornelius

    2010-01-01

    Basic text for graduate and advanced undergraduate students deals with the search for roots of algebraic equations encountered in vibration and flutter problems and in those of static and dynamic stability. Other topics are devoted to matrices and eigenvalue problems, large-scale linear systems, harmonic analysis and data analysis, and more.

  10. Applying Recursive Sensitivity Analysis to Multi-Criteria Decision Models to Reduce Bias in Defense Cyber Engineering Analysis

    Science.gov (United States)

    2015-10-28

    Considerable research has been conducted on the topic of decision aiding methods such as Multi-Criteria and Multi-Objective Decision Analysis to...the best compromise solution amongst multiple or infinite possibilities. This is generally known as Multi-Objective Decision Making (MODM). For the...from those of the program manager, resource sponsor, or even the user. This research focuses on the use of recursive sensitivity analysis to mitigate

  11. Comprehensive Mechanisms for Combustion Chemistry: An Experimental and Numerical Study with Emphasis on Applied Sensitivity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dryer, Frederick L.

    2009-04-10

    This project was an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work were conducted in large-diameter flow reactors, at 0.3 to 18 atm pressure, 500 to 1100 K temperature, and 10^-2 to 2 seconds reaction time. Experiments were also conducted to determine reference laminar flame speeds using a premixed laminar stagnation flame experiment and particle image velocimetry, as well as pressurized bomb experiments. Flow reactor data for oxidation experiments include: (1) adiabatic/isothermal species time-histories of a reaction under fixed initial pressure, temperature, and composition, to determine the species present after a fixed reaction time; (2) species distributions with varying initial reaction temperature; (3) perturbations of well-defined reaction systems (e.g. CO/H2/O2 or H2/O2) by the addition of small amounts of an additive species. Radical scavenging techniques are applied to determine unimolecular decomposition rates from pyrolysis experiments. Laminar flame speed measurements are determined as a function of equivalence ratio, dilution, and unburned gas temperature at 1 atm pressure. Hierarchical, comprehensive mechanistic construction methods were applied to develop detailed kinetic mechanisms which describe the measurements and literature kinetic data. Modeling using well-defined and validated mechanisms for the CO/H2/oxidant systems and perturbations of oxidation experiments by small amounts of additives were also used to derive absolute reaction rates and to investigate the compatibility of published elementary kinetic and thermochemical information. Numerical tools were developed and applied to assess the importance of individual elementary reactions to the predictive performance of the

  12. Sensitivity analysis of six soil organic matter models applied to the decomposition of animal manures and crop residues

    Directory of Open Access Journals (Sweden)

    Daniele Cavalli

    2016-09-01

    Full Text Available Two features distinguishing soil organic matter simulation models are the type of kinetics used to calculate pool decomposition rates, and the algorithm used to handle the effects of nitrogen (N) shortage on carbon (C) decomposition. Compared to the widely used first-order kinetics, Monod kinetics represent organic matter decomposition more realistically, because they relate decomposition to both substrate and decomposer size. Most models impose a fixed C to N ratio for microbial biomass. When the N required by microbial biomass to decompose a given amount of substrate C is larger than soil available N, carbon decomposition rates are limited proportionally to the N deficit (N inhibition hypothesis). Alternatively, C-overflow was proposed as a way of getting rid of excess C by allocating it to a storage pool of polysaccharides. We built six models to compare the combinations of three decomposition kinetics (first-order, Monod, and reverse Monod) and two ways to simulate the effect of N shortage on C decomposition (N inhibition and C-overflow). We conducted sensitivity analysis to identify the model parameters that most affected CO2 emissions and soil mineral N during a simulated 189-day laboratory incubation, assuming constant water content and temperature. We evaluated model output sensitivity at different stages of organic matter decomposition in a soil amended with three inputs of increasing C to N ratio: liquid manure, solid manure, and low-N crop residue. Only a few model parameters and their interactions were responsible for consistent variations of CO2 and soil mineral N. These parameters were mostly related to microbial biomass and to the partitioning of applied C among input pools, as well as their decomposition constants. In addition, in models with Monod kinetics, CO2 was also sensitive to a variation of the half-saturation constants. C-overflow enhanced pool decomposition compared to the N inhibition hypothesis when N shortage occurred.
Accumulated C in the
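
    The contrast between first-order and Monod kinetics described above can be sketched in a few lines; symbols and constants are illustrative, not the paper's calibrated values:

```python
import numpy as np

C = np.linspace(0.0, 100.0, 101)       # substrate carbon (e.g. mg C per kg soil)
B = 20.0                               # microbial biomass carbon
k1 = 0.05                              # first-order rate constant
k_max, Ks = 0.1, 30.0                  # Monod maximum rate and half-saturation constant

rate_first = k1 * C                    # first-order: depends on substrate size only
rate_monod = k_max * B * C / (Ks + C)  # Monod: substrate AND decomposer size; saturates
```

    The saturating form is why, in the Monod variants, the half-saturation constants Ks appear among the sensitive parameters for CO2.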

  13. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment.

  14. Applied longitudinal analysis

    CERN Document Server

    Fitzmaurice, Garrett M; Ware, James H

    2012-01-01

    Praise for the First Edition: ". . . [this book] should be on the shelf of everyone interested in . . . longitudinal data analysis." -Journal of the American Statistical Association. Features newly developed topics and applications of the analysis of longitudinal data. Applied Longitudinal Analysis, Second Edition presents modern methods for analyzing data from longitudinal studies and now features the latest state-of-the-art techniques. The book emphasizes practical, rather than theoretical, aspects of methods for the analysis of diverse types of lo

  16. Applied multivariate statistical analysis

    CERN Document Server

    Härdle, Wolfgang Karl

    2015-01-01

    Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...

  17. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  18. Integrated Sensitivity Analysis Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Friedman-Hill, Ernest J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoffman, Edward L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gibson, Marcus J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Clay, Robert L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-08-01

    Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.

  19. Applied functional analysis

    CERN Document Server

    Griffel, DH

    2002-01-01

    A stimulating introductory text, this volume examines many important applications of functional analysis to mechanics, fluid mechanics, diffusive growth, and approximation. Detailed enough to impart a thorough understanding, the text is also sufficiently straightforward for those unfamiliar with abstract analysis. Its four-part treatment begins with distribution theory and discussions of Green's functions. Essentially independent of the preceding material, the second and third parts deal with Banach spaces, Hilbert space, spectral theory, and variational techniques. The final part outlines the

  20. Applied functional analysis

    CERN Document Server

    Oden, J Tinsley

    2010-01-01

    The textbook is designed to drive a crash course for beginning graduate students majoring in something besides mathematics, introducing mathematical foundations that lead to classical results in functional analysis. More specifically, Oden and Demkowicz want to prepare students to learn the variational theory of partial differential equations, distributions, and Sobolev spaces and numerical analysis with an emphasis on finite element methods. The 1996 first edition has been used in a rather intensive two-semester course. -Book News, June 2010

  1. Sensitivity analysis of SPURR

    Energy Technology Data Exchange (ETDEWEB)

    Witholder, R.E.

    1980-04-01

    The Solar Energy Research Institute has conducted a limited sensitivity analysis on a System for Projecting the Utilization of Renewable Resources (SPURR). The study utilized the Domestic Policy Review scenario for SPURR agricultural and industrial process heat and utility market sectors. This sensitivity analysis determines whether variations in solar system capital cost, operation and maintenance cost, and fuel cost (biomass only) correlate with intuitive expectations. The results of this effort contribute to a much larger issue: validation of SPURR. Such a study has practical applications for engineering improvements in solar technologies and is useful as a planning tool in the R and D allocation process.

  2. Sensitivity and specificity analysis of fringing-field dielectric spectroscopy applied to a multi-layer system modelling the human skin

    Science.gov (United States)

    Huclova, Sonja; Baumann, Dirk; Talary, Mark S.; Fröhlich, Jürg

    2011-12-01

    The sensitivity and specificity of dielectric spectroscopy for the detection of dielectric changes inside a multi-layered structure is investigated. We focus on providing a base for sensing physiological changes in the human skin, i.e. in the epidermal and dermal layers. The correlation between changes of the human skin's effective permittivity and changes of dielectric parameters and layer thickness of the epidermal and dermal layers is assessed using numerical simulations. Numerical models include fringing-field probes placed directly on a multi-layer model of the skin. The resulting dielectric spectra in the range from 100 kHz up to 100 MHz for different layer parameters and sensor geometries are used for a sensitivity and specificity analysis of this multi-layer system. First, employing a coaxial probe, a sensitivity analysis is performed for specific variations of the parameters of the epidermal and dermal layers. Second, the specificity of this system is analysed based on the roots and corresponding sign changes of the computed dielectric spectra and their first and second derivatives. The transferability of the derived results is shown by a comparison of the dielectric spectra of a coplanar probe and a scaled coaxial probe. Additionally, a comparison of the sensitivity of a coaxial probe and an interdigitated probe as a function of electrode distance is performed. It is found that the sensitivity for detecting changes of dielectric properties in the epidermal and dermal layers strongly depends on frequency. Based on an analysis of the dielectric spectra, changes in the effective dielectric parameters can theoretically be uniquely assigned to specific changes in permittivity and conductivity. However, in practice, measurement uncertainties may degrade the performance of the system.
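
    As a much cruder counterpart to the paper's 3-D fringing-field simulations, a 1-D series-layer (capacitors-in-series) approximation shows how layer thicknesses and permittivities combine into a single effective value; all numbers below are illustrative, not the skin parameters used in the study:

```python
# Layers in series normal to the field: eps_eff = sum(t_i) / sum(t_i / eps_i)
thick = [0.1e-3, 1.5e-3, 3.0e-3]  # epidermis, dermis, subcutis thickness (m), illustrative
eps = [40.0, 80.0, 20.0]          # relative permittivities, illustrative
eps_eff = sum(thick) / sum(t / e for t, e in zip(thick, eps))
```

    Perturbing one layer's thickness or permittivity and recomputing eps_eff is the zero-dimensional analogue of the sensitivity analysis performed here with full electromagnetic models.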

  3. [Highly sensitive detection technology for biological toxins applying sugar epitopes].

    Science.gov (United States)

    Uzawa, Hirotaka

    2009-01-01

    The Shiga toxin is a highly poisonous protein produced by enterohemorrhagic Escherichia coli O157. This bacterial toxin causes the hemolytic uremic syndrome. Ricin, a plant toxin from castor beans, is also highly toxic; it was used in an assassination in London, and there have recently been several cases of postal matter containing ricin. Both toxins are categorized as biological warfare agents by the Centers for Disease Control and Prevention. Conventional detection methods based on the antigen-antibody reaction, PCR and other cell-free assays have been proposed; however, those approaches have drawbacks in terms of sensitivity, analytical time, or stability of the detection reagents. Therefore, development of a facile and sensitive detection method is essential. Here we describe new detection methods applying carbohydrate epitopes as toxin ligands, based on the fact that the toxins bind cell-surface oligosaccharides: the Shiga toxin has an affinity for the globobiosyl (Gb2) disaccharide, and ricin binds the beta-D-galactose residue. For Shiga toxin detection, surface plasmon resonance (SPR) was applied. A polyanionic Gb2-glycopolymer was designed for this purpose and used for the assembly of Gb2 chips using alternating layer-by-layer technology. The method allowed us to detect the toxin at a concentration as low as the LD50. A synthetic carbohydrate ligand for ricin was designed and immobilized on the chips. SPR analysis with the chips allows us to detect ricin in a highly sensitive and facile manner (10 pg/ml, 5 min). Our present approaches provide a highly effective way to counter bioterrorism.

  4. Applying an energy balance model of a debris covered glacier through the Himalayan seasons - insights from the field and sensitivity analysis

    Science.gov (United States)

    Steiner, Jakob; Pellicciotti, Francesca; Buri, Pascal; Brock, Ben

    2016-04-01

    Although some recent studies have attempted to model melt below debris cover in the Himalaya as well as the European Alps, field measurements remain rare and the uncertainties of a number of parameters are difficult to constrain. The difficulty of accurately measuring sub-debris melt at one location over a longer period of time with stakes adds to the challenge of calibrating models adequately, as moving debris tends to tilt stakes. Based on measurements of sub-debris melt with stakes, as well as air and surface temperature at the same location, during three years from 2012 to 2014 at Lirung Glacier in the Nepalese Himalaya, we investigate results with the help of a previously developed energy balance model. We compare stake readings to cumulative melt as well as observed to modelled surface temperatures. With time series stretching through the pre-monsoon, monsoon and post-monsoon seasons of different years, we can show how the sensitive parameters differ between these seasons. Using radiation measurements from the AWS, we derive a temporally variable albedo time series. A thorough analysis of thermistor data showing the stratigraphy of temperature through the debris layer allows a detailed discussion of the variability as well as the uncertainty range of thermal conductivity. Distributed wind data, as well as results from a distributed surface roughness assessment, allow us to constrain the variability of turbulent fluxes between the different stake locations. We show that model results are especially sensitive to thermal conductivity, a value that changes substantially between the seasons. Values obtained from the field are compared to earlier studies, which shows large differences between locations in the Himalaya. We also show that wind varies by more than a factor of two between depressions and debris mounds, which has a significant influence on turbulent fluxes. Albedo decreases from the dry to the wet season and likely has some spatial variability that is
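
    The sensitivity to thermal conductivity can be illustrated with the standard conductive-flux approximation for sub-debris melt, a deliberate simplification of the full energy-balance model; all values below are illustrative, not the Lirung measurements:

```python
# Conductive heat flux through the debris layer, Q = k * (Ts - Ti) / d,
# converted to an ice melt rate.
RHO_ICE, L_FUSION = 900.0, 3.34e5  # ice density (kg/m^3), latent heat of fusion (J/kg)
d = 0.5                            # debris thickness, m
Ts, Ti = 10.0, 0.0                 # debris surface / ice interface temperature, deg C

melts = []
for k in (0.5, 1.0, 1.5):          # thermal conductivity, W/(m K)
    Q = k * (Ts - Ti) / d          # heat flux reaching the ice, W/m^2
    melts.append(Q / (RHO_ICE * L_FUSION) * 86400)  # m ice per day
```

    Melt scales linearly with k in this approximation, which is why a seasonally changing conductivity propagates so directly into modelled melt.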

  5. Conversation Analysis in Applied Linguistics

    DEFF Research Database (Denmark)

    Kasper, Gabriele; Wagner, Johannes

    2014-01-01

    For the last decade, conversation analysis (CA) has increasingly contributed to several established fields in applied linguistics. In this article, we will discuss its methodological contributions. The article distinguishes between basic and applied CA. Basic CA is a sociological endeavor concerned with understanding fundamental issues of talk in action and of intersubjectivity in human conduct. The field has expanded its scope from the analysis of talk—often phone calls—towards an integration of language with other semiotic resources for embodied action, including space and objects. Much of this expansion has been driven by applied work. After laying out CA's standard practices of data treatment and analysis, this article takes up the role of comparison as a fundamental analytical strategy and reviews recent developments into cross-linguistic and cross-cultural directions. The remaining article focuses...

  6. Applied analysis and differential equations

    CERN Document Server

    Cârj, Ovidiu

    2007-01-01

    This volume contains refereed research articles written by experts in the field of applied analysis, differential equations and related topics. Well-known leading mathematicians worldwide and prominent young scientists cover a diverse range of topics, including the most exciting recent developments. A broad range of topics of recent interest are treated: existence, uniqueness, viability, asymptotic stability, viscosity solutions, controllability and numerical analysis for ODE, PDE and stochastic equations. The scope of the book is wide, ranging from pure mathematics to various applied fields such as classical mechanics, biomedicine, and population dynamics.

  7. Sensitivity analysis of non-point sources in a water quality model applied to a dammed low-flow-reach river.

    Science.gov (United States)

    Silva, Nayana G M; von Sperling, Marcos

    2008-01-01

    Downstream of the Capim Branco I hydroelectric dam (Minas Gerais state, Brazil), there is a need to maintain a minimum flow of 7 m3/s. This low-flow reach (LFR) has a length of 9 km. In order to raise the water level in the low-flow reach, it was decided to construct intermediate dikes along the river bed. The LFR has a tributary that receives the discharge of treated wastewater. As part of this study, water quality of the low-flow reach was modelled in order to gain insight into its possible behaviour under different scenarios (without and with intermediate dikes). The QUAL2E equations were implemented in FORTRAN code. The model takes into account point-source and diffuse pollution. Uncertainty analysis was performed, presenting probabilistic results and allowing identification of the most important coefficients in the LFR water-quality model. The simulated results indicate, in general, very good conditions for most of the water quality parameters. The most influential variables found in the sensitivity analysis were the conversion coefficients (without and with dikes), the initial conditions in the reach (without dikes), the non-point incremental contributions (without dikes) and the hydraulic characteristics of the reach (with dikes).
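
    QUAL2E's dissolved-oxygen routine builds on the classical Streeter-Phelps deficit equation; a sketch with illustrative coefficients (not the calibrated values for the Capim Branco reach):

```python
import numpy as np

# DO deficit D(t) = kd*L0/(ka-kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t)
kd, ka = 0.3, 0.6                # deoxygenation / reaeration rates, 1/day
L0, D0 = 10.0, 1.0               # initial BOD and DO deficit, mg/L
t = np.linspace(0.0, 10.0, 101)  # days of travel downstream of the discharge
D = kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)
t_crit = t[np.argmax(D)]         # time of maximum deficit (the DO "sag")
```

    The conversion coefficients found most influential in the study play the role of kd and ka here: small changes in either shift both the depth and the location of the sag.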

  8. Applied survival analysis using R

    CERN Document Server

    Moore, Dirk F

    2016-01-01

    Applied Survival Analysis Using R covers the main principles of survival analysis, gives examples of how it is applied, and teaches how to put those principles to use to analyze data using R as a vehicle. Survival data, where the primary outcome is time to a specific event, arise in many areas of biomedical research, including clinical trials, epidemiological studies, and studies of animals. Many survival methods are extensions of techniques used in linear regression and categorical data, while other aspects of this field are unique to survival data. This text employs numerous actual examples to illustrate survival curve estimation, comparison of survivals of different groups, proper accounting for censoring and truncation, model variable selection, and residual analysis. Because explaining survival analysis requires more advanced mathematics than many other statistical topics, this book is organized with basic concepts and most frequently used procedures covered in earlier chapters, with more advanced topics...

  9. Economic modeling and sensitivity analysis.

    Science.gov (United States)

    Hay, J W

    1998-09-01

    The field of pharmacoeconomics (PE) faces serious concerns of research credibility and bias. The failure of researchers to reproduce similar results in similar settings, the inappropriate use of clinical data in economic models, the lack of transparency, and the inability of readers to make meaningful comparisons across published studies have greatly contributed to skepticism about the validity, reliability, and relevance of these studies to healthcare decision-makers. Using a case study in the field of lipid PE, two suggestions are presented for generally applicable reporting standards that will improve the credibility of PE. Health economists and researchers should be expected to provide either the software used to create their PE model or a multivariate sensitivity analysis of their PE model. Software distribution would allow other users to validate the assumptions and calculations of a particular model and apply it to their own circumstances. Multivariate sensitivity analysis can also be used to present results in a consistent and meaningful way that will facilitate comparisons across the PE literature. Using these methods, broader acceptance and application of PE results by policy-makers would become possible. To reduce the uncertainty about what is being accomplished with PE studies, it is recommended that these guidelines become requirements of both scientific journals and healthcare plan decision-makers. The standardization of economic modeling in this manner will increase the acceptability of pharmacoeconomics as a practical, real-world science.
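    A multivariate (probabilistic) sensitivity analysis of the kind recommended here can be sketched by jointly sampling all model inputs and reporting the distribution of the resulting cost-effectiveness ratio. Every distribution and number below is invented for illustration, not taken from the lipid case study:

```python
import random

random.seed(0)
icers = []
for _ in range(5000):
    delta_cost = random.gauss(1200.0, 200.0)   # incremental cost ($)
    delta_qaly = random.gauss(0.10, 0.02)      # incremental effectiveness (QALYs)
    if delta_qaly > 1e-6:                      # guard against a sign flip
        icers.append(delta_cost / delta_qaly)

icers.sort()
lo = icers[int(0.025 * len(icers))]
hi = icers[int(0.975 * len(icers))]
print(lo, hi)   # 95% uncertainty interval for the cost per QALY
```

Reporting such an interval, rather than a single point estimate, is one way to make results comparable across studies.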

  10. Size-specific sensitivity: Applying a new structured population model

    Energy Technology Data Exchange (ETDEWEB)

    Easterling, M.R.; Ellner, S.P.; Dixon, P.M.

    2000-03-01

    Matrix population models require the population to be divided into discrete stage classes. In many cases, especially when classes are defined by a continuous variable, such as length or mass, there are no natural breakpoints, and the division is artificial. The authors introduce the integral projection model, which eliminates the need for division into discrete classes, without requiring any additional biological assumptions. Like a traditional matrix model, the integral projection model provides estimates of the asymptotic growth rate, stable size distribution, reproductive values, and sensitivities of the growth rate to changes in vital rates. However, where the matrix model represents the size distributions, reproductive value, and sensitivities as step functions (constant within a stage class), the integral projection model yields smooth curves for each of these as a function of individual size. The authors describe a method for fitting the model to data, and they apply this method to data on an endangered plant species, northern monkshood (Aconitum noveboracense), with individuals classified by stem diameter. The matrix and integral models yield similar estimates of the asymptotic growth rate, but the reproductive values and sensitivities in the matrix model are sensitive to the choice of stage classes. The integral projection model avoids this problem and yields size-specific sensitivities that are not affected by stage duration. These general properties of the integral projection model will make it advantageous for other populations where there is no natural division of individuals into stage classes.
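    A minimal numerical sketch of an integral projection model: discretize the kernel on a size mesh (midpoint rule) and recover the asymptotic growth rate and a smooth stable size distribution by power iteration. The kernel components below are hypothetical, not the fitted monkshood model:

```python
import math

# Hypothetical IPM kernel k(y, x) = survival(x)*growth(y, x) + fecundity(y, x)
def growth(y, x):
    # probability density of moving from size x to size y in one step
    mu, sigma = 0.9 * x + 0.4, 0.3
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def survival(x):
    return 1.0 / (1.0 + math.exp(-(x - 1.0)))

def fecundity(y, x):
    # offspring of a size-x parent, recruiting at size y
    return 0.5 * x * growth(y, 0.5)

n, lo, hi = 100, 0.0, 4.0
h = (hi - lo) / n
mesh = [lo + h * (i + 0.5) for i in range(n)]          # midpoint rule

K = [[(survival(x) * growth(y, x) + fecundity(y, x)) * h for x in mesh]
     for y in mesh]

# dominant eigenvalue = asymptotic growth rate, by power iteration;
# the iterate w converges to a smooth stable size distribution
w = [1.0] * n
for _ in range(200):
    w = [sum(K[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = max(w)
    w = [v / lam for v in w]
print(round(lam, 3))
```

Because the mesh is only a quadrature grid, refining it changes the accuracy of the integral, not the biological assumptions, which is the point of the method.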

  11. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  12. Vibration Sensitive Keystroke Analysis

    NARCIS (Netherlands)

    Lopatka, M.; Peetz, M.-H.; van Erp, M.; Stehouwer, H.; van Zaanen, M.

    2009-01-01

    We present a novel method for performing non-invasive biometric analysis on habitual keystroke patterns using a vibration-based feature space. With the increasing availability of 3-D accelerometer chips in laptop computers, conventional methods using time vectors may be augmented using a distinct fe

  13. Polarization-sensitive optical coherence tomography applied to intervertebral disk

    Science.gov (United States)

    Matcher, Stephen J.; Winlove, Peter; Gangnus, Sergei V.

    2003-07-01

    Polarization-sensitive optical coherence tomography (PSOCT) is a powerful new optical imaging modality that is sensitive to the birefringence properties of tissues. It thus has potential applications in studying the large-scale ordering of collagen fibers within connective tissues and changes related to pathology. As a tissue for study by PSOCT, intervertebral disk represents an interesting system as the collagen organization is believed to show pronounced variations with depth, on a spatial scale of about 100 μm. We have used a polarization-sensitive optical coherence tomography system to measure the birefringence properties of bovine caudal intervertebral disk and compared this with equine flexor tendon. The result for equine tendon, δ = (3.0 ± 0.5)×10⁻³ at 1.3 μm, is in broad agreement with values reported for bovine tendon, while bovine intervertebral disk displays a birefringence of about half this, δ = 1.2×10⁻³ at 1.3 μm. While tendon appears to show a uniform fast axis over 0.8 mm depth, intervertebral disk shows image contrast at all orientations relative to a linearly polarized input beam, suggesting a variation in fast-axis orientation with depth. These initial results suggest that PSOCT could be a useful tool to study collagen organization within this tissue and its variation with applied load and disease.

  14. Conversation Analysis and Applied Linguistics.

    Science.gov (United States)

    Schegloff, Emanuel A.; Koshik, Irene; Jacoby, Sally; Olsher, David

    2002-01-01

    Offers biographical guidance on several major areas of conversation-analytic work--turn-taking, repair, and word selection--and indicates past or potential points of contact with applied linguistics. Also discusses areas of applied linguistic work. (Author/VWL)

  15. Differential sensitivity theory applied to movement of maxima responses. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Maudlin, P.J.; Parks, C.V.; Cacuci, D.G.

    1981-01-01

    Differential sensitivity theory (DST) is a recently developed methodology to evaluate response derivatives dR/dα by using adjoint functions which correspond to the differentiated (with respect to an arbitrary parameter α) linear or nonlinear physical system of equations. However, for many problems, where responses of importance are local maxima such as peak temperature, power, or heat flux, changes in the phase space location of the peak itself are of interest. This summary presents the DST procedure for predicting phase-space shifts of maxima responses as applied to the MELT-III fast reactor safety code. An FFTF protected transient involving a $0.23/s ramp reactivity insertion with scram on high power was selected for investigation.

  16. A sensitivity analysis applied to morphological computations

    NARCIS (Netherlands)

    De Vries, M.

    1985-01-01

    In river engineering morphological predictions have to be made to study the implications of changes in a river system due to natural causes or human interference. It regards here time-depending processes. Characteristic parameters of the river have to be forecasted both in time and space. The morpho

  17. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Haihua Zhao; Vincent A. Mousseau

    2008-09-01

    This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. Contrary to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach equations for the propagation of uncertainty are constructed and the sensitivity is solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to the above “black box” approach with only a few runs to cover a large uncertainty region. Because only a small number of runs is required, those runs can be done with high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the result of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty analysis. By knowing the relative sensitivity of time and space steps with other
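    The forward sensitivity method can be illustrated on a single ODE: differentiating dy/dt = -k*y with respect to k yields an augmented equation for s = ∂y/∂k that is integrated alongside the state. The toy decay problem and step sizes below are mine, not from the report:

```python
import math

# state equation: dy/dt = -k*y; sensitivity equation: ds/dt = -k*s - y
k, y0 = 0.7, 2.0
dt, steps = 1e-4, 10000          # integrate to T = 1.0 by explicit Euler
y, s = y0, 0.0                   # s = dy/dk starts at zero
for _ in range(steps):
    y, s = y + dt * (-k * y), s + dt * (-k * s - y)

T = dt * steps
analytic = -T * y0 * math.exp(-k * T)  # d/dk of y0 * exp(-k*T)
print(s, analytic)
```

One extra equation per parameter gives the full sensitivity in a single run, which is the "glass box" economy the abstract describes.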

  18. Eigenfrequency sensitivity analysis of flexible rotors

    Directory of Open Access Journals (Sweden)

    Šašek J.

    2007-10-01

    Full Text Available This paper deals with the sensitivity analysis of eigenfrequencies with respect to design parameters. The sensitivity analysis is applied to a rotor which consists of a shaft and a disk. The design parameters are the disk radius and the disk width. The shaft is modeled as a 1D continuum using shaft finite elements. The disks of rotating systems are commonly modeled as rigid bodies. The presented approach to disk modeling is based on a 3D flexible continuum discretized using hexahedral finite elements. The two components of the rotor are connected by specially proposed couplings. The whole rotor is modeled in a rotating coordinate system, taking rotation effects into account (gyroscopic and dynamic stiffness matrices).

  19. Applying Winnow to Context-Sensitive Spelling Correction

    CERN Document Server

    Golding, A R; Golding, Andrew R.; Roth, Dan

    1996-01-01

    Multiplicative weight-updating algorithms such as Winnow have been studied extensively in the COLT literature, but only recently have people started to use them in applications. In this paper, we apply a Winnow-based algorithm to a task in natural language: context-sensitive spelling correction. This is the task of fixing spelling errors that happen to result in valid words, such as substituting {\\it to\\/} for {\\it too}, {\\it casual\\/} for {\\it causal}, and so on. Previous approaches to this problem have been statistics-based; we compare Winnow to one of the more successful such approaches, which uses Bayesian classifiers. We find that: (1)~When the standard (heavily-pruned) set of features is used to describe problem instances, Winnow performs comparably to the Bayesian method; (2)~When the full (unpruned) set of features is used, Winnow is able to exploit the new features and convincingly outperform Bayes; and (3)~When a test set is encountered that is dissimilar to the training set, Winnow is better than B...
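    For reference, the core of Winnow is a multiplicative promotion/demotion of the weights on active features whenever the classifier makes a mistake. The sketch below learns a toy monotone disjunction; it illustrates the algorithm class, not the spelling-correction system itself:

```python
# Winnow keeps one positive weight per boolean feature and updates them
# multiplicatively: promote active features on a missed positive, demote
# them on a false positive. Toy target concept: x0 OR x1 over 5 features.
def winnow_train(examples, n, alpha=2.0, passes=20):
    w = [1.0] * n
    theta = n / 2.0                       # fixed threshold
    for _ in range(passes):
        for active, label in examples:    # 'active' lists the features set to 1
            pred = 1 if sum(w[i] for i in active) >= theta else 0
            if pred != label:
                factor = alpha if label == 1 else 1.0 / alpha
                for i in active:
                    w[i] *= factor
    return w, theta

examples = [([0], 1), ([1], 1), ([2, 3], 0), ([4], 0), ([0, 4], 1), ([2], 0)]
w, theta = winnow_train(examples, 5)
print(w)  # weights for x0 and x1 are promoted past theta; the rest stay low
```

The multiplicative update is what lets Winnow tolerate very large feature sets, which is why the unpruned feature experiment in the paper favors it.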

  20. Applying critical analysis - main methods

    Directory of Open Access Journals (Sweden)

    Miguel Araujo Alonso

    2012-02-01

    Full Text Available What is the usefulness of critical appraisal of literature? Critical analysis is a fundamental condition for the correct interpretation of any study that is subject to review. In epidemiology, in order to learn how to read a publication, we must be able to analyze it critically. Critical analysis allows us to check whether a study fulfills certain previously established methodological inclusion and exclusion criteria. This is frequently used in conducting systematic reviews, although eligibility criteria are generally limited to the study design. Critical analysis of literature can be done implicitly while reading an article, as in reading for personal interest, or can be conducted in a structured manner, using explicit and previously established criteria. The latter is done when formally reviewing a topic.

  1. Phantom pain : A sensitivity analysis

    NARCIS (Netherlands)

    Borsje, Susanne; Bosmans, JC; Van der Schans, CP; Geertzen, JHB; Dijkstra, PU

    2004-01-01

    Purpose : To analyse how decisions to dichotomise the frequency and impediment of phantom pain into absent and present influence the outcome of studies by performing a sensitivity analysis on an existing database. Method : Five hundred and thirty-six subjects were recruited from the database of an o

  2. An analysis of sensitivity tests

    Energy Technology Data Exchange (ETDEWEB)

    Neyer, B.T.

    1992-03-06

    A new method of analyzing sensitivity tests is proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for the parameters of the distribution (e.g., the mean, μ, and the standard deviation, σ) as well as various percentiles. Unlike presently used methods, such as those based on asymptotic analysis, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The main disadvantage of this method is that it requires much more computation to calculate the confidence regions. However, these calculations can be easily and quickly performed on most computers.
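    A stripped-down version of this approach, assuming a normal-CDF (probit) response model: find the MLE of (μ, σ) on go/no-go data by brute-force grid search, then take the joint confidence region as every parameter pair whose likelihood-ratio statistic falls below the chi-square cutoff. The data and grid are invented:

```python
import math

def Phi(z):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# go/no-go results (stimulus level, fired?); invented for illustration
data = [(1.0, 0), (1.5, 0), (1.8, 0), (2.0, 1), (2.2, 0),
        (2.4, 1), (2.5, 1), (2.8, 1), (3.0, 1), (3.5, 1)]

def loglik(mu, sigma):
    ll = 0.0
    for x, fired in data:
        p = min(max(Phi((x - mu) / sigma), 1e-12), 1.0 - 1e-12)
        ll += math.log(p if fired else 1.0 - p)
    return ll

# MLE by grid search, then the joint 95% likelihood-ratio confidence region
# {(mu, sigma): 2*(ll_max - ll) <= 5.99}  (chi-square cutoff, 2 df)
grid = [(m / 100.0, s / 100.0) for m in range(150, 301) for s in range(5, 101)]
ll_max, best = max((loglik(m, s), (m, s)) for m, s in grid)
region = [(m, s) for m, s in grid if 2.0 * (ll_max - loglik(m, s)) <= 5.99]
print(best, len(region))
```

The exhaustive grid makes the "much more computation" trade-off visible: every candidate parameter pair gets its own likelihood evaluation.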

  3. Essentials of applied dynamic analysis

    CERN Document Server

    Jia, Junbo

    2014-01-01

    This book presents up-to-date knowledge of dynamic analysis in the engineering world. To facilitate understanding of the topics by readers with various backgrounds, general principles are linked to their applications from different angles. Topics of special interest, such as statistics of motions and loading, damping modeling and measurement, nonlinear dynamics, fatigue assessment, vibration and buckling under axial loading, structural health monitoring, human body vibrations, and vehicle-structure interactions, are also presented. The target readers include industry professionals in civil, marine and mechanical engineering, as well as researchers and students in this area.

  4. A numerical comparison of sensitivity analysis techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hamby, D.M.

    1993-12-31

    Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequence of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A “sensitivity analysis” of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analysis methodologies, but none as comprehensive as the current work.

  5. Sensitivity analysis in remote sensing

    CERN Document Server

    Ustinov, Eugene A

    2015-01-01

    This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves for quantitative models of physical objects the same purpose, as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing SA provides computer-efficient means to compute the jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The jacobians are used to solve corresponding inver...

  6. Sensitivity analysis and application in exploration geophysics

    Science.gov (United States)

    Tang, R.

    2013-12-01

    In exploration geophysics, the usual way of dealing with geophysical data is to form an Earth model describing the underground structure in the area of investigation. The resolved model, however, is based on the inversion of survey data which is unavoidably contaminated by various noises and is sampled at a limited number of observation sites. Furthermore, due to the inherent non-uniqueness of the inverse geophysical problem, the result is ambiguous, and it is not clear which parts of the model features are well resolved by the data. The interpretation of the result is therefore difficult. We applied a sensitivity analysis to address this problem in magnetotellurics (MT). The sensitivity, also named the Jacobian matrix or the sensitivity matrix, is comprised of the partial derivatives of the data with respect to the model parameters. In practical inversion, the matrix can be calculated by direct modeling of the theoretical response for a given model perturbation, or by application of the perturbation approach and reciprocity theory. We obtained visualized sensitivity plots by calculating the sensitivity matrix, so that poorly resolved parts of the model can be flagged and excluded from interpretation, while well-resolved parameters can be treated with relative confidence. Sensitivity analysis is thereby a necessary and helpful tool for increasing the reliability of inverse models. Another main problem of exploration geophysics concerns design strategies for joint geophysical surveys, i.e. gravity, magnetic and electromagnetic methods. Since geophysical methods are based on linear or nonlinear relationships between observed data and subsurface parameters, an appropriate design scheme that provides maximum information content within a restricted budget is quite difficult to find.
Here we first studied the sensitivity of different geophysical methods by mapping the spatial distribution of survey sensitivity with respect to the
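    The perturbation approach to building the sensitivity (Jacobian) matrix can be sketched with finite differences: perturb each model parameter in turn and difference the forward responses. The forward model below is a toy stand-in, not an MT modeling code:

```python
def forward(m):
    # toy forward model: 2 data values from 3 model parameters
    return [m[0] * m[1] + m[2],
            m[0] ** 2 - m[2] * m[1]]

def sensitivity_matrix(m, eps=1e-6):
    # one forward solve per perturbed parameter
    base = forward(m)
    cols = []
    for j in range(len(m)):
        mp = list(m)
        mp[j] += eps
        pert = forward(mp)
        cols.append([(pert[i] - base[i]) / eps for i in range(len(base))])
    # J[i][j] = d(data_i) / d(m_j)
    return [[cols[j][i] for j in range(len(m))] for i in range(len(base))]

J = sensitivity_matrix([1.0, 2.0, 3.0])
print(J)  # analytically [[2, 1, 1], [2, -3, -2]]
```

Rows with uniformly small entries mark data that barely constrain any parameter; columns with small entries mark poorly resolved parameters.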

  7. The sensitivity analysis of population projections

    Directory of Open Access Journals (Sweden)

    Hal Caswell

    2015-10-01

    Full Text Available Background: Population projections using the cohort component method can be written as time-varying matrix population models. The matrices are parameterized by schedules of mortality, fertility, immigration, and emigration over the duration of the projection. A variety of dependent variables are routinely calculated from such projections (the population vector, various weighted population sizes, dependency ratios, etc.). Objective: Our goal is to derive and apply theory to compute the sensitivity and the elasticity (proportional sensitivity) of any projection outcome to changes in any of the parameters, where those changes are applied at any time during the projection interval. Methods: We use matrix calculus to derive a set of equations for the sensitivity and elasticity of any vector-valued outcome ξ(t) at time t to any perturbation of a parameter vector θ(s) at any time s. Results: The results appear in the form of a set of dynamic equations for the derivatives that are integrated in parallel with the dynamic equations for the projection itself. We show results for single-sex projections and for the more detailed case of projections including age distributions for both sexes. We apply the results to a projection of the population of Spain, from 2012 to 2052, prepared by the Instituto Nacional de Estadística, and determine the sensitivity and elasticity of (1) total population, (2) the school-age population, (3) the population subject to dementia, (4) the total dependency ratio, and (5) the economic support ratio. Conclusions: Writing population projections in matrix form makes sensitivity analysis possible. Such analyses are a powerful tool for the exploration of how detailed aspects of the projection output are determined by the mortality, fertility, and migration schedules that underlie the projection.
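    The classic time-invariant counterpart of these sensitivities is d(λ)/d(a_ij) = v_i w_j / ⟨v, w⟩, with w and v the right and left dominant eigenvectors of the projection matrix. A small self-contained check, using an invented 3-stage matrix rather than the Spanish projection data:

```python
# Sensitivity of the asymptotic growth rate lambda of a matrix model:
# d(lambda)/d(a_ij) = v_i * w_j / <v, w>. The 3x3 matrix is illustrative.
A = [[0.0, 1.2, 3.0],
     [0.6, 0.0, 0.0],
     [0.0, 0.8, 0.9]]

def dominant(M, iters=500):
    # power iteration, normalized by the maximum entry each step
    x = [1.0, 1.0, 1.0]
    for _ in range(iters):
        x = [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]
        lam = max(x)
        x = [v / lam for v in x]
    return lam, x

lam, w = dominant(A)
_, v = dominant([[A[j][i] for j in range(3)] for i in range(3)])  # transpose
dot = sum(v[i] * w[i] for i in range(3))
sens = [[v[i] * w[j] / dot for j in range(3)] for i in range(3)]

# verify one entry against a finite-difference perturbation
eps = 1e-6
A[0][1] += eps
lam_pert, _ = dominant(A)
fd = (lam_pert - lam) / eps
print(sens[0][1], fd)
```

The paper's contribution is the time-varying generalization of exactly this kind of derivative, propagated alongside the projection itself.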

  8. Concept analysis of culture applied to nursing.

    Science.gov (United States)

    Marzilli, Colleen

    2014-01-01

    Culture is an important concept, especially when applied to nursing. A concept analysis of culture is essential to understanding the meaning of the word. This article applies Rodgers' (2000) concept analysis template and provides a definition of the word culture as it applies to nursing practice. This article supplies examples of the concept of culture to aid the reader in understanding its application to nursing and includes a case study demonstrating components of culture that must be respected and included when providing health care.

  9. Correspondence Analysis applied to psychological research

    OpenAIRE

    Laura Doey; Jessica Kurta

    2011-01-01

    Correspondence analysis is an exploratory data technique used to analyze categorical data (Benzecri, 1992). It is used in many areas such as marketing and ecology. Correspondence analysis has been used less often in psychological research, although it can be suitably applied. This article discusses the benefits of using correspondence analysis in psychological research and provides a tutorial on how to perform correspondence analysis using the Statistical Package for the Social Sciences (SPSS).

  10. Correspondence Analysis applied to psychological research

    Directory of Open Access Journals (Sweden)

    Laura Doey

    2011-04-01

    Full Text Available Correspondence analysis is an exploratory data technique used to analyze categorical data (Benzecri, 1992). It is used in many areas such as marketing and ecology. Correspondence analysis has been used less often in psychological research, although it can be suitably applied. This article discusses the benefits of using correspondence analysis in psychological research and provides a tutorial on how to perform correspondence analysis using the Statistical Package for the Social Sciences (SPSS).
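    As a complement to the SPSS tutorial, the first computational step of correspondence analysis can be sketched directly: form the correspondence matrix, the row and column masses, and the total inertia (the table's chi-square statistic divided by n). The contingency table below is a toy example:

```python
# toy contingency table (e.g., 3 response categories by 3 groups); counts invented
table = [[20, 10, 5],
         [10, 15, 10],
         [5, 10, 15]]

n = sum(sum(row) for row in table)
P = [[x / n for x in row] for row in table]              # correspondence matrix
r = [sum(row) for row in P]                              # row masses
c = [sum(P[i][j] for i in range(3)) for j in range(3)]   # column masses

# total inertia: sum of squared standardized residuals, equal to chi2 / n
inertia = sum((P[i][j] - r[i] * c[j]) ** 2 / (r[i] * c[j])
              for i in range(3) for j in range(3))
print(round(inertia, 4))
```

The dimensions plotted by SPSS are a decomposition of this inertia; the residual matrix here is what its singular value decomposition operates on.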

  11. Applied regression analysis a research tool

    CERN Document Server

    Pantula, Sastry; Dickey, David

    1998-01-01

    Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

  12. Pressure Sensitive Paint Applied to Flexible Models Project

    Science.gov (United States)

    Schairer, Edward T.; Kushner, Laura Kathryn

    2014-01-01

    One gap in current pressure-measurement technology is a high-spatial-resolution method for accurately measuring pressures on spatially and temporally varying wind-tunnel models such as Inflatable Aerodynamic Decelerators (IADs), parachutes, and sails. Conventional pressure taps only provide sparse measurements at discrete points and are difficult to integrate with the model structure without altering structural properties. Pressure Sensitive Paint (PSP) provides pressure measurements with high spatial resolution, but its use has been limited to rigid or semi-rigid models. Extending the use of PSP from rigid surfaces to flexible surfaces would allow direct, high-spatial-resolution measurements of the unsteady surface pressure distribution. Once developed, this new capability will be combined with existing stereo photogrammetry methods to simultaneously measure the shape of a dynamically deforming model in a wind tunnel. Presented here are the results and methodology for using PSP on flexible surfaces.

  13. Applying WCET Analysis at Architectural Level

    OpenAIRE

    Gilles, Olivier; Hugues, Jérôme

    2008-01-01

    Real-Time embedded systems must enforce strict timing constraints. In this context, achieving precise Worst Case Execution Time is a prerequisite to apply scheduling analysis and verify system viability. WCET analysis is usually a complex and time-consuming activity. It becomes increasingly complex when one also considers code generation strategies from high-level models. In this paper, we present an experiment made on the coupling of the WCET analysis tool Bound-T and our AADL to code ...

  14. Measuring Road Network Vulnerability with Sensitivity Analysis

    Science.gov (United States)

    Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin

    2017-01-01

    This paper focuses on the development of a method for road network vulnerability analysis from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining a traffic utility index and modeling the vulnerability of road segments, routes, OD (Origin-Destination) pairs and the road network. Meanwhile, a sensitivity analysis method is used to calculate the change in the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, improves calculation efficiency and makes the application of vulnerability analysis to large real-world road networks possible. Finally, all the above models and the calculation method are applied to an actual road network evaluation to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of road networks and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706

  15. Sensitivity analysis of periodic matrix population models.

    Science.gov (United States)

    Caswell, Hal; Shyu, Esther

    2012-12-01

    Periodic matrix models are frequently used to describe cyclic temporal variation (seasonal or interannual) and to account for the operation of multiple processes (e.g., demography and dispersal) within a single projection interval. In either case, the models take the form of periodic matrix products. The perturbation analysis of periodic models must trace the effects of parameter changes, at each phase of the cycle, on output variables that are calculated over the entire cycle. Here, we apply matrix calculus to obtain the sensitivity and elasticity of scalar-, vector-, or matrix-valued output variables. We apply the method to linear models for periodic environments (including seasonal harvest models), to vec-permutation models in which individuals are classified by multiple criteria, and to nonlinear models including both immediate and delayed density dependence. The results can be used to evaluate management strategies and to study selection gradients in periodic environments.

  16. Positive Behavior Support and Applied Behavior Analysis

    Science.gov (United States)

    Johnston, J. M.; Foxx, R. M.; Jacobson, J. W.; Green, G.; Mulick, J. A.

    2006-01-01

    This article reviews the origins and characteristics of the positive behavior support (PBS) movement and examines those features in the context of the field of applied behavior analysis (ABA). We raise a number of concerns about PBS as an approach to delivery of behavioral services and its impact on how ABA is viewed by those in human services. We…

  17. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    Science.gov (United States)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
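    The complex-variable approach mentioned above is the complex-step derivative: evaluate the function at x + ih and take Im(f)/h, which avoids the subtractive cancellation of finite differences even for tiny h. A toy function stands in for the structural solver:

```python
import cmath, math

def f(x):
    # stand-in for an expensive structural response; accepts complex input
    return cmath.exp(x) * cmath.sin(x)

x, h = 0.7, 1e-20
deriv = f(complex(x, h)).imag / h                      # complex-step derivative
exact = math.exp(x) * (math.sin(x) + math.cos(x))      # analytic f'(x)
print(deriv, exact)
```

Because there is no difference of nearly equal numbers, h can be made far smaller than machine epsilon, giving derivatives accurate to machine precision, which is why it is used to verify adjoint sensitivities.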

  18. Sensitivity Analysis of Component Reliability

    Institute of Scientific and Technical Information of China (English)

    Zhenhua Ge

    2004-01-01

    Every component occupies a unique position within a system and has its own failure characteristics, so equal changes in the reliability of different components do not have equal effects on system reliability. Component reliability sensitivity measures the effect on system reliability of a change in a component's reliability. In this paper, the definition and the associated matrix of component reliability sensitivity are proposed, and some of their characteristics are analyzed. These results help in analysing and improving system reliability.
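    The sensitivity described here corresponds to the classical Birnbaum importance measure, ∂R_sys/∂R_i. A minimal sketch for a hypothetical series-parallel layout (the structure and reliability values are illustrative, not from the paper):

```python
def system_reliability(r):
    """Hypothetical layout: component 1 in series with a parallel pair (2, 3)."""
    r1, r2, r3 = r
    return r1 * (1.0 - (1.0 - r2) * (1.0 - r3))

def birnbaum_sensitivity(rel_fn, r, i, h=1e-6):
    """Central-difference estimate of dR_sys / dR_i."""
    up, down = list(r), list(r)
    up[i] += h
    down[i] -= h
    return (rel_fn(up) - rel_fn(down)) / (2.0 * h)

r = [0.95, 0.80, 0.70]
sens = [birnbaum_sensitivity(system_reliability, r, i) for i in range(len(r))]
# The series component dominates: improving its reliability pays off most.
```

    The unequal sensitivities (here the series component far outweighs either parallel one) are exactly the position-dependent effect the abstract describes.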

  19. A novel method of sensitivity analysis testing by applying the DRASTIC and fuzzy optimization methods to assess groundwater vulnerability to pollution: the case of the Senegal River basin in Mali

    Science.gov (United States)

    Souleymane, Keita; Zhonghua, Tang

    2017-08-01

    Vulnerability to groundwater pollution in the Senegal River basin was studied by two different but complementary methods: the DRASTIC method (which evaluates the intrinsic vulnerability) and the fuzzy method (which assesses the specific vulnerability by taking the continuity of the parameters into account). The application was validated by comparing the spatial distribution of the established vulnerability classes with the nitrate distribution in the study area. Three vulnerability classes (low, medium and high) were identified by both the DRASTIC method and the fuzzy method, with a normalized DRASTIC model used between them. An integrated analysis reveals that the high-vulnerability class, covering 14.64 % of the area for the DRASTIC method, 21.68 % for the normalized DRASTIC method and 18.92 % for the fuzzy method, is not the dominant one. In addition, a new sensitivity analysis method was used to identify (and confirm) the main parameters influencing vulnerability to pollution under fuzzy membership. The results showed that the vadose zone is the parameter with the greatest impact on groundwater vulnerability to pollution, while net recharge contributes least in the study area. The fuzzy method was also found to assess vulnerability to pollution better, with a coincidence rate of 81.13 % versus 77.35 % for the DRASTIC method. These results can guide policymakers in identifying pollution-sensitive areas before such sites are used for socioeconomic infrastructure.
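    The intrinsic DRASTIC index referred to above is a weighted sum of seven rated hydrogeological parameters. A sketch using the standard DRASTIC weights (the per-cell ratings below are hypothetical, for illustration only):

```python
# Standard DRASTIC weights (the capitalised letters spell the acronym);
# per-cell ratings on a 1-10 scale come from site data.
DRASTIC_WEIGHTS = {
    "Depth to water": 5,
    "net Recharge": 4,
    "Aquifer media": 3,
    "Soil media": 2,
    "Topography": 1,
    "Impact of the vadose zone": 5,
    "hydraulic Conductivity": 3,
}

def drastic_index(ratings):
    """Intrinsic vulnerability: the weighted sum of the seven parameter ratings."""
    return sum(DRASTIC_WEIGHTS[name] * rating for name, rating in ratings.items())

# Hypothetical ratings for a single grid cell.
cell = {
    "Depth to water": 7,
    "net Recharge": 6,
    "Aquifer media": 8,
    "Soil media": 5,
    "Topography": 9,
    "Impact of the vadose zone": 8,
    "hydraulic Conductivity": 4,
}
index = drastic_index(cell)   # possible range: 23 (all ratings 1) to 230
```

    Sensitivity testing of the kind the paper performs amounts to perturbing or removing one parameter at a time and observing how the index, and hence the class map, responds.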

  20. Applied Data Analysis in Energy Monitoring System

    Directory of Open Access Journals (Sweden)

    Kychkin А.V.

    2016-08-01

    Full Text Available The organization of a software and hardware system is presented as an example of building energy monitoring for multi-sectional lighting and climate control/conditioning needs. The system's key feature is applied analysis of office energy data, which enables localized work-mode recognition for each type of hardware. Recognition is based on the overall energy consumption profile, followed by evaluation of energy consumption and workload. The applied data analysis comprises a primary data processing block, a smoothing filter, a time stamp identification block, clustering and classification blocks, a state change detection block, and a statistical calculation block. The energy consumed in a time slot and the slot's time stamp serve as the main parameters for work-mode classification. Experimental results are provided for applied analysis of energy data over a chosen period using HIL and the OpenJEVis visualization system. Energy consumption, workload calculation and identification of eight different states were carried out for two lighting sections and one emulated climate control/conditioning system from the integral energy consumption profile. The research was supported by university internal grant No. 2016/PI-2, "Methodology development of monitoring and heat flow utilization as low-potential company energy sources".

  1. Applied surface analysis in magnetic storage technology

    Science.gov (United States)

    Windeln, Johannes; Bram, Christian; Eckes, Heinz-Ludwig; Hammel, Dirk; Huth, Johanna; Marien, Jan; Röhl, Holger; Schug, Christoph; Wahl, Michael; Wienss, Andreas

    2001-07-01

    This paper gives a synopsis of today's challenges and requirements for a surface analysis and materials science laboratory with a special focus on magnetic recording technology. The critical magnetic recording components, i.e. the protective carbon overcoat (COC), the disk layer structure, the read/write head including the giant-magnetoresistive (GMR) sensor, are described and options for their characterization with specific surface and structure analysis techniques are given. For COC investigations, applications of Raman spectroscopy to the structural analysis and determination of thickness, hydrogen and nitrogen content are discussed. Hardness measurements by atomic force microscopy (AFM) scratching techniques are presented. Surface adsorption phenomena on disk substrates or finished disks are characterized by contact angle analysis or so-called piezo-electric mass adsorption systems (PEMAS), also known as quartz crystal microbalance (QCM). A quickly growing field of applications is listed for various X-ray analysis techniques, such as disk magnetic layer texture analysis for X-ray diffraction, compositional characterization via X-ray fluorescence, compositional analysis with high lateral resolution via electron microprobe analysis. X-ray reflectometry (XRR) has become a standard method for the absolute measurement of individual layer thicknesses contained in multi-layer stacks and thus, is the successor of ellipsometry for this application. Due to the ongoing reduction of critical feature sizes, the analytical challenges in terms of lateral resolution, sensitivity limits and dedicated nano-preparation have been consistently growing and can only be met by state-of-the-art Auger electron spectrometers (AES), transmission electron microscopy (TEM) analysis, time-of-flight-secondary ion mass spectroscopy (ToF-SIMS) characterization, focused ion beam (FIB) sectioning and TEM lamella preparation via FIB. 
The depth profiling of GMR sensor full stacks was significantly

  2. Shape design sensitivity analysis using domain information

    Science.gov (United States)

    Seong, Hwal-Gyeong; Choi, Kyung K.

    1985-01-01

    A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.

  3. Applied research in uncertainty modeling and analysis

    CERN Document Server

    Ayyub, Bilal

    2005-01-01

    Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistic, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independent of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...

  4. Sensitivity Analysis for Multidisciplinary Systems (SAMS)

    Science.gov (United States)

    2016-12-01

    AFRL-RQ-WP-TM-2017-0017: Sensitivity Analysis for Multidisciplinary Systems (SAMS). Richard D. Snyder, Design & Analysis Branch, Aerospace Vehicles ... Approved for public release; distribution is unlimited. An AFRL-NASA collaboration to provide economical, accurate sensitivities for multidisciplinary design and analysis across the acquisition phases (concept refinement, technology development, system development & demonstration, production & deployment, operation & support), where knowledge is most limited.

  5. Object-sensitive Type Analysis of PHP

    NARCIS (Netherlands)

    Van der Hoek, Henk Erik; Hage, J

    2015-01-01

    In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the frame

  6. Sneak analysis applied to process systems

    Science.gov (United States)

    Whetton, Cris

    Traditional safety analyses, such as HAZOP, FMEA, FTA, and MORT, are less than effective at identifying hazards resulting from incorrect 'flow' - whether this be flow of information, actions, electric current, or even the literal flow of process fluids. Sneak Analysis (SA) has existed since the mid nineteen-seventies as a means of identifying such conditions in electric circuits; in which area, it is usually known as Sneak Circuit Analysis (SCA). This paper extends the ideas of Sneak Circuit Analysis to a general method of Sneak Analysis applied to process plant. The methods of SA attempt to capitalize on previous work in the electrical field by first producing a pseudo-electrical analog of the process and then analyzing the analog by the existing techniques of SCA, supplemented by some additional rules and clues specific to processes. The SA method is not intended to replace any existing method of safety analysis; instead, it is intended to supplement such techniques as HAZOP and FMEA by providing systematic procedures for the identification of a class of potential problems which are not well covered by any other method.

  7. Least Squares Shadowing for Sensitivity Analysis of Turbulent Fluid Flows

    CERN Document Server

    Blonigan, Patrick; Wang, Qiqi

    2014-01-01

    Computational methods for sensitivity analysis are invaluable tools for aerodynamics research and engineering design. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in turbulent fluid flow fields, specifically those obtained using high-fidelity turbulence simulations. This is because of a number of dynamical properties of turbulent and chaotic fluid flows, most importantly high sensitivity of the initial value problem, popularly known as the "butterfly effect". The recently developed least squares shadowing (LSS) method avoids the issues encountered by traditional sensitivity analysis methods by approximating the "shadow trajectory" in phase space, avoiding the high sensitivity of the initial value problem. The following paper discusses how the least squares problem associated with LSS is solved. Two methods are presented and are demonstrated on a simulation of homogeneous isotropic turbulence and the Kuramoto-Sivashinsky (KS) equation, a 4th order c...

  8. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  9. An ESDIRK Method with Sensitivity Analysis Capabilities

    DEFF Research Database (Denmark)

    Kristensen, Morten Rode; Jørgensen, John Bagterp; Thomsen, Per Grove

    2004-01-01

    A new algorithm for numerical sensitivity analysis of ordinary differential equations (ODEs) is presented. The underlying ODE solver belongs to the Runge-Kutta family. The algorithm calculates sensitivities with respect to problem parameters and initial conditions, exploiting the special structure...
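    The sensitivity equations such a solver integrates can be illustrated on a scalar decay problem: for y' = -k y, the sensitivity s = ∂y/∂k obeys s' = -y - k s, and the augmented system is stepped forward together. A sketch using a classical explicit RK4 step (not the implicit ESDIRK scheme of the paper; the rate constant is illustrative):

```python
import math

def rk4_step(f, t, u, dt):
    """One classical fourth-order Runge-Kutta step for u' = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, [ui + dt / 2 * ki for ui, ki in zip(u, k1)])
    k3 = f(t + dt / 2, [ui + dt / 2 * ki for ui, ki in zip(u, k2)])
    k4 = f(t + dt, [ui + dt * ki for ui, ki in zip(u, k3)])
    return [ui + dt / 6 * (a + 2 * b + 2 * c + d)
            for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]

def decay_with_sensitivity(t, u, k=0.7):
    """State y' = -k*y augmented with its sensitivity s = dy/dk: s' = -y - k*s."""
    y, s = u
    return [-k * y, -y - k * s]

t, u, dt = 0.0, [1.0, 0.0], 0.01
for _ in range(200):                    # integrate to t = 2
    u = rk4_step(decay_with_sensitivity, t, u, dt)
    t += dt

exact_s = -2.0 * math.exp(-0.7 * 2.0)   # analytic dy/dk at t = 2 (y0 = 1)
```

    For y(t) = e^(-kt) the analytic sensitivity is -t e^(-kt), so the numerical sensitivity can be checked directly, which is the same verification pattern the paper applies to its ESDIRK integrator.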

  10. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Haihua Zhao; Vincent A. Mousseau

    2011-09-01

    Verification and validation (V&V) play increasingly important roles in quantifying uncertainties and achieving high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part, or has not differentiated verification from validation. The traditional approach to uncertainty quantification is a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties, and it is inefficient for sensitivity analysis. In contrast, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code: equations for the propagation of uncertainty are constructed, and the sensitivities are solved for directly as variables in the simulation. This paper presents forward sensitivity analysis as a method to aid uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the resulting time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can then be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps with other interested physical

  11. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. 
In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the

  12. Exergy analysis applied to biodiesel production

    Energy Technology Data Exchange (ETDEWEB)

    Talens, Laura; Villalba, Gara [SosteniPra UAB-IRTA. Environmental Science and Technology Institute (ICTA), Edifici Cn, Universitat Autonoma de Barcelona (UAB), 08193 Bellaterra, Cerdanyola del Valles, Barcelona (Spain); Gabarrell, Xavier [SosteniPra UAB-IRTA. Environmental Science and Technology Institute ICTA, Edifici Cn, Universitat Autonoma de Barcelona (UAB), 08193 Bellaterra (Cerdanyola del Valles), Barcelona (Spain); Department of Chemical Engineering, Universitat Autonoma de Barcelona (UAB), 08193 Bellaterra Cerdanyola del Valles, Barcelona (Spain)

    2007-08-15

    The aim of decreasing the consumption of materials and energy and promoting the use of renewable resources, such as biofuels, raises the need to measure material and energy fluxes. This paper suggests the use of Exergy Flow Analysis (ExFA) as an environmental assessment tool to account for wastes and emissions, determine exergetic efficiency, and compare substitutes and other types of energy sources, all useful in defining environmental and economic policies for resource use. To illustrate how ExFA is used, it is applied to the process of biodiesel production. The results show that the production process has a low exergy loss (492 MJ). The exergy loss is reduced by using potassium hydroxide and sulphuric acid as process catalysts, and it can be further minimised by improving the quality of the used cooking oil. (author)

  13. Analysis of MEMS Accelerometer for Optimized Sensitivity

    National Research Council Canada - National Science Library

    Khairun Nisa Khamil; Kok Swee Leong; Norizan Bin Mohamad; Norhayati Soin; Norshahida Saba

    2014-01-01

    .... The geometry of the accelerometer, namely the mass width and the beam length and width, and the device's sensitivity are analyzed theoretically and also using the finite element analysis software COMSOL Multiphysics...

  14. Sensitivity Analysis Using Simple Additive Weighting Method

    Directory of Open Access Journals (Sweden)

    Wayne S. Goodridge

    2016-05-01

    Full Text Available The output of a multiple-criteria decision method often has to be analyzed using some sensitivity analysis technique. The SAW MCDM method is commonly used in the management sciences, and there is a critical need for a robust approach to sensitivity analysis in a context where uncertain data is often present in decision models. Most sensitivity analysis techniques for the SAW method involve Monte Carlo simulation on the initial data. These methods are computationally intensive and often require complex software. In this paper, the SAW method is extended to include an objective function that makes it easy to analyze the influence of specific changes in certain criteria values, and thus to perform sensitivity analysis.
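    The SAW procedure itself, and the one-at-a-time weight perturbation such sensitivity analyses probe, can be sketched as follows (the decision matrix and weights are hypothetical, and all criteria are treated as benefit criteria):

```python
def saw_scores(matrix, weights):
    """Simple Additive Weighting for benefit criteria: normalise each column
    by its maximum, then take the weighted sum for every alternative."""
    maxima = [max(col) for col in zip(*matrix)]
    return [sum(w * x / m for w, x, m in zip(weights, row, maxima))
            for row in matrix]

# Hypothetical decision matrix: 3 alternatives scored on 3 benefit criteria.
matrix = [[7.0, 9.0, 9.0],
          [8.0, 7.0, 8.0],
          [9.0, 6.0, 8.0]]
base_weights = [0.50, 0.30, 0.20]

base = saw_scores(matrix, base_weights)
best = max(range(len(base)), key=base.__getitem__)

# One-at-a-time sensitivity: shift weight toward criterion 1 and re-rank.
shifted = saw_scores(matrix, [0.60, 0.25, 0.15])
new_best = max(range(len(shifted)), key=shifted.__getitem__)
```

    With these numbers a modest shift of weight onto the first criterion flips the top-ranked alternative, which is precisely the kind of ranking instability a SAW sensitivity analysis is meant to expose.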

  15. Detecting Tipping points in Ecological Models with Sensitivity Analysis

    NARCIS (Netherlands)

    Broeke, ten G.A.; Voorn, van G.A.K.; Kooi, B.W.; Molenaar, Jaap

    2016-01-01

    Simulation models are commonly used to understand and predict the development of ecological systems, for instance to study the occurrence of tipping points and their possible ecological effects. Sensitivity analysis is a key tool in the study of model responses to changes in conditions. The appli

  16. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

    Full Text Available As a new formulation in structural analysis, the Integrated Force Method has been successfully applied to many structures in civil, mechanical, and aerospace engineering due to its accurate estimates of forces in computation. It is now being further extended to the probabilistic domain. For the assessment of uncertainty effects in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A set of stochastic sensitivity analysis formulations for the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to existing programs, since the models of stochastic finite elements and stochastic design sensitivity are almost identical.

  17. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
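    The advantage of smoothing-based procedures over linear regression shows up whenever the input-output relationship is non-monotonic. A toy illustration, with a crude binned-means smoother standing in for LOESS and synthetic data (everything below is invented for illustration):

```python
import random
import statistics

random.seed(7)
n = 4000
x = [random.uniform(-1.0, 1.0) for _ in range(n)]
y = [xi * xi + random.gauss(0.0, 0.05) for xi in x]   # nonlinear, non-monotonic

# Linear (Pearson) correlation misses the dependence entirely.
mx, my = statistics.fmean(x), statistics.fmean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
r = cov / (statistics.stdev(x) * statistics.stdev(y))

# A crude smoother: bin x, take bin means, measure explained variance --
# the idea behind smoothing-based sensitivity indices, in miniature.
bins = 20
bucket = lambda xi: min(int((xi + 1.0) / 2.0 * bins), bins - 1)
groups = {}
for xi, yi in zip(x, y):
    groups.setdefault(bucket(xi), []).append(yi)
bin_mean = {k: statistics.fmean(v) for k, v in groups.items()}
fitted = [bin_mean[bucket(xi)] for xi in x]
explained = statistics.variance(fitted) / statistics.variance(y)
```

    The linear correlation is near zero while the smoother explains most of the output variance, mirroring the paper's point that nonparametric regression can reveal input importance that linear or rank regression misses.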

  18. Fixed point sensitivity analysis of interacting structured populations.

    Science.gov (United States)

    Barabás, György; Meszéna, Géza; Ostling, Annette

    2014-03-01

    Sensitivity analysis of structured populations is a useful tool in population ecology. Historically, methodological development of sensitivity analysis has focused on the sensitivity of eigenvalues in linear matrix models, and on single populations. More recently there have been extensions to the sensitivity of nonlinear models, and to communities of interacting populations. Here we derive a fully general mathematical expression for the sensitivity of equilibrium abundances in communities of interacting structured populations. Our method yields the response of an arbitrary function of the stage class abundances to perturbations of any model parameters. As a demonstration, we apply this sensitivity analysis to a two-species model of ontogenetic niche shift where each species has two stage classes, juveniles and adults. In the context of this model, we demonstrate that our theory is quite robust to violating two of its technical assumptions: the assumption that the community is at a point equilibrium and the assumption of infinitesimally small parameter perturbations. Our results on the sensitivity of a community are also interpreted in a niche theoretical context: we determine how the niche of a structured population is composed of the niches of the individual states, and how the sensitivity of the community depends on niche segregation.
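    The core computation behind such equilibrium sensitivities is the implicit function theorem: if f(x*, p) = 0 at equilibrium, then dx*/dp = -J⁻¹ ∂f/∂p, with J the Jacobian at x*. A sketch for a hypothetical two-species competition model (coefficients are illustrative, not from the paper):

```python
def equilibrium_sensitivity(J, dfdp):
    """Implicit-function-theorem sensitivity of an equilibrium point:
    solve J @ dx = -dfdp for dx (Cramer's rule for the 2x2 case)."""
    (a, b), (c, d) = J
    det = a * d - b * c
    rx, ry = -dfdp[0], -dfdp[1]
    return [(rx * d - b * ry) / det, (a * ry - c * rx) / det]

# Hypothetical two-species competition model:
#   f1 = x (1 - x - a*y),  f2 = y (1 - y - b*x)
a_c, b_c = 0.5, 0.3
x = (1 - a_c) / (1 - a_c * b_c)             # interior equilibrium abundances
y = (1 - b_c) / (1 - a_c * b_c)
J = [[-x, -a_c * x],                        # Jacobian of (f1, f2) at (x, y)
     [-b_c * y, -y]]
dfda = [-x * y, 0.0]                        # partials w.r.t. the parameter a
dx_da = equilibrium_sensitivity(J, dfda)
# dx_da[0] < 0: stronger competition lowers species 1's equilibrium abundance.
```

    The paper's fully general expression extends this same linear solve to arbitrary functions of the stage-class abundances in communities of structured populations.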

  19. Sensitivity analysis of small circular cylinders as wake control

    Science.gov (United States)

    Meneghini, Julio; Patino, Gustavo; Gioria, Rafael

    2016-11-01

    We apply sensitivity analysis with respect to a steady external force to the control of vortex shedding from a circular cylinder using active and passive small control cylinders. We evaluate the changes produced by the device on the flow near the primary instability, the transition to wake. By means of sensitivity analysis, we numerically predict the effective regions in which to place the control devices. The quantitative effect of the hydrodynamic forces produced by the control devices is also obtained by sensitivity analysis, supporting the prediction of the minimum rotation rate. These results are extrapolated to higher Reynolds numbers. The analysis also provided the positions of combined passive control cylinders that suppress the wake, showing that these particular device positions are adequate to suppress wake unsteadiness. In both cases the results agree very well with previously published experimental cases of control devices.

  20. Social network analysis applied to team sports analysis

    CERN Document Server

    Clemente, Filipe Manuel; Mendes, Rui Sousa

    2016-01-01

    Explaining how graph theory and social network analysis can be applied to team sports analysis, this book presents useful approaches, models and methods that can be used to characterise the overall properties of team networks and identify the prominence of each team player. Exploring the different network metrics that can be utilised in sports analysis, their possible applications and variances from situation to situation, the respective chapters present an array of illustrative case studies. Identifying the general concepts of social network analysis and network centrality metrics, readers are shown how to generate a methodological protocol for data collection. As such, the book provides a valuable resource for students of the sport sciences, sports engineering, applied computation and the social sciences.

  1. Enviromentally sensitive patch index of desertification risk applied to the main habitats of Sicily

    Science.gov (United States)

    Duro, A.; Piccione, V.; Ragusa, M. A.; Rapicavoli, V.; Veneziano, V.

    2017-07-01

    The authors applied the MEDALUS (Mediterranean Desertification and Land Use) procedure to the most representative Sicilian habitats, in terms of extent and of socio-economic and environmental importance, in order to assess the risk of desertification. Using the ESPI (Environmentally Sensitive Patch Index), the authors estimate current and future regional levels of desertification risk.
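    In the MEDALUS procedure underlying this work, the environmentally-sensitive-area index of a patch is the geometric mean of four quality indices. The class thresholds below follow commonly cited MEDALUS values and the inputs are hypothetical, so treat this as an assumed sketch rather than the paper's exact classification:

```python
def esa_index(sqi, cqi, vqi, mqi):
    """MEDALUS environmentally-sensitive-area index: the geometric mean of the
    soil, climate, vegetation and management quality indices (each in [1, 2])."""
    return (sqi * cqi * vqi * mqi) ** 0.25

def esa_class(esai):
    """Class thresholds as commonly cited for MEDALUS (assumed values)."""
    if esai < 1.17:
        return "non-affected"
    if esai <= 1.22:
        return "potential"
    if esai <= 1.37:
        return "fragile"
    return "critical"

# Hypothetical quality indices for one patch.
esai = esa_index(sqi=1.4, cqi=1.6, vqi=1.3, mqi=1.5)
label = esa_class(esai)
```

    A patch-level index like the ESPI can then aggregate the classified patches into a regional risk estimate.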

  2. Advancing sensitivity analysis to precisely characterize temporal parameter dominance

    Science.gov (United States)

    Guse, Björn; Pfannerstill, Matthias; Strauch, Michael; Reusser, Dominik; Lüdtke, Stefan; Volk, Martin; Gupta, Hoshin; Fohrer, Nicola

    2016-04-01

    Parameter sensitivity analysis is a strategy for detecting dominant model parameters. A temporal sensitivity analysis calculates daily sensitivities of model parameters. This allows a precise characterization of temporal patterns of parameter dominance and an identification of the related discharge conditions. To achieve this goal, the diagnostic information derived from the temporal parameter sensitivity is advanced by including discharge information in three steps. In a first step, the temporal dynamics are analyzed by means of daily time series of parameter sensitivities. As the sensitivity analysis method, we used the Fourier Amplitude Sensitivity Test (FAST) applied directly to the modelled discharge. Next, the daily sensitivities are analyzed in combination with the flow duration curve (FDC). Through this step, we determine whether high sensitivities of model parameters are related to specific discharges. Finally, parameter sensitivities are separately analyzed for five segments of the FDC and presented as monthly averaged sensitivities. In this way, seasonal patterns of dominant model parameters are provided for each FDC segment. For this methodical approach, we used two contrasting catchments (an upland and a lowland catchment) to illustrate how parameter dominance changes seasonally in different catchments. For all of the FDC segments, the groundwater parameters are dominant in the lowland catchment, while in the upland catchment the controlling parameters change seasonally between parameters from different runoff components. The three methodical steps lead to clear temporal patterns, which represent the typical characteristics of the study catchments. Our methodical approach thus provides a clear idea of how the hydrological dynamics are controlled by model parameters for certain discharge magnitudes during the year.
Overall, these three methodical steps precisely characterize model parameters and improve the understanding of process dynamics in hydrological

  3. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
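    The probabilistic analyses described here propagate input uncertainties through the performance model; the simplest illustration of the idea is plain Monte Carlo sampling. A toy sketch in which the power model and the input distributions are invented for illustration (this is not the SPACE model or its fast probabilistic techniques):

```python
import random
import statistics

def array_power(area_m2, efficiency, insolation_w_m2, degradation):
    """Toy solar-array power model -- illustrative only."""
    return area_m2 * efficiency * insolation_w_m2 * (1.0 - degradation)

random.seed(42)                       # reproducible sampling
samples = []
for _ in range(20_000):
    eff = random.gauss(0.14, 0.005)   # assumed cell-efficiency uncertainty
    deg = random.uniform(0.0, 0.10)   # assumed degradation fraction
    sun = random.gauss(1367.0, 10.0)  # assumed insolation spread, W/m^2
    samples.append(array_power(375.0, eff, sun, deg))

mean_power = statistics.fmean(samples)
p05 = sorted(samples)[len(samples) // 20]   # ~5th percentile, a conservative bound
```

    A deterministic run would return only the single nominal value; the sampled distribution additionally yields conservative percentiles of power capability, which is the benefit the abstract attributes to the probabilistic approach.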

  4. Colilert® applied to food analysis

    Directory of Open Access Journals (Sweden)

    Maria José Rodrigues

    2014-06-01

    Full Text Available Colilert® (IDEXX) was originally developed for the simultaneous enumeration of coliforms and E. coli in water samples and has been used in quality control routines for drinking water, swimming pools, and fresh, coastal and waste waters (Grossi et al., 2013). The Colilert® culture medium contains the indicator nutrient 4-Methylumbelliferyl-β-D-Glucuronide (MUG). MUG acts as a substrate for the E. coli enzyme β-glucuronidase, from which a fluorescent compound is produced. A positive MUG result produces fluorescence when viewed under an ultraviolet lamp. If the test fluorescence is equal to or greater than that of the control, the presence of E. coli is confirmed (Lopez-Roldan et al., 2013). The present work aimed to apply Colilert® to the enumeration of E. coli in different foods, comparing the results against the reference method (ISO 16649-2, 2001) for E. coli food analysis. The study was divided into two stages. In the first stage, ten different types of food were analyzed with Colilert®, including pastry, raw meat, ready-to-eat meals, yogurt, raw seabream and salmon, and cooked shrimp. Of these, the following were approved: pastry with custard; raw minced pork; "caldo-verde" soup; raw vegetable salad (lettuce and carrots); and solid yogurt. The approved foods inserted better into the tray, the colour of the wells was lighter, and the UV reading was easier. In the second stage, the foods were artificially contaminated with 2 log/g of E. coli (ATCC 25922) and analyzed. Colilert® proved to be an accurate method, and the counts were similar to those obtained with the reference method. In the present study, the Colilert® method revealed neither false-positive nor false-negative results; however, the results were sometimes difficult to read due to the presence of green fluorescence in some wells. Overall, Colilert® was an easy and rapid method, but less objective and more expensive than the reference method.

  5. Functional Analysis in Applied Mathematics and Engineering

    DEFF Research Database (Denmark)

    Pedersen, Michael

    1997-01-01

    Lecture notes for the course 01245 Functional Analysis. Consists of the first part of a monograph with the same title.

  6. Sensitivity analysis of dynamic biological systems with time-delays.

    Science.gov (United States)

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2010-10-15

    Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solution of the model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when solving the sensitivity equations. The computation of partial derivatives of complex equations, either analytically or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have proposed an efficient algorithm with adaptive step-size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended here to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with less user intervention. Compared with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy for end users without a programming background to use for dynamic sensitivity analysis of complex biological systems with time delays.
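    The direct method described above can be illustrated on a plain ODE: differentiating the model's right-hand side with respect to a parameter yields a sensitivity equation that is integrated alongside the state. A minimal Python sketch (the exponential-decay model, the hand-derived Jacobian and the fixed-step Euler integrator are all simplifying assumptions for illustration; the paper's algorithm uses adaptive step control and automatic differentiation, and additionally handles delays):

```python
import math

def solve_with_sensitivity(p, x0=1.0, t_end=1.0, n=10000):
    """Integrate dx/dt = -p*x together with its sensitivity s = dx/dp.
    Differentiating the RHS gives ds/dt = -x - p*s with s(0) = 0."""
    dt = t_end / n
    x, s = x0, 0.0
    for _ in range(n):
        dx = -p * x
        ds = -x - p * s      # d(RHS)/dp + d(RHS)/dx * s
        x, s = x + dt * dx, s + dt * ds
    return x, s

x, s = solve_with_sensitivity(p=2.0)
print(x, s)   # should approach exp(-2) and -exp(-2) at t = 1
```

    For dx/dt = -p·x the analytic solution is x(t) = x0·e^{-pt}, so the sensitivity is dx/dp = -t·x0·e^{-pt}; the integrated values should nearly coincide with these at t = 1.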

  7. SENSITIVITY ANALYSIS FOR PARAMETERIZED VARIATIONAL INEQUALITY PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Li Fei

    2004-01-01

    This paper presents sensitivity analysis for parameterized variational inequality problems (VIP). Under appropriate assumptions, it is shown that the perturbed solution to a parameterized VIP exists and is unique, continuous and differentiable with respect to the perturbation parameter. In the differentiable case, we derive the equations for calculating the derivative of the solution variables with respect to the perturbation parameters.

  8. Introduction: Conversation Analysis in Applied Linguistics

    Science.gov (United States)

    Sert, Olcay; Seedhouse, Paul

    2011-01-01

    This short, introductory paper presents an up-to-date account of works within the field of Applied Linguistics which have been influenced by a Conversation Analytic paradigm. The article reviews recent studies in classroom interaction, materials development, proficiency assessment and language teacher education. We believe that the publication of…

  9. Sensitivity Analysis of a Bioinspired Refractive Index Based Gas Sensor

    Institute of Scientific and Technical Information of China (English)

    Yang Gao; Qi Xia; Guanglan Liao; Tielin Shi

    2011-01-01

    It was found that a change in the refractive index of the ambient gas leads to an obvious change in the color of the Morpho butterfly's wing. This phenomenon has been employed as a sensing principle for detecting gas. In the present study, Rigorous Coupled-Wave Analysis (RCWA) is described briefly, and the partial derivative of the optical reflection efficiency with respect to the refractive index of the ambient gas, i.e., the sensitivity of the sensor, is derived based on RCWA. A bioinspired grating model was constructed by mimicking the nanostructure on the ground scale of the Morpho didius butterfly's wing. The analytical sensitivity was verified, and the effects of the grating shape on the reflection spectra and the sensitivity were discussed. The results show that by tuning the shape parameters of the grating we can obtain the desired reflection spectra and sensitivity, which can be applied to the design of bioinspired refractive-index-based gas sensors.
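    The sensitivity ∂R/∂n can always be checked against a finite difference on the underlying reflectance model. As a stand-in for the full RCWA grating computation, the sketch below uses normal-incidence Fresnel reflectance at a gas/cuticle interface (the chitin-like index n2 = 1.56 is an assumed value, not taken from the paper):

```python
def reflectance(n1, n2=1.56):
    """Normal-incidence Fresnel reflectance at an interface between the
    ambient gas (index n1) and the wing material (assumed index n2)."""
    r = (n1 - n2) / (n1 + n2)
    return r * r

def sensitivity_analytic(n1, n2=1.56):
    """Closed-form dR/dn1 = 4*n2*(n1 - n2) / (n1 + n2)**3."""
    return 4.0 * n2 * (n1 - n2) / (n1 + n2) ** 3

def sensitivity_fd(n1, h=1e-6):
    """Central finite difference as a check on the closed form."""
    return (reflectance(n1 + h) - reflectance(n1 - h)) / (2 * h)

n_air = 1.000293
print(sensitivity_analytic(n_air), sensitivity_fd(n_air))   # nearly equal
```

    The agreement of the two numbers verifies the derived derivative, which is the same consistency check the paper performs for its RCWA-based sensitivity.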

  10. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
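    Step (3), the quantitative sensitivity analysis, typically estimates variance-based indices from computer experiments. As an illustration only (PSUADE's actual estimators and sampling designs are more sophisticated), here is a pick-freeze Monte Carlo estimate of a first-order Sobol' index on an assumed linear test function:

```python
import random

def first_order_sobol(f, dim, i, n=100000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol' index
    S_i for a function f of `dim` independent U(0,1) inputs."""
    rng = random.Random(seed)
    ya, yc = [], []
    for _ in range(n):
        a = [rng.random() for _ in range(dim)]
        b = [rng.random() for _ in range(dim)]
        c = b[:]
        c[i] = a[i]                      # freeze coordinate i
        ya.append(f(a))
        yc.append(f(c))
    mean_a = sum(ya) / n
    var_a = sum(y * y for y in ya) / n - mean_a * mean_a
    cov = sum(p * q for p, q in zip(ya, yc)) / n - mean_a * (sum(yc) / n)
    return cov / var_a                   # S_i = Cov(y_A, y_C) / Var(y)

f = lambda x: 3.0 * x[0] + 1.0 * x[1]    # analytic S_0 = 9/(9+1) = 0.9
s1 = first_order_sobol(f, dim=2, i=0)
print(s1)
```

    For the linear test function the first input accounts for 9/10 of the output variance, so the estimate should land near 0.9; in a screening step, inputs with small indices would be dropped before the quantitative stage.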

  11. NIR sensitivity analysis with the VANE

    Science.gov (United States)

    Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.

    2016-05-01

    Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera that was used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with terrain and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.

  12. An approach of optimal sensitivity applied in the tertiary loop of the automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Belati, Edmarcio A. [CIMATEC - SENAI, Salvador, BA (Brazil); Alves, Dilson A. [Electrical Engineering Department, FEIS, UNESP - Sao Paulo State University (Brazil); da Costa, Geraldo R.M. [Electrical Engineering Department, EESC, USP - Sao Paulo University (Brazil)

    2008-09-15

    This paper proposes an approach of optimal sensitivity applied in the tertiary loop of the automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is directly determined after a perturbation, i.e., without the need for an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the technique of optimal sensitivity, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of the automatic generation control, referred to as the power sensitivity mode. Test results are presented to show the good performance of this approach. (author)

  13. Multistructure Statistical Model Applied To Factor Analysis

    Science.gov (United States)

    Bentler, Peter M.

    1976-01-01

    A general statistical model for the multivariate analysis of mean and covariance structures is described. Matrix calculus is used to develop the statistical aspects of one new special case in detail. This special case separates the confounding of principal components and factor analysis. (DEP)

  14. Temporal Fourier analysis applied to equilibrium radionuclide cineangiography

    Energy Technology Data Exchange (ETDEWEB)

    Cardot, J.C.; Verdenet, J.; Bidet, A.; Bidet, R.; Berthout, P.; Faivre, R.; Bassand, J.P.; Maurat, J.P.

    1982-08-01

    Regional and global left ventricular wall motion was assessed in 120 patients using radionuclide cineangiography (RCA) and contrast angiography. Functional imaging procedures based on a temporal Fourier analysis of dynamic image sequences were applied to the study of cardiac contractility. Two images were constructed by taking the phase and amplitude values of the first harmonic in the Fourier transform for each pixel. These two images aided in determining the perimeter of the left ventricle to calculate the global ejection fraction. Regional left ventricular wall motion was studied by analyzing the phase value and by examining the distribution histogram of these values. The accuracy of the global ejection fraction calculation was improved by the Fourier technique. This technique increased the sensitivity of RCA for determining segmental abnormalities, especially in the left anterior oblique (LAO) view.
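    The phase and amplitude images described above come from the first Fourier harmonic of each pixel's time-activity curve. A minimal sketch of that per-pixel computation on a synthetic curve (the cosine test signal and its parameters are assumptions for illustration):

```python
import math

def first_harmonic(samples):
    """Amplitude and phase of the first Fourier harmonic of one pixel's
    time-activity curve sampled over a single cardiac cycle."""
    n = len(samples)
    re = sum(v * math.cos(2 * math.pi * k / n) for k, v in enumerate(samples))
    im = sum(v * math.sin(2 * math.pi * k / n) for k, v in enumerate(samples))
    amp = 2.0 / n * math.hypot(re, im)
    phase = math.atan2(im, re)
    return amp, phase

# Synthetic curve: mean 100 counts, harmonic amplitude 10, phase 0.5 rad
n = 64
curve = [100 + 10 * math.cos(2 * math.pi * k / n - 0.5) for k in range(n)]
amp, phase = first_harmonic(curve)
print(amp, phase)   # ~10.0 and ~0.5
```

    Repeating this for every pixel yields the amplitude and phase images; pixels whose phase deviates from the histogram's main mode indicate regional wall-motion abnormalities.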

  15. Shape sensitivity analysis in numerical modelling of solidification

    Directory of Open Access Journals (Sweden)

    E. Majchrzak

    2007-12-01

    Full Text Available The methods of sensitivity analysis constitute a very effective tool at the stage of numerical modelling of casting solidification. Among other things, they make it possible to rebuild the basic numerical solution into solutions corresponding to disturbed values of the physical and geometrical parameters of the process. In this paper the problem of shape sensitivity analysis is discussed. A non-homogeneous casting-mould domain is considered and the perturbation of the solidification process due to changes in geometrical dimensions is analyzed. From the mathematical point of view the sensitivity model is rather complex, but its solution gives interesting information concerning the mutual connections between the kinetics of casting solidification and its basic dimensions. In the final part of the paper an example of the computations is shown. At the stage of numerical realization the finite difference method has been applied.

  16. Sensitivity analysis of the critical speed in railway vehicle dynamics

    Science.gov (United States)

    Bigoni, D.; True, H.; Engsig-Karup, A. P.

    2014-05-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, high-dimensional model representation and total sensitivity indices. It is applied to a half car with a two-axle Cooperrider bogie, in order to study the sensitivity of the critical speed with respect to the suspension parameters. The importance of a certain suspension component is expressed by the variance in critical speed that is ascribable to it. This proves to be useful in the identification of parameters for which the accuracy of their values is critically important. The approach has a general applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems.

  17. Applied time series analysis and innovative computing

    CERN Document Server

    Ao, Sio-Iong

    2010-01-01

    This text is a systematic, state-of-the-art introduction to the use of innovative computing paradigms as an investigative tool for applications in time series analysis. It includes frontier case studies based on recent research.

  18. Functional Data Analysis Applied in Chemometrics

    DEFF Research Database (Denmark)

    Muller, Martha

    the worlds of statistics and chemometrics. We want to provide a glimpse of the essential and complex data pre-processing that is well known to chemometricians, but is generally unknown to statisticians. Pre-processing can potentially have a strong influence on the results of consequent data analysis. Our......In this thesis we explore the use of functional data analysis as a method to analyse chemometric data, more specifically spectral data in metabolomics. Functional data analysis is a vibrant field in statistics. It has been rapidly expanding in both methodology and applications since it was made well...... known by Ramsay & Silverman's monograph in 1997. In functional data analysis, the data are curves instead of data points. Each curve is measured at discrete points along a continuum, for example, time or frequency. It is assumed that the underlying process generating the curves is smooth

  19. Functional Data Analysis Applied in Chemometrics

    DEFF Research Database (Denmark)

    Muller, Martha

    In this thesis we explore the use of functional data analysis as a method to analyse chemometric data, more specifically spectral data in metabolomics. Functional data analysis is a vibrant field in statistics. It has been rapidly expanding in both methodology and applications since it was made well...... known by Ramsay & Silverman's monograph in 1997. In functional data analysis, the data are curves instead of data points. Each curve is measured at discrete points along a continuum, for example, time or frequency. It is assumed that the underlying process generating the curves is smooth......, but it is not assumed that the adjacent points measured along the continuum are independent. Standard chemometric methods originate from the field of multivariate analysis, where variables are often assumed to be independent. Typically these methods do not explore the rich functional nature of spectral data. Metabolomics

  20. An elementary introduction to applied signal analysis

    DEFF Research Database (Denmark)

    Jacobsen, Finn

    2000-01-01

    An introduction to some of the most fundamental concepts and methods of signal analysis and signal processing is presented with particular regard to acoustic measurements. The purpose is to give the reader so much basic knowledge of signal analysis that he can use modern digital equipment in some...... of the most important acoustic measurements, eg measurements of transfer functions of lightly damped multi-modal systems (rooms and structures)....

  1. An elementary introduction to applied signal analysis

    DEFF Research Database (Denmark)

    Jacobsen, Finn

    1997-01-01

    An introduction to some of the most fundamental concepts and methods of signal analysis and signal processing is presented with particular regard to acoustic measurements. The purpose is to give the reader so much basic knowledge of signal analysis that he can use modern digital equipment in some...... of the most important acoustic measurements, eg measurements of transfer functions of lightly damped multi-modal systems (rooms and structures)....

  2. Sensitivity of process design to uncertainties in property estimates applied to extractive distillation

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Hukkerikar, Amol; Sin, Gürkan;

    through the calculation steps to such an extent that the final design might not be feasible or lead to poor performance. Therefore it is necessary to evaluate the sensitivity of process design to the uncertainties in property estimates obtained from thermo-physical property models. Uncertainty...... and sensitivity analysis can be combined to determine which properties are of critical importance from process design point of view and to establish an acceptable level of accuracy for different thermo-physical property methods employed. This helps the user to determine if additional property measurements...... in the laboratory are required or to find more accurate values in the literature. A tailor-made and more efficient experimentation schedule is the result. This work discusses a systematic methodology for performing analysis of sensitivity of process design to uncertainties in property estimates. The application...

  3. Applied modal analysis of wind turbine blades

    DEFF Research Database (Denmark)

    Pedersen, H.B.; Kristensen, O.J.D.

    2003-01-01

    In this project modal analysis has been used to determine the natural frequencies, damping and mode shapes of wind turbine blades. Different methods to measure the position and adjust the direction of the measuring points are discussed. Different equipment for mounting the accelerometers...... is investigated by repeated measurement on the same wind turbine blade. Furthermore the flexibility of the test set-up is investigated, by use of accelerometers mounted on the flexible adapter plate during the measurement campaign. One experimental campaign investigated the results obtained from a loaded...... and unloaded wind turbine blade. During this campaign the modal analysis was performed on a blade mounted in a horizontal and a vertical position respectively. Finally the results obtained from modal analysis carried out on a wind turbine blade are compared with results obtained from the Stig Øyes blade_EV1...

  4. Applied quantitative analysis in the social sciences

    CERN Document Server

    Petscher, Yaacov; Compton, Donald L

    2013-01-01

    To say that complex data analyses are ubiquitous in the education and social sciences might be an understatement. Funding agencies and peer-review journals alike require that researchers use the most appropriate models and methods for explaining phenomena. Univariate and multivariate data structures often require the application of more rigorous methods than basic correlational or analysis of variance models. Additionally, though a vast set of resources may exist on how to run analysis, difficulties may be encountered when explicit direction is not provided as to how one should run a model

  5. Reliability Sensitivity Analysis for Location Scale Family

    Institute of Scientific and Technical Information of China (English)

    洪东跑; 张海瑞

    2011-01-01

    Many products operate under various complex environmental conditions. To describe the dynamic influence of environment factors on their reliability, a method of reliability sensitivity analysis is proposed. In this method, the location parameter is assumed to be a function of the relevant environment variables, while the scale parameter is assumed to be an unknown positive constant. The location parameter function is then constructed using the radial basis function method. Using the varied-environment test data, the log-likelihood function is transformed into a generalized linear expression by describing the indicator as a Poisson variable. With the generalized linear model, the maximum likelihood estimates of the model coefficients are obtained. With the reliability model, the reliability sensitivity is obtained. An instance analysis shows that the method is feasible for analyzing the dynamic variation of reliability with environment factors and is straightforward in engineering application.

  6. Applying Image Matching to Video Analysis

    Science.gov (United States)

    2010-09-01


  7. Applied Spectrophotometry: Analysis of a Biochemical Mixture

    Science.gov (United States)

    Trumbo, Toni A.; Schultz, Emeric; Borland, Michael G.; Pugh, Michael Eugene

    2013-01-01

    Spectrophotometric analysis is essential for determining biomolecule concentration of a solution and is employed ubiquitously in biochemistry and molecular biology. The application of the Beer-Lambert-Bouguer Law is routinely used to determine the concentration of DNA, RNA or protein. There is however a significant difference in determining the…
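    For a two-component mixture measured at two wavelengths, the Beer-Lambert-Bouguer law gives a 2×2 linear system in the concentrations. A sketch with hypothetical molar absorptivities (the numeric values are illustrative, not taken from the article):

```python
def two_component_concentrations(A1, A2, e11, e12, e21, e22, path=1.0):
    """Solve the Beer-Lambert-Bouguer system A_i = sum_j e_ij * c_j * l
    for two components measured at two wavelengths, via Cramer's rule.
    e_ij is the molar absorptivity of component j at wavelength i."""
    det = (e11 * e22 - e12 * e21) * path
    c1 = (A1 * e22 - A2 * e12) / det
    c2 = (e11 * A2 - e21 * A1) / det
    return c1, c2

# Hypothetical molar absorptivities (L mol^-1 cm^-1)
e11, e12 = 15000.0, 3000.0   # wavelength 1: component 1, component 2
e21, e22 = 2000.0, 8000.0    # wavelength 2
c1_true, c2_true = 2e-5, 5e-5
A1 = e11 * c1_true + e12 * c2_true   # synthetic absorbances, l = 1 cm
A2 = e21 * c1_true + e22 * c2_true
c1, c2 = two_component_concentrations(A1, A2, e11, e12, e21, e22)
print(c1, c2)   # recovers 2e-5 and 5e-5 mol/L
```

    The method only works when the two absorptivity vectors are linearly independent (non-zero determinant), i.e. the components' spectra differ at the chosen wavelengths.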

  8. Science, Skepticism, and Applied Behavior Analysis

    Science.gov (United States)

    Normand, Matthew P

    2008-01-01

    Pseudoscientific claims concerning medical and psychological treatments of all varieties are commonplace. As behavior analysts, a sound skeptical approach to our science and practice is essential. The present paper offers an overview of science and skepticism and discusses the relationship of skepticism to behavior analysis, with an emphasis on the types of issues concerning behavior analysts in practice. PMID:22477687

  9. Demonstration sensitivity analysis for RADTRAN III

    Energy Technology Data Exchange (ETDEWEB)

    Neuhauser, K S; Reardon, P C

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose to combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability consequence curves.

  10. Structural Optimization of Slender Robot Arm Based on Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Zhong Luo

    2012-01-01

    Full Text Available An effective structural optimization method based on sensitivity analysis is proposed to optimize the variable section of a slender robot arm. The structural mechanism and operating principle of a polishing robot are introduced first, and its stiffness model is established. Then a design sensitivity analysis method and a sequential linear programming (SLP) strategy are developed. At the beginning of the optimization, the design sensitivity analysis method is applied to select the sensitive design variables, which makes the optimized results more efficient and accurate. In addition, it can also be used to determine the scale of the moving step, which improves convergence during the optimization process. The design sensitivities are calculated using the finite difference method. The search for the final optimal structure is performed using the SLP method. Simulation results show that the proposed structural optimization method is effective in enhancing the stiffness of the robot arm regardless of whether the arm is subjected to a constant force or variable forces.
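    The finite-difference design sensitivities used to rank variables can be sketched on a much simpler stiffness model. Here a cantilever-beam deflection formula stands in for the robot arm's finite-element stiffness model (an assumption for illustration); the sensitivities show that the section height influences stiffness far more than the width:

```python
def tip_deflection(b, h, F=100.0, L=1.0, E=2.1e11):
    """Cantilever tip deflection: delta = F*L^3 / (3*E*I), I = b*h^3/12."""
    I = b * h ** 3 / 12.0
    return F * L ** 3 / (3.0 * E * I)

def fd_sensitivity(f, x, i, rel=1e-4):
    """Forward finite-difference sensitivity df/dx_i with a relative step."""
    step = rel * x[i]
    xp = list(x)
    xp[i] += step
    return (f(*xp) - f(*x)) / step

x = [0.02, 0.04]   # design variables: width b and height h, in metres
sens = [fd_sensitivity(tip_deflection, x, i) for i in range(len(x))]
print(sens)   # d(delta)/db and d(delta)/dh; the height term dominates
```

    Since delta ∝ 1/(b·h³), the analytic sensitivities are -delta/b and -3·delta/h; variables with the largest magnitudes would be kept as design variables, and the step scale in the SLP iteration would be set from them.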

  11. Thermal analysis applied to irradiated propolis

    Science.gov (United States)

    Matsuda, Andrea Harumi; Machado, Luci Brocardo; del Mastro, Nélida Lucia

    2002-03-01

    Propolis is a resinous hive product collected by bees. Raw propolis requires a decontamination procedure, and irradiation appears to be a promising technique for this purpose. The valuable properties of propolis for the food and pharmaceutical industries have led to increasing interest in its technological behavior. Thermal analysis is a chemical analysis that gives information about changes on heating, of great importance for technological applications. Ground propolis samples were 60Co gamma irradiated with 0 and 10 kGy. Thermogravimetry curves showed a similar multi-stage decomposition pattern for both irradiated and unirradiated samples up to 600°C. Similarly, differential scanning calorimetry showed coincident melting points for irradiated and unirradiated samples. The results suggest that the irradiation process does not interfere with the thermal properties of propolis when irradiated up to 10 kGy.

  12. Applied bioinformatics: Genome annotation and transcriptome analysis

    DEFF Research Database (Denmark)

    Gupta, Vikas

    and dhurrin, which have not previously been characterized in blueberries. There are more than 44,500 spider species with distinct habitats and unique characteristics. Spiders are masters of producing silk webs to catch prey and using venom to neutralize. The exploration of the genetics behind these properties...... japonicus (Lotus), Vaccinium corymbosum (blueberry), Stegodyphus mimosarum (spider) and Trifolium occidentale (clover). From a bioinformatics data analysis perspective, my work can be divided into three parts; genome annotation, small RNA, and gene expression analysis. Lotus is a legume of significant...... has just started. We have assembled and annotated the first two spider genomes to facilitate our understanding of spiders at the molecular level. The need for analyzing the large and increasing amount of sequencing data has increased the demand for efficient, user friendly, and broadly applicable...

  13. Thermal analysis applied to irradiated propolis

    Energy Technology Data Exchange (ETDEWEB)

    Matsuda, Andrea Harumi; Machado, Luci Brocardo; Mastro, N.L. del E-mail: nelida@usp.br

    2002-03-01

    Propolis is a resinous hive product collected by bees. Raw propolis requires a decontamination procedure, and irradiation appears to be a promising technique for this purpose. The valuable properties of propolis for the food and pharmaceutical industries have led to increasing interest in its technological behavior. Thermal analysis is a chemical analysis that gives information about changes on heating, of great importance for technological applications. Ground propolis samples were {sup 60}Co gamma irradiated with 0 and 10 kGy. Thermogravimetry curves showed a similar multi-stage decomposition pattern for both irradiated and unirradiated samples up to 600 deg. C. Similarly, differential scanning calorimetry showed coincident melting points for irradiated and unirradiated samples. The results suggest that the irradiation process does not interfere with the thermal properties of propolis when irradiated up to 10 kGy.

  14. Sensitivity Analysis of Fire Dynamics Simulation

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.

    2007-01-01

    equations require solution of the issues of combustion and gas radiation to mention a few. This paper performs a sensitivity analysis of a fire dynamics simulation on a benchmark case where measurement results are available for comparison. The analysis is performed using the method of Elementary Effects......In case of fire dynamics simulation, requirements for reliable results are most often very high due to the severe consequences of erroneous results. At the same time it is a well-known fact that fire dynamics simulation constitutes rather complex physical phenomena which apart from flow and energy

  15. Applying centrality measures to impact analysis: A coauthorship network analysis

    CERN Document Server

    Yan, Erjia

    2010-01-01

    Many studies on coauthorship networks focus on network topology and network statistical mechanics. This article takes a different approach by studying micro-level network properties, with the aim of applying centrality measures to impact analysis. Using coauthorship data from 16 journals in the field of library and information science (LIS) with a time span of twenty years (1988-2007), we construct an evolving coauthorship network and calculate four centrality measures (closeness, betweenness, degree and PageRank) for authors in this network. We find that the four centrality measures are significantly correlated with citation counts. We also discuss the usability of centrality measures in author ranking, and suggest that centrality measures can be useful indicators for impact analysis.
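    Of the four measures, degree centrality is the simplest to compute from coauthorship pairs. A toy sketch (the small network is invented for illustration, not the LIS dataset):

```python
from collections import defaultdict

def centralities(edges):
    """Normalised degree centrality for each author in an undirected
    coauthorship graph given as (author, author) pairs: the number of
    distinct co-authors divided by the maximum possible, n - 1."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {v: len(nb) / (n - 1) for v, nb in adj.items()}

# Invented toy network: A and C co-publish with everyone, B and D less so
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("C", "D")]
print(centralities(edges))
```

    In an impact study of the kind described, each author's centrality scores would then be correlated (e.g. with Spearman's rank correlation) against their citation counts.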

  16. A parameter estimation and identifiability analysis methodology applied to a street canyon air pollution model

    DEFF Research Database (Denmark)

    Ottosen, Thor Bjørn; Ketzel, Matthias; Skov, Henrik

    2016-01-01

    Pollution Model (OSPM®). To assess the predictive validity of the model, the data is split into an estimation and a prediction data set using two data splitting approaches and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, being part......Mathematical models are increasingly used in environmental science thus increasing the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model – the Operational Street...

  17. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...... are the most significant in each case. We apply the Sobol method, which is a quantitative method that gives the percentage of the total output variance that each parameter accounts for. The most important parameter is found to be the energy release rate that explains 92% of the uncertainty in the calculated...... results for the period before thermal penetration (tp) has occurred. The analysis is also done for all combinations of two parameters in order to find the combination with the largest effect. The Sobol total for pairs had the highest value for the combination of energy release rate and area of opening...

  18. Artificial intelligence applied to process signal analysis

    Science.gov (United States)

    Corsberg, Dan

    1988-01-01

    Many space station processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect of the human/machine interface is the analysis and display of process information. Human operators can be overwhelmed by large clusters of alarms that inhibit their ability to diagnose and respond to a disturbance. Using artificial intelligence techniques and a knowledge base approach to this problem, the power of the computer can be used to filter and analyze plant sensor data. This will provide operators with a better description of the process state. Once a process state is recognized, automatic action could be initiated and proper system response monitored.

  20. Thermal transient analysis applied to horizontal wells

    Energy Technology Data Exchange (ETDEWEB)

    Duong, A.N. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[ConocoPhillips Canada Resources Corp., Calgary, AB (Canada)

    2008-10-15

    Steam assisted gravity drainage (SAGD) is a thermal recovery process used to recover bitumen and heavy oil. This paper presented a newly developed model to estimate cooling time and formation thermal diffusivity by using a thermal transient analysis along the horizontal wellbore under a steam heating process. This radial conduction heating model provides information on the heat influx distribution along a horizontal wellbore or elongated steam chamber, and is therefore important for determining the effectiveness of the heating process in the start-up phase in SAGD. Net heat flux estimation in the target formation during start-up can be difficult to measure because of uncertainties regarding heat loss in the vertical section; steam quality along the horizontal segment; distribution of steam along the wellbore; operational conditions; and additional effects of convection heating. The newly presented model can be considered analogous to pressure transient analysis of a buildup after a constant pressure drawdown. The model is based on an assumption of an infinite-acting system. This paper also proposed a new concept of a heating ring to measure the heat storage in the heated bitumen at the time of testing. Field observations were used to demonstrate how the model can be used to save heat energy, conserve steam and enhance bitumen recovery. 18 refs., 14 figs., 2 appendices.

  1. Photometric analysis applied in determining facial type

    Directory of Open Access Journals (Sweden)

    Luciana Flaquer Martins

    2012-10-01

    INTRODUCTION: In orthodontics, determining the facial type is a key element in making a correct diagnosis. In the early days of our specialty, observation and measurement of craniofacial structures were done directly on the face, on photographs, or on plaster casts. With the development of radiographic methods, cephalometric analysis replaced direct facial analysis. Seeking to validate the analysis of facial soft tissues, this work compares two different methods used to determine the facial type: the anthropometric and the cephalometric methods. METHODS: The sample consisted of sixty-four Brazilian individuals, adults, Caucasian, of both genders, who agreed to participate in this research. All individuals had lateral cephalograms and frontal facial photographs. The facial types were determined by the Vert Index (cephalometric) and the Facial Index (photographs). RESULTS: The agreement analysis (Kappa), made for both types of analysis, found an agreement of 76.5%. CONCLUSIONS: We concluded that the Facial Index can be used as an adjunct to orthodontic diagnosis, or as an alternative method for pre-selection of a sample, sparing research subjects unnecessary tests.
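
    The Kappa agreement statistic used in this study can be computed from a cross-tabulation of the two classifications. The counts below are invented for illustration; only the formula mirrors the study's analysis:

```python
import numpy as np

# Hypothetical cross-tabulation of facial types assigned by the two methods
# (rows: cephalometric Vert Index; columns: photographic Facial Index).
table = np.array([
    [15,  3,  0],
    [ 2, 20,  3],
    [ 1,  2, 18],
], dtype=float)

n = table.sum()
p_observed = np.trace(table) / n                     # raw agreement
row_m, col_m = table.sum(axis=1), table.sum(axis=0)  # marginal totals
p_expected = (row_m * col_m).sum() / n**2            # agreement expected by chance
kappa = (p_observed - p_expected) / (1.0 - p_expected)
```

    Kappa corrects raw agreement for the agreement two raters would reach by chance given their marginal distributions; values around 0.7-0.8, like the 76.5% agreement reported, are conventionally read as substantial agreement.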

  2. Multivariate analysis applied to tomato hybrid production.

    Science.gov (United States)

    Balasch, S; Nuez, F; Palomares, G; Cuartero, J

    1984-11-01

    Twenty characters were measured on 60 tomato varieties cultivated in the open air and in a polyethylene plastic house. Data were analyzed by means of principal components, factorial discriminant methods, Mahalanobis D(2) distances and principal coordinate techniques. The factorial discriminant and Mahalanobis D(2) distance methods, both of which require collecting data plant by plant, lead to the same conclusions as the principal components method, which only requires taking data by plots. The characters that make up the principal components in both environments studied are the same, although the relative importance of each of them varies within the principal components. By combining the information supplied by multivariate analysis with the inheritance mode of the characters, crosses among cultivars can be designed that will produce heterotic hybrids showing characters within previously established limits.
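
    Two of the methods named here, principal components (via SVD of the centered data matrix) and Mahalanobis D² distances, can be sketched in a few lines. The data below are random stand-ins, not the tomato measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data: 30 "varieties" x 5 "characters", correlated at random.
X = rng.normal(size=(30, 5)) @ rng.normal(size=(5, 5))

# Principal components from the SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()   # variance share of each component
scores = Xc @ Vt.T                # variety coordinates on the components

# Mahalanobis D^2 between two varieties under the pooled covariance.
cov = np.cov(Xc, rowvar=False)
diff = X[0] - X[1]
D2 = float(diff @ np.linalg.solve(cov, diff))
```

    Unlike Euclidean distance, D² whitens the character space by the covariance matrix, so strongly correlated characters are not double-counted when comparing varieties.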

  3. Toward applied behavior analysis of life aloft

    Science.gov (United States)

    Brady, J. V.

    1990-01-01

    This article deals with systems at multiple levels, at least from cell to organization. It also deals with learning, decision making, and other behavior at multiple levels. Technological development of a human behavioral ecosystem appropriate to space environments requires an analytic and synthetic orientation, explicitly experimental in nature, dictated by scientific and pragmatic considerations, and closely approximating procedures of established effectiveness in other areas of natural science. The conceptual basis of such an approach has its roots in environmentalism, which has two main features: (1) knowledge comes from experience rather than from innate ideas, divine revelation, or other obscure sources; and (2) action is governed by consequences rather than by instinct, reason, will, beliefs, attitudes or even the currently fashionable cognitions. Without an experimentally derived data base founded upon such a functional analysis of human behavior, the overgenerality of "ecological systems" approaches renders them incapable of ensuring the successful establishment of enduring space habitats. Without an experimentally derived functional account of individual behavioral variability, a natural science of behavior cannot exist. And without a natural science of behavior, the social sciences will necessarily remain in their current status as disciplines of less than optimal precision or utility. Such a functional analysis of human performance should provide an operational account of behavior change in a manner similar to the way in which Darwin's approach to natural selection accounted for the evolution of phylogenetic lines (i.e., in descriptive, nonteleological terms). Similarly, as Darwin's account has subsequently been shown to be consonant with information obtained at the cellular level, so too should behavior principles ultimately prove to be in accord with an account of ontogenetic adaptation at a biochemical level. It would thus seem obvious that the most…

  4. Sensitivity Analysis of Automated Ice Edge Detection

    Science.gov (United States)

    Moen, Mari-Ann N.; Isaksem, Hugo; Debien, Annekatrien

    2016-08-01

    The importance of highly detailed and time sensitive ice charts has increased with the increasing interest in the Arctic for oil and gas, tourism, and shipping. Manual ice charts are prepared by national ice services of several Arctic countries. Methods are also being developed to automate this task. Kongsberg Satellite Services uses a method that detects ice edges within 15 minutes after image acquisition. This paper describes a sensitivity analysis of the ice edge, assessing to which ice concentration class from the manual ice charts it can be compared to. The ice edge is derived using the Ice Tracking from SAR Images (ITSARI) algorithm. RADARSAT-2 images of February 2011 are used, both for the manual ice charts and the automatic ice edges. The results show that the KSAT ice edge lies within ice concentration classes with very low ice concentration or open water.

  5. Digital photoelastic analysis applied to implant dentistry

    Science.gov (United States)

    Ramesh, K.; Hariprasad, M. P.; Bhuvanewari, S.

    2016-12-01

    Development of improved designs of implant systems in dentistry has necessitated the study of stress fields in the implant regions of the mandible/maxilla for better understanding of the biomechanics involved. Photoelasticity has been used for various studies related to dental implants in view of its whole-field visualization of maximum shear stress in the form of isochromatic contours. The potential of digital photoelasticity has not been fully exploited in the field of implant dentistry. In this paper, the fringe field in the vicinity of the connected implants (All-On-Four® concept) is analyzed using recent advances in digital photoelasticity. Initially, a novel 3-D photoelastic model-making procedure, closely mimicking all the anatomical features of the human mandible, is proposed. By choosing an appropriate orientation of the model with respect to the light path, the essential regions of interest could be analysed while keeping the model under live loading conditions. The need for a sophisticated software module to carefully identify the model domain is brought out. For data extraction, a five-step method is used, and isochromatics are evaluated by twelve-fringe photoelasticity. In addition to the isochromatic fringe field, whole-field isoclinic data is also obtained for the first time in implant dentistry, which could yield important information for improving the structural stability of implant systems. Analysis is carried out for the implant in the molar as well as the incisor region. In addition, the interaction effects of a loaded molar implant on the incisor area are also studied.

  6. Goals Analysis Procedure Guidelines for Applying the Goals Analysis Process

    Science.gov (United States)

    Motley, Albert E., III

    2000-01-01

    One of the key elements to successful project management is the establishment of the "right set of requirements", requirements that reflect the true customer needs and are consistent with the strategic goals and objectives of the participating organizations. A viable set of requirements implies that each individual requirement is a necessary element in satisfying the stated goals and that the entire set of requirements, taken as a whole, is sufficient to satisfy the stated goals. Unfortunately, it is the author's experience that during project formulation phases, many of the Systems Engineering customers do not conduct a rigorous analysis of the goals and objectives that drive the system requirements. As a result, the Systems Engineer is often provided with requirements that are vague, incomplete, and internally inconsistent. To complicate matters, most systems development methodologies assume that the customer provides unambiguous, comprehensive and concise requirements. This paper describes the specific steps of a Goals Analysis process applied by Systems Engineers at the NASA Langley Research Center during the formulation of requirements for research projects. The objective of Goals Analysis is to identify and explore all of the influencing factors that ultimately drive the system's requirements.

  7. Applied research of environmental monitoring using instrumental neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Sam; Moon, Jong Hwa; Chung, Young Ju

    1997-08-01

    This technical report is written as a guidebook for applied research on environmental monitoring using instrumental neutron activation analysis. The contents are as follows: sampling and sample preparation of airborne particulate matter; analytical methodologies; data evaluation and interpretation; and basic statistical methods of data analysis applied in environmental pollution studies. (author). 23 refs., 7 tabs., 9 figs.

  8. Hands on applied finite element analysis application with ANSYS

    CERN Document Server

    Arslan, Mehmet Ali

    2015-01-01

    Hands on Applied Finite Element Analysis Application with Ansys is truly an extraordinary book that offers practical ways of tackling FEA problems in machine design and analysis. In this book, a good selection of 35 example problems is presented, offering students the opportunity to apply their knowledge to real engineering FEA problem solutions by guiding them with real-life, hands-on experience.

  9. LCA data quality: sensitivity and uncertainty analysis.

    Science.gov (United States)

    Guo, M; Murphy, R J

    2012-10-01

    Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias into LCA outcomes, especially in toxicity impact categories; thus dynamic LCA characterization models with varying time horizons are recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrating statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this has enabled us to assign confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions.
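
    The kind of statistical integration this abstract describes can be sketched as Monte Carlo propagation of inventory uncertainty into an impact score. The lognormal emission factors and the comparison benchmark below are invented for illustration, not life-cycle inventory data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical inventory: cradle-to-gate GWP as the sum of three process
# contributions, each with lognormal uncertainty (factors invented).
n = 10000
electricity = rng.lognormal(np.log(2.0), 0.2, n)   # kg CO2e per unit
transport   = rng.lognormal(np.log(0.5), 0.4, n)
feedstock   = rng.lognormal(np.log(1.2), 0.3, n)
gwp = electricity + transport + feedstock

low, high = np.percentile(gwp, [2.5, 97.5])        # 95% interval
median_gwp = float(np.median(gwp))

# Probability this product beats a fixed benchmark of 4.5 kg CO2e
# (benchmark invented for illustration).
p_better = float((gwp < 4.5).mean())
```

    Reporting an interval and an exceedance probability, rather than a single point score, is exactly the "assigning confidence to LCIA outcomes" the abstract argues for in comparative assessments.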

  10. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.

  11. Dynamic Resonance Sensitivity Analysis in Wind Farms

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei

    2017-01-01

    Unlike conventional power systems, where resonance frequencies are mainly determined by passive impedances, wind farms present a more complex situation, where the control systems of the power electronic converters also introduce active impedances. This paper presents an approach to find the resonance … (PFs) are calculated by critical eigenvalue sensitivity analysis with respect to the entries of the MIMO matrix. The PF analysis locates the bus where the resonances are most excited, which can be the best location to install passive or active filters to reduce harmonic resonance problems. Time…
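
    Participation-factor analysis of the kind mentioned here can be sketched from the left and right eigenvectors of a state matrix; the state (bus) with the largest factor in a problematic mode is the natural candidate location for a filter. The matrix below is an arbitrary small example, not a wind-farm model:

```python
import numpy as np

# Small illustrative state matrix; a real wind-farm MIMO model would be
# much larger and derived from the converter control loops.
A = np.array([
    [-1.0,  0.2,  0.0],
    [ 0.1, -2.0,  0.5],
    [ 0.0,  0.3, -0.5],
])

eigvals, R = np.linalg.eig(A)     # right eigenvectors in the columns of R
L = np.linalg.inv(R)              # left eigenvectors in the rows of L

# Participation of state k in mode i: p_ik = |L[i, k] * R[k, i]|,
# normalized so each mode's participations sum to one.
P = np.abs(L * R.T)
P = P / P.sum(axis=1, keepdims=True)

dominant_state = P.argmax(axis=1)  # most-participating state per mode
```

    Each row of P then ranks the states by how strongly they participate in (and can excite or damp) the corresponding mode, which is the basis of the bus ranking the abstract describes.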

  12. Sensitivity analysis in discrete multiple criteria decision problems: on the siting of nuclear power plants

    NARCIS (Netherlands)

    Janssen, R.; Rietveld, P.

    1989-01-01

    Inclusion of evaluation methods in decision support systems opens the way for extensive sensitivity analysis. In this article, new methods for sensitivity analysis are developed and applied to the siting of nuclear power plants in the Netherlands.

  13. A Sensitivity Analysis of SOLPS Plasma Detachment

    Science.gov (United States)

    Green, D. L.; Canik, J. M.; Eldon, D.; Meneghini, O.; AToM SciDAC Collaboration

    2016-10-01

    Predicting the scrape off layer plasma conditions required for the ITER plasma to achieve detachment is an important issue when considering divertor heat load management options that are compatible with desired core plasma operational scenarios. Given the complexity of the scrape off layer, such predictions often rely on an integrated model of plasma transport with many free parameters. However, the sensitivity of any given prediction to the choices made by the modeler is often overlooked due to the logistical difficulties in completing such a study. Here we utilize an OMFIT workflow to enable a sensitivity analysis of the midplane density at which detachment occurs within the SOLPS model. The workflow leverages the TaskFarmer technology developed at NERSC to launch many instances of the SOLPS integrated model in parallel to probe the high dimensional parameter space of SOLPS inputs. We examine both predictive and interpretive models where the plasma diffusion coefficients are chosen to match an empirical scaling for divertor heat flux width or experimental profiles respectively. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility, and is supported under Contracts DE-AC02-05CH11231, DE-AC05-00OR22725 and DE-SC0012656.

  14. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN)

    Science.gov (United States)

    2015-04-01

    ARL-TR-7250 ● APR 2015 ● US Army Research Laboratory. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN), by William M Sherrill, Weapons and Materials Research Directorate.

  15. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    Energy Technology Data Exchange (ETDEWEB)

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  16. The ability of periorbitally applied antiglare products to improve contrast sensitivity in conditions of sunlight exposure.

    Science.gov (United States)

    DeBroff, Brian M; Pahk, Patricia J

    2003-07-01

    Sun glare decreases athletes' contrast sensitivity and impairs their ability to distinguish objects from background. Many commercial products claim to reduce glare but have not been proven effective in clinical studies. To determine whether glare-reducing products such as eye black grease and antiglare stickers reduce glare and improve contrast sensitivity during sunlight exposure. We tested 46 subjects for contrast sensitivity using a Pelli-Robson contrast chart. Each subject served as an internal control and then was randomized to either application of eye black grease, antiglare stickers, or petroleum jelly at the infraorbital rim. All testing was performed in conditions of unobstructed sunlight. Analysis of variance revealed a significant difference between eye black grease (mean +/- SD, Pelli-Robson value, 1.87 +/- 0.09 logMAR units) and antiglare stickers (1.75 +/- 0.14 logMAR units) in binocular testing (P =.02). No statistical difference was found between the groups in right eyes, left eyes, or in combined data from the right and left eyes. Paired t tests demonstrated a significant difference between control (mean +/- SD, 1.77 +/- 0.14 logMAR units) and eye black grease (1.87 +/- 0.09 logMAR units) in binocular testing (P =.04). There was also a significant difference between control (mean +/- SD, 1.65 +/- 0.05 logMAR units) and eye black grease (1.67 +/- 0.06 logMAR units) in combined data from the right and left eyes (P =.02). Eye black grease reduces glare and improves contrast sensitivity in conditions of sunlight exposure compared with the control and antiglare stickers in binocular testing.

  17. Sensitivity analysis of distributed volcanic source inversion

    Science.gov (United States)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes the advantages of fast calculation from the analytical models and adds the capability to model free-shape distributed sources. Assuming homogenous elastic conditions, the approach can determine general geometrical configurations of pressured and/or density source and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregation of elemental point sources for pressure, density and slip, and they fit the whole data (keeping some 3D regularity conditions). Although some examples and applications have been already presented to demonstrate the ability of the algorithm in reconstructing a magma pressure source (e.g. Camacho et al., 2011,Cannavò et al., 2015), a systematic analysis of sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in inversions. In particular, besides the source parameters, we focused on the ground deformation network topology, and noise in measurements. The proposed analysis can be used for a better interpretation of the algorithm results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep

  18. Parametric Variations Sensitivity Analysis on IM Discrete Speed Estimation

    Directory of Open Access Journals (Sweden)

    Mohamed BEN MESSAOUD

    2007-09-01

    Motivation: This paper discusses sensitivity issues in rotor speed estimation for induction machine (IM) drives using only voltage and current measurements. A supervised estimation algorithm is proposed with the aim of achieving good performance over large speed variations. After a brief presentation of the discrete feedback structure of the estimator, formulated from the d-q axis equations, we examine its performance under machine parameter variations. Method: The hyperstability concept was applied to synthesize the adaptation law. A heuristic term is added to the algorithm to maintain good speed estimation at high speeds. Results: In simulation, the estimation error remains relatively low over a wide range of speeds, and the robustness of the estimation algorithm is shown for machine parameter variations. Conclusions: A sensitivity analysis of the proposed sensorless IM drive to motor parameter changes is then performed.

  19. Negative Reinforcement in Applied Behavior Analysis: An Emerging Technology.

    Science.gov (United States)

    Iwata, Brian A.

    1987-01-01

    The article describes three aspects of negative reinforcement as it relates to applied behavior analysis: behavior acquired or maintained through negative reinforcement, the treatment of negatively reinforced behavior, and negative reinforcement as therapy. Current research suggests the emergence of an applied technology on negative reinforcement.…

  20. Animal Research in the "Journal of Applied Behavior Analysis"

    Science.gov (United States)

    Edwards, Timothy L.; Poling, Alan

    2011-01-01

    This review summarizes the 6 studies with nonhuman animal subjects that have appeared in the "Journal of Applied Behavior Analysis" and offers suggestions for future research in this area. Two of the reviewed articles described translational research in which pigeons were used to illustrate and examine behavioral phenomena of applied significance…

  1. Applying Discourse Analysis in ELT: a Five Cs Model

    Institute of Scientific and Technical Information of China (English)

    肖巧慧

    2009-01-01

    Based on a discussion of definitions of discourse analysis, discourse is regarded as consisting of layers made up of five elements: cohesion, coherence, culture, critique, and context. We then focus on applying DA in ELT.

  2. Applied Thinking for Intelligence Analysis: A Guide for Practitioners

    Directory of Open Access Journals (Sweden)

    Trista M. Bailey

    2015-03-01

    Full Text Available Book Review -- Applied Thinking for Intelligence Analysis: A Guide for Practitioners by Charles Vandepeer, PhD, Air Power Development Centre, Department of Defence, Canberra, Australia, 2014, 106 pages, ISBN 13: 9781925062045, Reviewed by Trista M. Bailey

  3. Sensitivity study of a semiautomatic supervised classifier applied to minerals from x-ray mapping images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Flesche, Harald

    1999-01-01

    … spectroscopy (EDS) in a scanning electron microscope (SEM). Extensions to traditional multivariate statistical methods are applied to perform the classification. Training sets are grown from one or a few seed points by a method that ensures spatial and spectral closeness of observations. Spectral closeness … to a small area in order to allow for the estimation of a variance-covariance matrix. This expansion is controlled by upper limits for the spatial and Euclidean spectral distances from the seed point. Second, after this initial expansion the growing of the training set is controlled by an upper limit … training, a standard quadratic classifier is applied. The performance for each parameter setting is measured by the overall misclassification rate on an independently generated validation set. The classification method is presently used as a routine petrographical analysis method at Norsk Hydro Research…
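
    The "standard quadratic classifier" step can be sketched as per-class Gaussian discriminants with class-specific covariances, scored by the misclassification rate on an independent validation set as the abstract describes. The data below are synthetic stand-ins for EDS spectra:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for EDS spectra: two mineral classes, 3 spectral bands.
X0 = rng.normal([1.0, 0.0, 0.0], 0.3, size=(50, 3))
X1 = rng.normal([0.0, 1.0, 0.5], 0.3, size=(50, 3))

# Quadratic classifier: one Gaussian per class with its own covariance.
def fit(X):
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    return mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

def score(x, params):
    mu, icov, logdet = params
    d = x - mu
    return -0.5 * (d @ icov @ d + logdet)  # log-likelihood up to a constant

p0, p1 = fit(X0), fit(X1)

def predict(x):
    return 0 if score(x, p0) > score(x, p1) else 1

# Overall misclassification rate on an independently generated validation set.
V0 = rng.normal([1.0, 0.0, 0.0], 0.3, size=(50, 3))
V1 = rng.normal([0.0, 1.0, 0.5], 0.3, size=(50, 3))
errors = sum(predict(x) != 0 for x in V0) + sum(predict(x) != 1 for x in V1)
error_rate = errors / 100
```

    Because each class keeps its own covariance, the decision boundary is quadratic rather than linear, which suits mineral classes with different spectral scatter.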

  4. Sensitivity study of a semiautomatic supervised classifier applied to minerals from x-ray mapping images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Flesche, Harald

    2000-01-01

    … spectroscopy (EDS) in a scanning electron microscope (SEM). Extensions to traditional multivariate statistical methods are applied to perform the classification. Training sets are grown from one or a few seed points by a method that ensures spatial and spectral closeness of observations. Spectral closeness … to a small area in order to allow for the estimation of a variance-covariance matrix. This expansion is controlled by upper limits for the spatial and Euclidean spectral distances from the seed point. Second, after this initial expansion the growing of the training set is controlled by an upper limit … training, a standard quadratic classifier is applied. The performance for each parameter setting is measured by the overall misclassification rate on an independently generated validation set. The classification method is presently used as a routine petrographical analysis method at Norsk Hydro Research…

  5. Longitudinal Genetic Analysis of Anxiety Sensitivity

    Science.gov (United States)

    Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.

    2012-01-01

    Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…

  6. SENSITIVE ERROR ANALYSIS OF CHAOS SYNCHRONIZATION

    Institute of Scientific and Technical Information of China (English)

    HUANG XIAN-GAO; XU JIAN-XUE; HUANG WEI; LÜ ZE-JUN

    2001-01-01

    We study the sensitive errors of chaos synchronization that arise when other signals are added to the synchronizing signal. Based on the model of Henon-map masking, we examine the cause of these sensitive errors. The modulation ratio and the mean square error are defined to quantify the synchronizing sensitive errors. Numerical simulation results of the synchronizing sensitive errors are given for masking direct-current, sinusoidal, and speech signals separately. Finally, we give the mean square error curves of chaos synchronizing sensitivity and three-dimensional phase plots of the drive system and the response system for masking the three kinds of signals.

  7. Optimizing human activity patterns using global sensitivity analysis.

    Science.gov (United States)

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
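
    The SampEn statistic used above to quantify schedule regularity admits a compact implementation. The following is a minimal illustrative sketch of the standard SampEn(m, r) definition, not the DASim code:

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates
    within Chebyshev tolerance r and A counts length-(m+1) pairs.
    Self-matches are excluded; lower values indicate a more regular series."""
    n = len(series)

    def count_matches(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # undefined: too few template matches
    return -math.log(a / b)

periodic = [0.0, 1.0] * 50           # a highly regular "schedule"
x, irregular = 0.1, []
for _ in range(100):                 # chaotic logistic map as an irregular series
    x = 4 * x * (1 - x)
    irregular.append(x)
# sample_entropy(periodic) is near zero; the chaotic series scores much higher
```

    Tuning an activity's regularity, as the abstract describes, amounts to adjusting model parameters until SampEn of the generated schedule reaches a target value.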

  8. Sensitivity analysis of retrovirus HTLV-1 transactivation.

    Science.gov (United States)

    Corradin, Alberto; Di Camillo, Barbara; Ciminale, Vincenzo; Toffolo, Gianna; Cobelli, Claudio

    2011-02-01

    Human T-cell leukemia virus type 1 is a human retrovirus endemic in many areas of the world. Although many studies indicated a key role of the viral protein Tax in the control of viral transcription, the mechanisms controlling HTLV-1 expression and its persistence in vivo are still poorly understood. To assess Tax effects on viral kinetics, we developed a HTLV-1 model. Two parameters that capture both its deterministic and stochastic behavior were quantified: Tax signal-to-noise ratio (SNR), which measures the effect of stochastic phenomena on Tax expression as the ratio between the protein steady-state level and the variance of the noise causing fluctuations around this value; t(1/2), a parameter representative of the duration of Tax transient expression pulses, that is, of Tax bursts due to stochastic phenomena. Sensitivity analysis indicates that the major determinant of Tax SNR is the transactivation constant, the system parameter weighting the enhancement of retrovirus transcription due to transactivation. In contrast, t(1/2) is strongly influenced by the degradation rate of the mRNA. In addition to shedding light into the mechanism of Tax transactivation, the obtained results are of potential interest for novel drug development strategies since the two parameters most affecting Tax transactivation can be experimentally tuned, e.g. by perturbing protein phosphorylation and by RNA interference.

  9. Supercritical extraction of oleaginous: parametric sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Santos M.M.

    2000-01-01

    The economy has become global and competitive; thus the vegetable oil extraction industries must advance toward minimising production costs while generating products that meet more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as solvent. However, this solvent is toxic and highly flammable, so the search for substitutes for hexane in oleaginous extraction processes has intensified in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are needed to understand the phenomena taking place in such a process. In this work, a diffusive model for a semi-continuous (batch for the solids, continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a two-level factorial design. The model parameters were perturbed and their main effects analysed, so that strategies for high-performance operation can be proposed.

  10. Further resolution enhancement of high-sensitivity laser scanning photothermal microscopy applied to mouse endogenous

    Science.gov (United States)

    Nakata, Kazuaki; Tsurui, Hiromichi; Kobayashi, Takayoshi

    2016-12-01

    Photothermal microscopy has intrinsic super-resolution capability due to the bilinear dependence of the signal intensity on pump and probe. In the present paper, we further improve the resolution of high-sensitivity laser scanning photothermal microscopy by applying non-linear detection. The new method has the following advantages: (1) super resolution, with 61% and 42% enhancement over the diffraction-limit values of the probe and pump wavelengths, respectively, by a second-order non-linear scheme; (2) a compact light source using inexpensive conventional diode lasers; (3) wide applicability to nonfluorescent materials such as gold nanoparticles (GNPs) and hematoxylin-eosin stained biological samples; (4) relative robustness to optical damage; and (5) a high frame rate using a Galvano mirror. The maximum resolution is determined from the PT signal of GNPs to be 160 nm in the second-order non-linear detection mode and 270 nm in the linear detection mode. The pixel dwell time and frame time for a 300 × 300 pixel image are 50 μs and 4.5 s, respectively, much shorter than the 1 ms and 100 s required by the piezo-driven stage system.

  11. Characteristics of thermally reduced graphene oxide and applied for dye-sensitized solar cell counter electrode

    Energy Technology Data Exchange (ETDEWEB)

    Ho, Ching-Yuan, E-mail: cyho@cycu.edu.tw [Department of Mechanical Engineering, Chung Yuan Christian University, Chung-Li, Taiwan (China); Department of Chemistry, Center for Nanotechnology and Institute of Biomedical Technology, Chung Yuan Christian University, Chung-Li, Taiwan (China); Wang, Hong-Wen [Department of Chemistry, Center for Nanotechnology and Institute of Biomedical Technology, Chung Yuan Christian University, Chung-Li, Taiwan (China); Department of Chemistry, Chung Yuan Christian University, Chung-Li, Taiwan (China)

    2015-12-01

    Graphical abstract: Experimental process: (1) graphite oxidized to graphene oxide; (2) thermal reduction from graphene oxide to graphene; (3) applying to DSSC counter electrode. - Highlights: • Intercalated defects were eliminated by increasing reduction temperature of GO. • High reduction temperature of tGP has lower resistance, high the electron lifetime. • Higher thermal reduction of GO proposes electrocatalytic properties. • DSSC using tGP{sub 250} as counter electrode has energy conversion efficiency of 3.4%. - Abstract: Graphene oxide (GO) was synthesized from a flake-type of graphite powder, which was then reduced to a few layers of graphene sheets using the thermal reduction method. The surface morphology, phase crystallization, and defect states of the reduced graphene were determined from an electron microscope equipped with an energy dispersion spectrometer, X-ray diffraction, Raman spectroscopy, and infrared spectra. After graphene formation, the intercalated defects that existed in the GO were removed, and it became crystalline by observing impurity changes and d-spacing. Dye-sensitized solar cells, using reduced graphene as the counter electrode, were fabricated to evaluate the electrolyte activity and charge transport performance. The electrochemical impedance spectra showed that increasing the thermal reduction temperature could achieve faster electron transport and longer electron lifetime, and result in an energy conversion efficiency of approximately 3.4%. Compared to the Pt counter electrode, the low cost of the thermal reduction method suggests that graphene will enjoy a wide range of potential applications in the field of electronic devices.

  12. Synthesis and photovoltaic properties of octacarboxy-metallophthalocyanine dyes applied in dye-sensitized solar cells

    Directory of Open Access Journals (Sweden)

    Jin Ling

    2012-01-01

    A series of octacarboxy-metallophthalocyanine dyes, i.e., MgOCPc, MnOCPc, FeOCPc and ZnOCPc with different central metal ions, were designed and synthesized by microwave irradiation. The effects of introducing different metal ions with variant 3d orbitals (3d0, 3d5, 3d6, and 3d10, respectively) in the centre of the phthalocyanine rings on the thermal, photophysical, and electrochemical properties of the octacarboxy-metallophthalocyanines were characterized and evaluated in detail. The results showed that ZnOCPc and MgOCPc, with closed-shell metal ions, and FeOCPc, with an open-shell metal ion, had excellent thermal properties. However, MnOCPc, with a half-filled-shell metal ion, had the lowest decomposition temperature and the largest Q-band red shifts. By theoretical calculation, the energy gaps of MgOCPc, MnOCPc, FeOCPc and ZnOCPc were 0.11, 0.10, 0.20 and 0.22 V, respectively. Applied in TiO2 nanocrystalline dye-sensitized solar cells (DSSC), the photovoltaic properties of the four dyes were obtained under AM1.5 irradiation (100 mW cm-2).

  13. An addendum on sensitivity analysis of the optimal assignment

    NARCIS (Netherlands)

    Volgenant, A.

    2006-01-01

    We point out that sensitivity results for the linear assignment problem can be produced by a shortest-path-based approach in a straightforward manner, as efficiently as finding an optimal solution. Keywords: Assignment; Sensitivity analysis

  14. Quantitative Analysis of the Interdisciplinarity of Applied Mathematics.

    Science.gov (United States)

    Xie, Zheng; Duan, Xiaojun; Ouyang, Zhenzheng; Zhang, Pengyuan

    2015-01-01

    The increasing use of mathematical techniques in scientific research leads to the interdisciplinarity of applied mathematics. This viewpoint is validated quantitatively here by statistical and network analysis on the corpus PNAS 1999-2013. A network describing the interdisciplinary relationships between disciplines in a panoramic view is built based on the corpus. Specific network indicators show the hub role of applied mathematics in interdisciplinary research. The statistical analysis on the corpus content finds that algorithms, a primary topic of applied mathematics, positively correlates, increasingly co-occurs, and has an equilibrium relationship in the long-run with certain typical research paradigms and methodologies. The finding can be understood as an intrinsic cause of the interdisciplinarity of applied mathematics.

  15. Applied data analysis and modeling for energy engineers and scientists

    CERN Document Server

    Reddy, T Agami

    2011-01-01

    "Applied Data Analysis and Modeling for Energy Engineers and Scientists" discusses mathematical models, data analysis, and decision analysis in modeling. The approach taken in this volume focuses on the modeling and analysis of thermal systems in an engineering environment, while also covering a number of other critical areas. Other material covered includes the tools that researchers and engineering professionals will need in order to explore different analysis methods, use critical assessment skills and reach sound engineering conclusions. The book also covers process and system design and

  16. Negative reinforcement in applied behavior analysis: an emerging technology.

    OpenAIRE

    Iwata, B A

    1987-01-01

    Although the effects of negative reinforcement on human behavior have been studied for a number of years, a comprehensive body of applied research does not exist at this time. This article describes three aspects of negative reinforcement as it relates to applied behavior analysis: behavior acquired or maintained through negative reinforcement, the treatment of negatively reinforced behavior, and negative reinforcement as therapy. A consideration of research currently being done in these area...

  17. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    Science.gov (United States)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and a literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (the slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisco) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the four PFT-dependent parameters above. Further analyses, conditioning the results on different seasons and years, are being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical and atmospheric processes.

  18. Application of sensitivity analysis in building energy simulations: combining first and second order elementary effects Methods

    CERN Document Server

    Sanchez, David Garcia; Musy, Marjorie; Bourges, Bernard

    2012-01-01

    Sensitivity analysis plays an important role in the understanding of complex models. It helps to identify the influence of input parameters in relation to the outputs. It can also be a tool for understanding the behavior of a model, and can therefore help during its development stage. This study aims to analyze and illustrate the potential usefulness of combining first- and second-order sensitivity analysis, applied to a building energy model (ESP-r). Through the example of a collective building, a sensitivity analysis is performed using the method of elementary effects (also known as the Morris method), including an analysis of interactions between the input parameters (second-order analysis). The importance of higher-order analysis in supporting the results of the first-order analysis is highlighted, especially for such a complex model. Several aspects are tackled to implement the multi-order sensitivity analysis efficiently: the interval size of the variables, the management of non-linearity, and the usefulness of various outputs.
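
    The elementary-effects (Morris) screening described above can be sketched generically: each trajectory perturbs one input at a time and records the resulting change in the output, and the mean absolute effect (mu*) ranks influence while its spread (sigma) flags non-linearity and interactions. The toy model g below is our own illustration, not the ESP-r building model:

```python
import random

def morris_screening(f, k, n_traj=20, delta=0.25, seed=1):
    """One-at-a-time elementary-effects (Morris) screening on [0, 1]^k.
    Returns (mu_star, sigma): the mean absolute elementary effect and its
    standard deviation for each input factor."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(n_traj):
        x = [rng.random() * (1 - delta) for _ in range(k)]  # keep x + delta in [0, 1]
        base = f(x)
        for i in rng.sample(range(k), k):  # perturb factors one at a time, random order
            x_new = list(x)
            x_new[i] += delta
            y = f(x_new)
            effects[i].append((y - base) / delta)
            x, base = x_new, y
    mu_star = [sum(abs(e) for e in es) / len(es) for es in effects]
    sigma = []
    for es in effects:
        mean = sum(es) / len(es)
        sigma.append((sum((e - mean) ** 2 for e in es) / len(es)) ** 0.5)
    return mu_star, sigma

# toy model: strong main effect in x0, an x0*x1 interaction, weak x2
g = lambda x: 5 * x[0] + 2 * x[0] * x[1] + 0.1 * x[2]
mu_star, sigma = morris_screening(g, k=3)
# mu_star ranks x0 first and x2 last; sigma flags the x0-x1 interaction
```

    A second-order extension, as in the study, additionally perturbs pairs of factors to estimate interaction effects directly.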

  19. Spectral analysis and filter theory in applied geophysics

    CERN Document Server

    Buttkus, Burkhard

    2000-01-01

    This book is intended to be an introduction to the fundamentals and methods of spectral analysis and filter theory and their applications in geophysics. The principles and theoretical basis of the various methods are described, their efficiency and effectiveness evaluated, and instructions provided for their practical application. Besides the conventional methods, newer methods are discussed, such as the spectral analysis of random processes by fitting models to the observed data, maximum-entropy spectral analysis and maximum-likelihood spectral analysis, the Wiener and Kalman filtering methods, homomorphic deconvolution, and adaptive methods for nonstationary processes. Multidimensional spectral analysis and filtering, as well as multichannel filters, are given extensive treatment. The book provides a survey of the state-of-the-art of spectral analysis and filter theory. The importance and possibilities of spectral analysis and filter theory in geophysics for data acquisition, processing an...

  20. Applying Frequency Map Analysis to the Australian Synchrotron Storage Ring

    CERN Document Server

    Tan, Yaw-Ren E; Le Blanc, Gregory Scott

    2005-01-01

    The technique of frequency map analysis has been applied to study the transverse dynamic aperture of the Australian Synchrotron Storage Ring. The results have been used to set the strengths of sextupoles to optimise the dynamic aperture. The effects of the allowed harmonics in the quadrupoles and dipole edge effects are discussed.

  1. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    The paper presents a sensitivity analysis of two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetational evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences for large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows a quite good capability of the model in representing the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.
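
    The finding that the growth rate matters more than the initial population can be illustrated with a toy logistic growth model. This sketch is an assumption for illustration only; the paper's actual model couples vegetation growth to a 1D river-morphology description:

```python
def logistic_density(r, p0, t_end=10.0, k_cap=1.0, dt=0.01):
    """Euler integration of logistic growth dP/dt = r * P * (1 - P / k_cap)."""
    p = p0
    for _ in range(int(t_end / dt)):
        p += r * p * (1 - p / k_cap) * dt
    return p

def normalized_sensitivity(f, x, h=1e-6):
    """Relative change in f per relative change in x (forward difference)."""
    return (f(x * (1 + h)) - f(x)) / (h * f(x))

r0, p0 = 0.5, 0.01   # growth rate and initial population (illustrative values)
sens_growth = normalized_sensitivity(lambda r: logistic_density(r, p0), r0)
sens_initial = normalized_sensitivity(lambda p: logistic_density(r0, p), p0)
# mid-transient, the computed density responds far more to the growth rate
```

    In this toy setting the mid-transient density is several times more sensitive to the growth rate than to the initial population, mirroring the qualitative conclusion of the abstract.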

  2. A discourse on sensitivity analysis for discretely-modeled structures

    Science.gov (United States)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely modeled systems. The methods are generally, but not exclusively, aimed at finite-element-modeled structures. Topics include: selection of finite difference step sizes; special considerations for finite difference sensitivity of iteratively solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
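
    On the finite-difference topics listed above, the choice of scheme matters as much as the step size: a forward difference has O(h) truncation error while a central difference has O(h^2), at the cost of one extra function evaluation. A quick illustrative check (not taken from the review):

```python
import math

def forward_diff(f, x, h):
    # first-order accurate: truncation error ~ O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # second-order accurate: truncation error ~ O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)          # d/dx sin(x) at x = 1
err_forward = abs(forward_diff(math.sin, 1.0, 1e-3) - exact)
err_central = abs(central_diff(math.sin, 1.0, 1e-3) - exact)
# err_central is several orders of magnitude smaller than err_forward
```

    In structural sensitivity work the same trade-off applies, with the added caveat that too small a step amplifies round-off in the underlying response solution.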

  3. Applying model simulation and photochemical indicators to evaluate ozone sensitivity in southern Taiwan

    Institute of Scientific and Technical Information of China (English)

    Yen-Ping Peng; Kang-Shin Chen; Hsin-Kai Wang; Chia-Hsiang Lai; Ming-Hsun Lin; Cheng-Haw Lee

    2011-01-01

    Ozone sensitivity was investigated using CAMx simulations and photochemical indicator ratios at three sites (Pingtung City, Chao-Chou Town, and Kenting Town) in Pingtung County in southern Taiwan during 2003 and 2004. The CAMx simulations compared fairly well with the hourly concentrations of ozone. Simulation results also showed that Pingtung City was mainly a volatile organic compound (VOC)-sensitive regime, while Chao-Chou Town was either a VOC-sensitive or a NOx-sensitive regime, depending on the season. Measurements of three photochemical indicators (H2O2, HNO3, and NOy) were conducted, and simulated transition ranges of H2O2/HNO3 (0.5-0.8), O3/HNO3 (10.3-16.2) and O3/NOy (5.7-10.8) were adopted to assess the ozone-sensitive regime at the three sites. The results indicated that the three transition ranges yield results consistent with the CAMx simulations at most times at Pingtung City. However, both VOC-sensitive and NOx-sensitive regimes were important at the rural site Chao-Chou Town. Kenting Town, a tourist site at the southern end of Taiwan, was dominated by a NOx-sensitive regime in all four seasons.
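
    The indicator-ratio logic can be written as a small classifier. The default thresholds below are the H2O2/HNO3 transition range quoted in the abstract (0.5-0.8); the function itself is our illustrative sketch, not code from the study:

```python
def ozone_regime(h2o2_to_hno3, low=0.5, high=0.8):
    """Classify the local ozone production regime from the H2O2/HNO3
    indicator ratio; low ratios indicate VOC-sensitive chemistry and
    high ratios indicate NOx-sensitive chemistry."""
    if h2o2_to_hno3 < low:
        return "VOC-sensitive"
    if h2o2_to_hno3 > high:
        return "NOx-sensitive"
    return "transitional"
```

    The same structure applies to the other two indicators (O3/HNO3 and O3/NOy), each with its own transition range.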

  4. Design sensitivity analysis of dynamic responses for a BLDC motor with mechanical and electromagnetic interactions

    Science.gov (United States)

    Im, Hyungbin; Bae, Dae Sung; Chung, Jintai

    2012-04-01

    This paper presents a design sensitivity analysis of dynamic responses of a BLDC motor with mechanical and electromagnetic interactions. Based on the equations of motion which consider mechanical and electromagnetic interactions of the motor, the sensitivity equations for the dynamic responses were derived by applying the direct differential method. From the sensitivity equation along with the equations of motion, the time responses for the sensitivity analysis were obtained by using the Newmark time integration method. The sensitivities of the motor performances such as the electromagnetic torque, rotating speed, and vibration level were analyzed for the six design parameters of rotor mass, shaft/bearing stiffness, rotor eccentricity, winding resistance, coil turn number, and residual magnetic flux density. Furthermore, to achieve a higher torque, higher speed, and lower vibration level, a new BLDC motor was designed by applying the multi-objective function method. It was found that all three performances are sensitive to the design parameters in the order of the coil turn number, magnetic flux density, rotor mass, winding resistance, rotor eccentricity, and stiffness. It was also found that the torque and vibration level are more sensitive to the parameters than the rotating speed. Finally, by applying the sensitivity analysis results, a new optimized design of the motor resulted in better performances. The newly designed motor showed an improved torque, rotating speed, and vibration level.
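
    The direct differential method can be illustrated on a much simpler system than the BLDC motor: differentiating the equation of motion of a single damped oscillator with respect to its stiffness k yields a companion ODE for the sensitivity s = dx/dk, which is integrated alongside the response. The sketch below uses semi-implicit Euler rather than the Newmark scheme of the paper, and all parameter values are illustrative:

```python
def simulate(k, m=1.0, c=0.2, f=1.0, dt=1e-3, steps=5000):
    """Integrate m*x'' + c*x' + k*x = f and, alongside it, the sensitivity
    s = dx/dk obtained by differentiating the equation of motion directly:
    m*s'' + c*s' + k*s = -x (semi-implicit Euler)."""
    x = v = s = w = 0.0          # response x, velocity v, sensitivity s, s' = w
    for _ in range(steps):
        a_x = (f - c * v - k * x) / m     # acceleration of the response
        a_s = (-x - c * w - k * s) / m    # "acceleration" of the sensitivity
        v += a_x * dt
        x += v * dt
        w += a_s * dt
        s += w * dt
    return x, s

x_base, s_direct = simulate(k=4.0)
x_pert, _ = simulate(k=4.0 + 1e-4)
s_fd = (x_pert - x_base) / 1e-4   # finite-difference check of dx/dk
# s_direct agrees with s_fd to within the finite-difference truncation error
```

    Because the sensitivity equation is the exact derivative of the discrete update, the direct result matches the finite-difference estimate without requiring a step-size study.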

  5. Space Shuttle Orbiter entry guidance and control system sensitivity analysis

    Science.gov (United States)

    Stone, H. W.; Powell, R. W.

    1976-01-01

    An approach has been developed to determine the guidance and control system sensitivity to off-nominal aerodynamics for the Space Shuttle Orbiter during entry. This approach, which uses a nonlinear six-degree-of-freedom interactive, digital simulation, has been applied to both the longitudinal and lateral-directional axes for a portion of the orbiter entry. Boundary values for each of the aerodynamic parameters have been identified, the key parameters have been determined, and system modifications that will increase system tolerance to off-nominal aerodynamics have been recommended. The simulations were judged by specified criteria and the performance was evaluated by use of key dependent variables. The analysis is now being expanded to include the latest shuttle guidance and control systems throughout the entry speed range.

  6. APPLYING OF GAS ANALYSIS IN DIAGNOSIS OF BRONCHOPULMONARY DISEASES

    Directory of Open Access Journals (Sweden)

    Ye. B. Bukreyeva

    2014-01-01

    Bronchopulmonary diseases are among the leading causes of death. Most methods for diagnosing lung disease are invasive or unsuitable for children and patients with severe disease. One promising method for clinical diagnosis and monitoring of disease activity in the bronchopulmonary system is analysis of human breath, using either directly exhaled breath or exhaled breath condensate. Breath analysis can be applied to diagnosis, long-term monitoring, and evaluation of treatment efficacy in bronchopulmonary diseases. Differential diagnosis between chronic obstructive pulmonary disease (COPD) and bronchial asthma is complicated because the two differ in pathogenesis. Breath analysis makes it possible to explore features of COPD and bronchial asthma and to improve the differential diagnosis of these diseases. It can also be applied to the diagnosis of dangerous diseases such as tuberculosis and lung cancer. The analysis of exhaled air by spectroscopic methods is a new noninvasive way to diagnose bronchopulmonary diseases.

  7. Research in applied mathematics, numerical analysis, and computer science

    Science.gov (United States)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  8. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    Science.gov (United States)

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

  9. The Split-Apply-Combine Strategy for Data Analysis

    Directory of Open Access Journals (Sweden)

    Hadley Wickham

    2011-04-01

    Many data analysis problems involve the application of a split-apply-combine strategy, where you break up a big problem into manageable pieces, operate on each piece independently, and then put all the pieces back together. This insight gives rise to a new R package that allows you to smoothly apply this strategy, without having to worry about the type of structure in which your data is stored. The paper includes two case studies showing how these insights make it easier to work with batting records for veteran baseball players and a large 3d array of spatio-temporal ozone measurements.
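
    The strategy is language-agnostic; the paper's package (plyr) is for R, but the same pattern can be sketched in Python. The function and data below are our own illustration, not code from the paper:

```python
from collections import defaultdict

def split_apply_combine(records, key, summarize):
    """Split records into groups by `key`, apply `summarize` to each group,
    and combine the per-group results into a single dict."""
    groups = defaultdict(list)
    for rec in records:                                   # split
        groups[rec[key]].append(rec)
    return {k: summarize(g) for k, g in groups.items()}   # apply + combine

batting = [
    {"player": "ruth", "hits": 3},
    {"player": "ruth", "hits": 1},
    {"player": "aaron", "hits": 2},
]
totals = split_apply_combine(batting, "player", lambda g: sum(r["hits"] for r in g))
# totals == {"ruth": 4, "aaron": 2}
```

    The value of the strategy is that the split and combine steps are written once, so only the per-group summary function changes between analyses.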

  10. Paratingent Derivative Applied to the Measure of the Sensitivity in Multiobjective Differential Programming

    Directory of Open Access Journals (Sweden)

    F. García

    2013-01-01

    We analyse the sensitivity of differential multiobjective programs whose objective and constraint maps have images lying in ordered Banach spaces. Following previous works on multiobjective programming, an approximate-optimality notion of solution is used. The behaviour of certain non-singleton sets of such optimal solutions under changes of the problem parameter is analysed. The main result of the work states that the sensitivity of the program is measured by a Lagrange multiplier plus a projection of its derivative; this sensitivity is measured by means of the paratingent derivative.

  11. Implementation of efficient sensitivity analysis for optimization of large structures

    Science.gov (United States)

    Umaretiya, J. R.; Kamil, H.

    1990-01-01

    The paper presents the theoretical bases and implementation techniques of sensitivity analyses for efficient structural optimization of large structures, based on finite element static and dynamic analysis methods. The sensitivity analyses have been implemented in conjunction with two methods for optimization, namely, the Mathematical Programming and Optimality Criteria methods. The paper discusses the implementation of the sensitivity analysis method into our in-house software package, AutoDesign.

  12. Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.

    Science.gov (United States)

    Kiparissides, A; Hatzimanikatis, V

    2017-01-01

    The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight in the regulatory functions and how to manipulate them. Constraint based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements.

  13. Parametric sensitivity analysis of a test cell thermal model using spectral analysis

    CERN Document Server

    Mara, Thierry Alex; Garde, François

    2012-01-01

    The paper deals with an empirical validation of a building thermal model. We put the emphasis on sensitivity analysis and on the search for input/residual correlations to improve our model. In this article, we apply a sensitivity analysis technique in the frequency domain to point out the most important parameters of the model. Then, we compare measured and predicted indoor dry-air temperatures. When the model is not accurate enough, time-frequency analysis is of great help in identifying the inputs responsible for the major part of the error. Our approach requires two samples of experimental data: the first is used to calibrate the model, the second to validate the optimized model.

  14. Sensitivity Analysis of Situational Awareness Measures

    Science.gov (United States)

    Shively, R. J.; Davison, H. J.; Burdick, M. D.; Rutkowski, Michael (Technical Monitor)

    2000-01-01

    A great deal of effort has been invested in attempts to define situational awareness, and subsequently to measure this construct. However, relatively little work has focused on the sensitivity of these measures to manipulations that affect the SA of the pilot. This investigation was designed to manipulate SA and examine the sensitivity of commonly used measures of SA. In this experiment, we tested the most commonly accepted measures of SA: SAGAT, objective performance measures, and SART, against different levels of SA manipulation to determine the sensitivity of such measures in the rotorcraft flight environment. SAGAT is a measure in which the simulation blanks in the middle of a trial and the pilot is asked specific, situation-relevant questions about the state of the aircraft or the objective of a particular maneuver. In this experiment, after the pilot responded verbally to several questions, the trial continued from the point frozen. SART is a post-trial questionnaire that asked for subjective SA ratings from the pilot at certain points in the previous flight. The objective performance measures included: contacts with hazards (power lines and towers) that impeded the flight path, lateral and vertical anticipation of these hazards, response time to detection of other air traffic, and response time until an aberrant fuel gauge was detected. An SA manipulation of the flight environment was chosen that undisputedly affects a pilot's SA: visibility. Four variations of weather conditions (clear, light rain, haze, and fog) resulted in a different level of visibility for each trial. Pilot SA was measured by either SAGAT or the objective performance measures within each level of visibility. This enabled us to determine sensitivity not only within a measure, but also between measures. The SART questionnaire and the NASA-TLX, a measure of workload, were distributed after every trial. Using the newly developed rotorcraft part-task laboratory (RPTL) at NASA Ames

  15. Boston Society's 11th Annual Applied Pharmaceutical Analysis conference.

    Science.gov (United States)

    Lee, Violet; Liu, Ang; Groeber, Elizabeth; Moghaddam, Mehran; Schiller, James; Tweed, Joseph A; Walker, Gregory S

    2016-02-01

    Boston Society's 11th Annual Applied Pharmaceutical Analysis conference, Hyatt Regency Hotel, Cambridge, MA, USA, 14-16 September 2015. The Boston Society's 11th Annual Applied Pharmaceutical Analysis (APA) conference took place at the Hyatt Regency hotel in Cambridge, MA, on 14-16 September 2015. The 3-day conference affords pharmaceutical professionals, academic researchers and industry regulators the opportunity to collectively participate in meaningful and relevant discussions impacting the areas of pharmaceutical drug development. The APA conference was organized in three workshops encompassing the disciplines of regulated bioanalysis, discovery bioanalysis (encompassing new and emerging technologies) and biotransformation. The conference included a short course titled 'Bioanalytical considerations for the clinical development of antibody-drug conjugates (ADCs)', an engaging poster session, several panel and round table discussions and over 50 diverse talks from leading industry and academic scientists.

  16. Sensitivity analysis of soil parameters based on interval

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Interval analysis is a new uncertainty analysis method for engineering structures. In this paper, a new sensitivity analysis method is presented by introducing interval analysis, which expands the applications of the interval analysis method. The interval analysis process for the sensitivity factor matrix of soil parameters is given, along with a method for parameter intervals and decision-making target intervals. Secondary developments are carried out in the Marc FEM code for the Duncan-Chang nonlinear elastic model, and data transfer between FORTRAN and Marc is implemented. Practical examples validate the rationality and feasibility of the method, and comparisons are made with published results.
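The one-at-a-time interval propagation behind a sensitivity factor matrix can be sketched as follows. The settlement-style model and the parameter intervals are hypothetical, chosen only to show how the width of the induced response interval ranks soil parameters.

```python
# Interval-based sensitivity sketch: sweep one soil parameter over its
# interval while holding the others at their midpoints; the width of the
# resulting response interval serves as a crude sensitivity factor.
# The toy response (settlement ~ load / stiffness) is an assumption.

def response(E, nu, q):
    return q * (1.0 - nu ** 2) / E   # hypothetical settlement model

params = {"E": (20.0, 40.0), "nu": (0.25, 0.35), "q": (90.0, 110.0)}

def sensitivity_factors(model, intervals, samples=201):
    mids = {k: (lo + hi) / 2.0 for k, (lo, hi) in intervals.items()}
    factors = {}
    for name, (lo, hi) in intervals.items():
        vals = []
        for i in range(samples):
            x = dict(mids)                            # others at midpoints
            x[name] = lo + (hi - lo) * i / (samples - 1)
            vals.append(model(**x))
        factors[name] = max(vals) - min(vals)         # response interval width
    return factors

print(sensitivity_factors(response, params))
```

For this monotone toy model the endpoints alone would suffice; the dense sweep is kept so the sketch also covers non-monotone responses.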

  17. Towards More Efficient and Effective Global Sensitivity Analysis

    Science.gov (United States)

    Razavi, Saman; Gupta, Hoshin

    2014-05-01

    Sensitivity analysis (SA) is an important paradigm in the context of model development and application. There are a variety of approaches towards sensitivity analysis that formally describe different "intuitive" understandings of the sensitivity of a single or multiple model responses to different factors such as model parameters or forcings. These approaches are based on different philosophies and theoretical definitions of sensitivity and range from simple local derivatives to rigorous Sobol-type analysis-of-variance approaches. In general, different SA methods focus and identify different properties of the model response and may lead to different, sometimes even conflicting conclusions about the underlying sensitivities. This presentation revisits the theoretical basis for sensitivity analysis, critically evaluates the existing approaches in the literature, and demonstrates their shortcomings through simple examples. Important properties of response surfaces that are associated with the understanding and interpretation of sensitivities are outlined. A new approach towards global sensitivity analysis is developed that attempts to encompass the important, sensitivity-related properties of response surfaces. Preliminary results show that the new approach is superior to the standard approaches in the literature in terms of effectiveness and efficiency.
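The Sobol-type, variance-based analysis discussed in this abstract can be sketched with the pick-freeze Monte Carlo estimator of first-order indices. The linear test model and its coefficients are assumptions for illustration; for that model with independent standard-normal inputs the analytic indices are a_i^2 / sum_j a_j^2.

```python
# Sobol first-order indices via "pick-freeze": S_i = Cov(Y, Y_i) / Var(Y),
# where Y_i keeps input i from the first sample and resamples the rest.
import random

def model(x, a=(4.0, 2.0, 1.0)):
    return sum(ai * xi for ai, xi in zip(a, x))

def first_order_indices(f, dim, n=20000, seed=1):
    rng = random.Random(seed)
    xs  = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    xs2 = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    y = [f(x) for x in xs]
    mean_y = sum(y) / n
    var_y = sum((v - mean_y) ** 2 for v in y) / n
    indices = []
    for i in range(dim):
        # freeze coordinate i, resample all others
        y_i = [f([x[j] if j == i else x2[j] for j in range(dim)])
               for x, x2 in zip(xs, xs2)]
        cov = sum((u - mean_y) * (v - mean_y) for u, v in zip(y, y_i)) / n
        indices.append(cov / var_y)
    return indices

S = first_order_indices(model, 3)
# Analytic values for this model: 16/21, 4/21, 1/21
```

Each additional factor costs one extra batch of model runs, which is exactly the efficiency bottleneck the abstract's new approach targets.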

  18. Finite Element Analysis Applied to Dentoalveolar Trauma: Methodology Description

    OpenAIRE

    2011-01-01

    Dentoalveolar traumatic injuries are among the clinical conditions most frequently treated in dental practice. However, few studies so far have addressed the biomechanical aspects of these events, probably as a result of difficulties in carrying out satisfactory experimental and clinical studies as well as the unavailability of truly scientific methodologies. The aim of this paper was to describe the use of finite element analysis applied to the biomechanical evaluation of dentoalveolar traum...

  19. An applied ethics analysis of best practice tourism entrepreneurs

    OpenAIRE

    2015-01-01

    Ethical entrepreneurship and, by extension, wider best practice are noble goals for the future of tourism. However, questions arise as to which concepts, such as values, motivations, actions and challenges, underpin these goals. This thesis seeks to answer these questions and in so doing develop an applied ethics analysis for best practice entrepreneurs in tourism. The research is situated in sustainable tourism, which is ethically very complex and has thus far been dominated by the economic, social a...

  20. Nonstandard Analysis Applied to Advanced Undergraduate Mathematics - Infinitesimal Modeling

    OpenAIRE

    Herrmann, Robert A.

    2003-01-01

    This is a Research and Instructional Development Project from the U. S. Naval Academy. In this monograph, the basic methods of nonstandard analysis for n-dimensional Euclidean spaces are presented. Specific rules are developed, and these methods and rules are applied to rigorous integral and differential modeling. The topics include Robinson infinitesimals, limited and infinite numbers; convergence theory, continuity, *-transfer, internal definition, hyperfinite summation, Riemann-Stieltjes int...

  1. Recent reinforcement-schedule research and applied behavior analysis

    OpenAIRE

    Lattal, Kennon A; Neef, Nancy A

    1996-01-01

    Reinforcement schedules are considered in relation to applied behavior analysis by examining several recent laboratory experiments with humans and other animals. The experiments are drawn from three areas of contemporary schedule research: behavioral history effects on schedule performance, the role of instructions in schedule performance of humans, and dynamic schedules of reinforcement. All of the experiments are discussed in relation to the role of behavioral history in current schedule pe...

  2. Magnetic Solid Phase Extraction Applied to Food Analysis

    Directory of Open Access Journals (Sweden)

    Israel S. Ibarra

    2015-01-01

    Magnetic solid phase extraction has been used as a pretreatment technique for the analysis of several compounds because of its advantages when compared with classic methods. This methodology is based on the use of magnetic solids as adsorbents for the preconcentration of different analytes from complex matrices. Magnetic solid phase extraction minimizes the use of additional steps such as precipitation, centrifugation, and filtration, which decreases the manipulation of the sample. In this review, we describe the main procedures used for the synthesis, characterization, and application of this pretreatment technique in food analysis.

  3. Harmonic and applied analysis from groups to signals

    CERN Document Server

    Mari, Filippo; Grohs, Philipp; Labate, Demetrio

    2015-01-01

    This contributed volume explores the connection between the theoretical aspects of harmonic analysis and the construction of advanced multiscale representations that have emerged in signal and image processing. It highlights some of the most promising mathematical developments in harmonic analysis in the last decade brought about by the interplay among different areas of abstract and applied mathematics. This intertwining of ideas is considered starting from the theory of unitary group representations and leading to the construction of very efficient schemes for the analysis of multidimensional data. After an introductory chapter surveying the scientific significance of classical and more advanced multiscale methods, chapters cover such topics as An overview of Lie theory focused on common applications in signal analysis, including the wavelet representation of the affine group, the Schrödinger representation of the Heisenberg group, and the metaplectic representation of the symplectic group An introduction ...

  4. Development of a CSP plant energy yield calculation tool applying predictive models to analyze plant performance sensitivities

    Science.gov (United States)

    Haack, Lukas; Peniche, Ricardo; Sommer, Lutz; Kather, Alfons

    2017-06-01

    At early project stages, the main CSP plant design parameters such as turbine capacity, solar field size, and thermal storage capacity are varied during the techno-economic optimization to determine the most suitable plant configurations. In general, a typical meteorological year with at least hourly time resolution is used to analyze each plant configuration. Different software tools are available to simulate the annual energy yield. Software tools offering a thermodynamic modeling approach of the power block and the CSP thermal cycle, such as EBSILONProfessional®, allow a flexible definition of plant topologies. In EBSILON, the thermodynamic equilibrium for each time step is calculated iteratively (quasi steady state), which requires approximately 45 minutes to process one year with hourly time resolution. For better representation of gradients, 10 min time resolution is recommended, which increases processing time by a factor of 5. Therefore, when analyzing a large number of plant sensitivities, as required during the techno-economic optimization procedure, the detailed thermodynamic simulation approach becomes impracticable. Suntrace has developed an in-house CSP-Simulation tool (CSPsim), based on EBSILON and applying predictive models, to approximate the CSP plant performance for central receiver and parabolic trough technology. CSPsim significantly increases the speed of energy yield calculations, by a factor of ≥ 35, and has automated the simulation run of all predefined design configurations in sequential order during the optimization procedure. To develop the predictive models, multiple linear regression techniques and Design of Experiment methods are applied. The annual energy yield and derived LCOE calculated by the predictive model deviate by less than ±1.5 % from the thermodynamic simulation in EBSILON and effectively identify the optimal range of main design parameters for further, more specific analysis.

  5. Sensitivity analyses of spatial population viability analysis models for species at risk and habitat conservation planning.

    Science.gov (United States)

    Naujokaitis-Lewis, Ilona R; Curtis, Janelle M R; Arcese, Peter; Rosenfeld, Jordan

    2009-02-01

    Population viability analysis (PVA) is an effective framework for modeling species- and habitat-recovery efforts, but uncertainty in parameter estimates and model structure can lead to unreliable predictions. Integrating complex and often uncertain information into spatial PVA models requires that comprehensive sensitivity analyses be applied to explore the influence of spatial and nonspatial parameters on model predictions. We reviewed 87 analyses of spatial demographic PVA models of plants and animals to identify common approaches to sensitivity analysis in recent publications. In contrast to best practices recommended in the broader modeling community, sensitivity analyses of spatial PVAs were typically ad hoc, inconsistent, and difficult to compare. Most studies applied local approaches to sensitivity analyses, but few varied multiple parameters simultaneously. A lack of standards for sensitivity analysis and reporting in spatial PVAs has the potential to compromise the ability to learn collectively from PVA results, accurately interpret results in cases where model relationships include nonlinearities and interactions, prioritize monitoring and management actions, and ensure conservation-planning decisions are robust to uncertainties in spatial and nonspatial parameters. Our review underscores the need to develop tools for global sensitivity analysis and apply these to spatial PVA.

  6. Chemistry in Protoplanetary Disks: A Sensitivity Analysis

    CERN Document Server

    Vasyunin, A I; Henning, T; Wakelam, V; Herbst, E; Sobolev, A M

    2007-01-01

    We study how uncertainties in the rate coefficients of chemical reactions in the RATE06 database affect abundances and column densities of key molecules in protoplanetary disks. We randomly varied the gas-phase reaction rates within their uncertainty limits and calculated the time-dependent abundances and column densities using a gas-grain chemical model and a flaring steady-state disk model. We find that key species can be separated into two distinct groups according to the sensitivity of their column densities to the rate uncertainties. The first group includes CO, C$^+$, H$_3^+$, H$_2$O, NH$_3$, N$_2$H$^+$, and HCNH$^+$. For these species, the column densities are not very sensitive to the rate uncertainties but the abundances in specific regions are. The second group includes CS, CO$_2$, HCO$^+$, H$_2$CO, C$_2$H, CN, HCN, HNC and other, more complex species, for which high abundances and abundance uncertainties co-exist in the same disk region, leading to larger scatters in the column densities. However, ...

  7. Differential item functioning analysis by applying multiple comparison procedures.

    Science.gov (United States)

    Eusebi, Paolo; Kreiner, Svend

    2015-01-01

    Analysis within a Rasch measurement framework aims at the development of valid and objective test scores. One requirement of both validity and objectivity is that items do not show evidence of differential item functioning (DIF). A number of procedures exist for the assessment of DIF, including those based on analysis of contingency tables by Mantel-Haenszel tests and partial gamma coefficients. The aim of this paper is to illustrate Multiple Comparison Procedures (MCP) for analysis of DIF relative to a variable defining a very large number of groups, with an unclear ordering with respect to the DIF effect. We propose a single-step procedure controlling the false discovery rate for DIF detection. The procedure applies to both dichotomous and polytomous items. In addition to providing evidence against a hypothesis of no DIF, the procedure also provides information on the subsets of groups that are homogeneous with respect to the DIF effect. A stepwise MCP procedure for this purpose is also introduced.
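The false-discovery-rate control mentioned in this abstract can be illustrated with the classical Benjamini-Hochberg step-up procedure, a standard FDR method (not necessarily the exact procedure of the paper). The p-values below are made up for illustration.

```python
# Benjamini-Hochberg step-up procedure: sort p-values, find the largest
# rank k with p_(k) <= q*k/m, and reject the k hypotheses with the
# smallest p-values. Controls the false discovery rate at level q.

def benjamini_hochberg(pvalues, q=0.05):
    """Return sorted indices of hypotheses rejected at FDR level q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0  # largest rank whose p-value clears its BH threshold
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= q * rank / m:
            k = rank
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.97]
print(benjamini_hochberg(pvals, q=0.05))   # -> [0, 1]
```

In a DIF screen each p-value would come from one item-by-group contingency-table test, so the rejected set flags the item/group combinations showing DIF.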

  8. Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics

    DEFF Research Database (Denmark)

    Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter

    2014-01-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...

  9. Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics

    DEFF Research Database (Denmark)

    Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter

    2013-01-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...

  10. Global and local sensitivity analysis methods for a physical system

    Energy Technology Data Exchange (ETDEWEB)

    Morio, Jerome, E-mail: jerome.morio@onera.fr [Onera-The French Aerospace Lab, F-91761, Palaiseau Cedex (France)

    2011-11-15

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.
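The contrast between local and global sensitivity reviewed in this abstract can be made concrete with a toy black box on which the two approaches disagree. The model f(x1, x2) = x1 + x2^2 and the input ranges are assumptions for illustration.

```python
# Local vs. global sensitivity on f(x1, x2) = x1 + x2**2 with inputs on
# [-1, 1]: the local derivative with respect to x2 at the nominal point
# (0, 0) is zero, yet x2 still contributes output variance, which only a
# global (variance-based) view detects.
import random

def f(x1, x2):
    return x1 + x2 ** 2

def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

# Local: centered finite differences at the nominal point (0, 0).
h = 1e-5
local_x1 = (f(h, 0) - f(-h, 0)) / (2 * h)   # exactly 1
local_x2 = (f(0, h) - f(0, -h)) / (2 * h)   # exactly 0

# Global: output variance induced when a single input sweeps its range.
rng = random.Random(0)
n = 50000
global_x1 = variance([f(rng.uniform(-1, 1), 0) for _ in range(n)])  # ~ 1/3
global_x2 = variance([f(0, rng.uniform(-1, 1)) for _ in range(n)])  # ~ 4/45
print(local_x2, global_x2)   # local says "insensitive", global disagrees
```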

  11. Electromagnetic and seismoelectric sensitivity analysis using resolution functions

    NARCIS (Netherlands)

    Maas, P.J.; Grobbe, N.; Slob, E.C.; Mulder, W.A.

    2015-01-01

    For multi-parameter problems, such as the seismoelectric system, sensitivity analysis through resolution functions is a low-cost, fast method of determining whether measured fields are sensitive to certain subsurface parameters. We define a seismoelectric resolution function for the inversion of a

  12. Global and Local Sensitivity Analysis Methods for a Physical System

    Science.gov (United States)

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.

  13. Adjoint sensitivity analysis of high frequency structures with Matlab

    CERN Document Server

    Bakr, Mohamed; Demir, Veysel

    2017-01-01

    This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.

  14. Applying data mining for the analysis of breast cancer data.

    Science.gov (United States)

    Liou, Der-Ming; Chang, Wei-Pin

    2015-01-01

    Data mining, also known as Knowledge-Discovery in Databases (KDD), is the process of automatically searching large volumes of data for patterns. For instance, a clinical pattern might indicate that a female patient with diabetes or hypertension is at elevated risk of stroke within the next five years; a physician can thus learn valuable knowledge from the data mining process. Here, we present a study focused on the investigation of the application of artificial intelligence and data mining techniques to prediction models of breast cancer. The artificial neural network, decision tree, logistic regression, and genetic algorithm were used for the comparative studies, and the accuracy and positive predictive value of each algorithm were used as the evaluation indicators. 699 records acquired from breast cancer patients at the University of Wisconsin, nine predictor variables, and one outcome variable were incorporated for the data analysis, followed by tenfold cross-validation. The results revealed that the accuracy of the logistic regression model was 0.9434 (sensitivity 0.9716, specificity 0.9482), the decision tree model 0.9434 (sensitivity 0.9615, specificity 0.9105), the neural network model 0.9502 (sensitivity 0.9628, specificity 0.9273), and the genetic algorithm model 0.9878 (sensitivity 1, specificity 0.9802). The accuracy of the genetic algorithm was significantly higher than the average predicted accuracy of 0.9612. The predicted outcome of the logistic regression model was higher than that of the neural network model, but no significant difference was observed. The average predicted accuracy of the decision tree model was 0.9435, which was the lowest of all four predictive models. The standard deviation of the tenfold cross-validation was rather unreliable. This study indicated that the genetic algorithm model yielded better results than other data mining models for the analysis of the data of breast cancer patients in terms of the overall accuracy of
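The evaluation indicators quoted in this abstract (accuracy, sensitivity, specificity, positive predictive value) all derive from a confusion matrix. A minimal sketch, with hypothetical counts rather than the Wisconsin data:

```python
# Classification metrics from confusion-matrix counts:
# TP/FP/TN/FN -> accuracy, sensitivity (recall), specificity, PPV.
# The counts below are made up for illustration.

def metrics(tp, fp, tn, fn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),     # true positive rate
        "specificity": tn / (tn + fp),     # true negative rate
        "ppv":         tp / (tp + fp),     # positive predictive value
    }

m = metrics(tp=230, fp=9, tn=445, fn=15)
print({k: round(v, 4) for k, v in m.items()})
```

Reporting sensitivity and specificity alongside accuracy, as the abstract does, guards against a model that scores well simply by favoring the majority class.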

  15. High sensitivity microwave detection using a magnetic tunnel junction in the absence of an external applied magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Gui, Y. S.; Bai, L. H.; Hu, C.-M. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Xiao, Y.; Guo, H. [Department of Physics, Center for the Physics of Materials, McGill University, Montreal, Quebec H3A 2T8 (Canada); Hemour, S.; Zhao, Y. P.; Wu, K. [Ecole Polytechnique de Montreal, Montreal, Quebec H3T 1J4 (Canada); Houssameddine, D. [Everspin Technologies, 1347 N. Alma School Road, Chandler, Arizona 85224 (United States)

    2015-04-13

    In the absence of any external applied magnetic field, we have found that a magnetic tunnel junction (MTJ) can produce a significant output direct voltage under microwave radiation at frequencies far from the ferromagnetic resonance condition, and this voltage signal can be increased by at least an order of magnitude by applying a direct current bias. The enhancement of the microwave detection can be explained by the nonlinear resistance/conductance of the MTJs. Our estimation suggests that optimized MTJs should achieve sensitivities of about 5000 mV/mW for non-resonant broadband microwave detection.

  16. Postal system strategy selection by applying multicriteria analysis

    Directory of Open Access Journals (Sweden)

    Petrović Vladeta

    2006-01-01

    The paper presents the methods by which postal organizations adapt strategically to significant changes in the external environment in order to survive on the market and gain new possibilities for prosperity. It is shown that only planned changes can accelerate the speed of adjustment to new conditions. By studying environment demands and the ways postal administrations have implemented changes, it can be concluded that there exists a general model of postal system transformation. After reviewing the available transformation scenarios that are applicable to all postal operators, the paper proposes the selection of a postal system development strategy by applying multicriteria analysis.

  17. The Effects of Applied Stress and Sensitization on the Passive Film Stability of Al-Mg Alloys

    Science.gov (United States)

    2013-06-01

    Master's thesis by Jennifer S. Fleming, Naval Postgraduate School, June 2013 (thesis advisor: Luke N...). Only report-documentation fields and citation fragments survive in this record, including the fragments "...selection: Non-ferrous alloys and special purpose metals," ASM Handbook, vol. 2, ASM International, 1990, and B. Scott, "The role of stress in the...".

  18. Sensitivity Analysis of the Gap Heat Transfer Model in BISON.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard (INL); Perez, Danielle (INL)

    2014-10-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.

  19. Classical mechanics approach applied to analysis of genetic oscillators.

    Science.gov (United States)

    Vasylchenkova, Anastasiia; Mraz, Miha; Zimic, Nikolaj; Moskon, Miha

    2016-04-05

    Biological oscillators form a fundamental part of several regulatory mechanisms that control the response of various biological systems. Several analytical approaches for their analysis have been reported recently. They are, however, limited to only specific oscillator topologies and/or to giving only qualitative answers, i.e., whether the dynamics of an oscillator over a given parameter space is oscillatory or not. Here we present a general analytical approach that can be applied to the analysis of biological oscillators. It relies on the projection of biological systems onto classical mechanics systems. The approach is able to provide relatively accurate results regarding the type of behaviour a system exhibits (i.e. oscillatory or not) and the periods of potential oscillations, without the need to conduct expensive numerical simulations. We demonstrate and verify the proposed approach on three different implementations of the amplified negative feedback oscillator.

  20. Sensitivity analysis on fuel scenario associated magnitudes

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Martinez, M.; Alvarez-Velarde, F.

    2014-07-01

    Nuclear fuel cycle scenario analyses are needed as support for policy makers in terms of sustainability, fuel diversity, security of supply, and social and environmental effects. These analyses are usually aimed at studying the impact of certain hypotheses on some fuel cycle indicators, without considering the uncertainties in those hypotheses. The expert group of the NEA/OECD on Advanced Fuel Cycle Scenarios, in which this work is framed, is devoted to filling this gap, laying the foundations for deep analysis of the sensitivities of fuel cycle indicators. (Author)

  1. Dispersion sensitivity analysis & consistency improvement of APFSDS

    Directory of Open Access Journals (Sweden)

    Sangeeta Sharma Panda

    2017-08-01

    In-bore balloting motion simulation shows that a reduction in residual spin of about 5% results in a drastic 56% reduction in first maximum yaw. A correlation between first maximum yaw and residual spin is observed. The results of the data analysis are used in design modification of the existing ammunition. A number of designs were evaluated numerically before five designs were frozen for further assessment. These designs are critically assessed in terms of their comparative performance during the in-bore travel and external ballistics phases. The results are validated by free-flight trials of the finalised design.

  2. Effect of Topically Applied Anaesthetic Formulation on the Sensitivity of Scoop Dehorning Wounds in Calves

    Science.gov (United States)

    McCarthy, Dominique; Windsor, Peter Andrew; Harris, Charissa; Lomax, Sabrina; White, Peter John

    2016-01-01

    The post-operative effects of three formulations of topical anaesthetic and a cornual nerve block on the sensitivity of scoop dehorning wounds in calves were compared in two trials. In Trial 1, 21 female Holstein dairy calves aged 8 to 24 weeks were randomly allocated to two groups: (1) scoop dehorning with a post-operative application of a novel topical anaesthetic powder (DTAP, n = 10); and (2) scoop dehorning with a post-operative application of a novel topical anaesthetic ethanol liquid (DTAE, n = 11). In Trial 2, 18 castrated male and 18 female Hereford beef calves aged 16 to 20 weeks were randomly allocated to four groups: (1) scoop dehorning with a pre-operative cornual nerve block of lignocaine (DCB, n = 9); (2) scoop dehorning with a post-operative application of the novel topical anaesthetic ethanol liquid from Trial 1 (DTAE, n = 9); (3) scoop dehorning with a post-operative application of a topical anaesthetic gel (DTAG, n = 9); and (4) sham dehorning (CON, n = 9). Sensitivity was assessed by scoring the behavioural response of calves to stimulation of the wound or skin at time points before and after treatment. In Trial 1, DTAP calves had a greater probability of displaying more severe responses than DTAE calves at 90 and 180 min (P wound following scoop dehorning in calves and may provide a practical option for pain relief on-farm. PMID:27648948

  3. Sensitivity analysis of large system of chemical kinetic parameters for engine combustion simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, H; Sanz-Argent, J; Petitpas, G; Havstad, M; Flowers, D

    2012-04-19

    In this study, the authors applied state-of-the-art sensitivity methods to downselect system parameters from 4000+ to 8 (23000+ -> 4000+ -> 84 -> 8). This analysis procedure paves the way for future work: (1) calibrate the system response using existing experimental observations, and (2) predict future experimental results using the calibrated system.

  4. Stochastic sensitivity analysis using HDMR and score function

    Indian Academy of Sciences (India)

    Rajib Chowdhury; B N Rao; A Meher Prasad

    2009-12-01

    Probabilistic sensitivities provide important insight in reliability analysis and are often crucial for understanding the physical behaviour underlying failure and for modifying the design to mitigate and manage risk. This article presents a new computational approach for calculating stochastic sensitivities of mechanical systems with respect to distribution parameters of random variables. The method involves high dimensional model representation and score functions associated with the probability distribution of a random input. The proposed approach facilitates first- and second-order approximation of stochastic sensitivity measures and statistical simulation. The formulation is general such that any simulation method can be used for the computation, such as Monte Carlo, importance sampling, Latin hypercube, etc. Both the probabilistic response and its sensitivities can be estimated from a single probabilistic analysis, without requiring gradients of the performance function. Numerical results indicate that the proposed method provides accurate and computationally efficient estimates of sensitivities of statistical moments or reliability of structural systems.
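The score-function idea described in this abstract (sensitivities with respect to distribution parameters estimated from a single probabilistic analysis, without gradients of the performance function) can be sketched for a normal input. The response function and parameter values are illustrative assumptions.

```python
# Score-function sensitivity sketch: for X ~ N(mu, sigma^2),
#   d/dmu E[f(X)] = E[ f(X) * (X - mu) / sigma^2 ],
# so the parameter sensitivity comes from the same samples used for the
# response itself; no derivative of f is ever taken.
import random

def score_sensitivity(f, mu, sigma, n=200000, seed=42):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        acc += f(x) * (x - mu) / sigma ** 2   # score of the normal density
    return acc / n

# Check case: f(x) = x^2, so E[f(X)] = mu^2 + sigma^2 and d/dmu = 2*mu.
est = score_sensitivity(lambda x: x * x, mu=1.5, sigma=0.8)
print(est)   # close to 3.0
```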

  5. Shape analysis applied in heavy ion reactions near Fermi energy

    Science.gov (United States)

    Zhang, S.; Huang, M.; Wada, R.; Liu, X.; Lin, W.; Wang, J.

    2017-03-01

    A new method is proposed to perform shape analyses and to evaluate their validity in heavy ion collisions near the Fermi energy. In order to avoid erroneous values of shape parameters in the calculation, a test particle method is utilized in which each nucleon is represented by n test particles, similar to that used in the Boltzmann–Uehling–Uhlenbeck (BUU) calculations. The method is applied to the events simulated by an antisymmetrized molecular dynamics model. The geometrical shape of fragments is reasonably extracted when n = 100 is used. A significant deformation is observed for all fragments created in the multifragmentation process. The method is also applied to the shape of the momentum distribution for event classification. In the momentum case, the errors in the eigenvalue calculation become much smaller than those of the geometrical shape analysis and the results become similar between those with and without the test particle method, indicating that in intermediate heavy ion collisions the shape analysis of momentum distribution can be used for the event classification without the test particle method.
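
    One common way to carry out such a shape analysis is to diagonalise the gyration tensor of the (test-)particle positions; the spread of its eigenvalues quantifies the deformation. A minimal sketch, assuming a made-up prolate fragment rather than AMD output, with each nucleon smeared into n = 100 test particles:

    ```python
    import numpy as np

    def shape_eigenvalues(points):
        """Eigenvalues of the gyration tensor of a point cloud,
        sorted ascending; their ratios characterise the deformation."""
        r = points - points.mean(axis=0)
        tensor = r.T @ r / len(r)
        return np.sort(np.linalg.eigvalsh(tensor))

    rng = np.random.default_rng(0)
    # Hypothetical prolate "fragment": 50 nucleons elongated along z,
    # each represented by 100 smeared test particles.
    nucleons = rng.normal(0.0, [1.0, 1.0, 3.0], size=(50, 3))
    test_particles = (np.repeat(nucleons, 100, axis=0)
                      + rng.normal(0.0, 0.3, size=(5000, 3)))
    ev = shape_eigenvalues(test_particles)
    ```

    A large ratio between the largest and smallest eigenvalue signals a deformed (here prolate) shape; for a spherical fragment all three eigenvalues would be comparable.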

  6. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    Science.gov (United States)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  7. Automated SEM Modal Analysis Applied to the Diogenites

    Science.gov (United States)

    Bowman, L. E.; Spilde, M. N.; Papike, James J.

    1996-01-01

    Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.

  8. Sensitivity Analysis and Insights into Hydrological Processes and Uncertainty at Different Scales

    Science.gov (United States)

    Haghnegahdar, A.; Razavi, S.; Wheater, H. S.; Gupta, H. V.

    2015-12-01

    Sensitivity analysis (SA) is an essential tool for providing insight into model behavior, and conducting model calibration and uncertainty assessment. Numerous techniques have been used in environmental modelling studies for sensitivity analysis. However, it is often overlooked that the scale of a modelling study and the choice of metric can significantly change the assessment of model sensitivity and uncertainty. In order to identify important hydrological processes across various scales, we conducted a multi-criteria sensitivity analysis using a novel and efficient technique, Variogram Analysis of Response Surfaces (VARS). The analysis was conducted using three different hydrological models, HydroGeoSphere (HGS), Soil and Water Assessment Tool (SWAT), and Modélisation Environmentale-Surface et Hydrologie (MESH). Models were applied at various scales ranging from small (hillslope) to large (watershed) scales. In each case, the sensitivity of simulated streamflow to model processes (represented through parameters) was measured using different metrics selected on the basis of various hydrograph characteristics such as high flows, low flows, and volume. We demonstrate how the scale of the case study and the choice of sensitivity metric(s) can change our assessment of sensitivity and uncertainty. We present some guidelines to better align the metric choice with the objective and scale of a modelling study.

  9. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    Science.gov (United States)

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-01

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  10. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    KAUST Repository

    Navarro Jimenez, M.

    2016-12-26

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  11. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    Science.gov (United States)

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
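
    The Sobol' variance decomposition underlying the three records above can be illustrated with a standard pick-freeze estimator of a first-order index (a generic sketch on a toy additive model, not the authors' stochastic-simulator algorithm):

    ```python
    import random

    def sobol_first_order(f, dim, i, n=100000, seed=7):
        """Pick-freeze estimate of the first-order Sobol' index S_i:
        S_i = Cov(f(X), f(X')) / Var(f(X)), where X' redraws every
        coordinate except X_i. Inputs are i.i.d. uniform on [0, 1]."""
        rng = random.Random(seed)
        ya, yb = [], []
        for _ in range(n):
            xa = [rng.random() for _ in range(dim)]
            xb = [rng.random() for _ in range(dim)]
            xb[i] = xa[i]            # freeze coordinate i
            ya.append(f(xa))
            yb.append(f(xb))
        ma = sum(ya) / n
        mb = sum(yb) / n
        cov = sum(a * b for a, b in zip(ya, yb)) / n - ma * mb
        var = sum(a * a for a in ya) / n - ma * ma
        return cov / var

    # For f = 3*x0 + x1, the analytic index is S_0 = 9 / (9 + 1) = 0.9.
    s0 = sobol_first_order(lambda x: 3 * x[0] + x[1], dim=2, i=0)
    ```

    The papers extend this idea so that "stochastic reaction channels" (Poisson processes) are treated as additional inputs alongside the kinetic parameters.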

  12. Evaluation of bitterness in white wine applying descriptive analysis, time-intensity analysis, and temporal dominance of sensations analysis.

    Science.gov (United States)

    Sokolowsky, Martina; Fischer, Ulrich

    2012-06-30

    Bitterness in wine, especially in white wine, is a complex and sensitive topic, as it is a persistent sensation with negative connotations for consumers. However, the molecular basis for bitter taste in white wines is still largely unknown. At the same time, studies dealing with bitterness have to cope with the temporal dynamics of bitter perception. The most common method to describe bitter taste is static measurement amongst other attributes during a descriptive analysis. A less frequently applied method, time-intensity analysis, evaluates the temporal gustatory changes focusing on bitterness alone. The most recently developed multidimensional approach, the temporal dominance of sensations method, reveals the temporal dominance of bitter taste in relation to other attributes. In order to compare the results obtained with these different sensory methodologies, 13 commercial white wines were evaluated by the same panel. To facilitate a statistical comparison, parameters were extracted from the bitterness curves obtained from time-intensity and temporal dominance of sensations analysis and were compared to bitter intensity as well as bitter persistency based on descriptive analysis. Analysis of variance significantly differentiated the wines with regard to all measured bitterness parameters obtained from the three sensory techniques. Comparing the information from all sensory parameters by multiple factor analysis and correlation showed that each technique provided additional valuable information regarding the complex bitter perception in white wine.
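
    Extracting scalar parameters from a time-intensity curve, as done here to make the methods statistically comparable, typically means computing values such as maximum intensity, time to maximum, and area under the curve. A minimal sketch with invented bitterness ratings:

    ```python
    def ti_parameters(times, intensities):
        """Extract standard parameters from a time-intensity curve:
        maximum intensity, time to maximum, and area under the curve
        (trapezoidal rule)."""
        imax = max(intensities)
        tmax = times[intensities.index(imax)]
        auc = sum((t1 - t0) * (i0 + i1) / 2.0
                  for t0, t1, i0, i1 in zip(times, times[1:],
                                            intensities, intensities[1:]))
        return {"Imax": imax, "Tmax": tmax, "AUC": auc}

    # Hypothetical bitterness ratings sampled every 10 s after sip-in.
    params = ti_parameters([0, 10, 20, 30, 40], [0.0, 4.0, 6.0, 3.0, 1.0])
    ```

    Curve parameters like these can then be fed into analysis of variance alongside descriptive-analysis scores.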

  13. Application of simplified model to sensitivity analysis of solidification process

    Directory of Open Access Journals (Sweden)

    R. Szopa

    2007-12-01

    The sensitivity models of thermal processes proceeding in the casting-mould-environment system give essential information concerning the influence of physical and technological parameters on the course of solidification. Knowledge of the time-dependent sensitivity field is also very useful in the case of the numerical solution of inverse problems. The sensitivity models can be constructed using the direct approach, that is, by differentiating the basic energy equations and boundary-initial conditions with respect to the parameter considered. Unfortunately, the analytical form of the equations and conditions obtained can be very complex from both the mathematical and numerical points of view. Then another approach, consisting in the application of a differential quotient, can be applied. In the paper the exact and approximate approaches to the modelling of sensitivity fields are discussed, and examples of computations are also shown.
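
    The two approaches contrasted in the abstract, exact differentiation versus a differential quotient, can be compared on a toy cooling curve (a simple stand-in model, not the paper's casting-mould equations; all parameter values are invented):

    ```python
    import math

    def temperature(t, k, t0=1500.0, t_env=20.0):
        """Exponential cooling curve standing in for a solidification
        model: T(t) = T_env + (T0 - T_env) * exp(-k*t)."""
        return t_env + (t0 - t_env) * math.exp(-k * t)

    def sensitivity_exact(t, k, t0=1500.0, t_env=20.0):
        """Direct approach: differentiate the model w.r.t. k."""
        return -(t0 - t_env) * t * math.exp(-k * t)

    def sensitivity_quotient(t, k, dk=1e-6):
        """Approximate approach: central differential quotient."""
        return (temperature(t, k + dk) - temperature(t, k - dk)) / (2 * dk)

    exact = sensitivity_exact(10.0, 0.05)
    approx = sensitivity_quotient(10.0, 0.05)
    ```

    For well-behaved models the quotient tracks the exact sensitivity closely, which is why it is attractive when analytic differentiation of the governing equations becomes unwieldy.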

  14. A Novel Multiobjective Optimization Method Based on Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Tiane Li

    2016-01-01

    For multiobjective optimization problems, different optimization variables have different influences on the objectives, which implies that attention should be paid to the variables according to their sensitivity. However, previous optimization studies have either not considered variable sensitivity or have conducted sensitivity analysis independently of optimization. In this paper, an integrated algorithm is proposed, which combines the optimization method SPEA (Strength Pareto Evolutionary Algorithm) with the sensitivity analysis method SRCC (Spearman Rank Correlation Coefficient). In the proposed algorithm, the optimization variables serve as samples for sensitivity analysis, and the resulting sensitivity is used to guide the optimization process by changing the evolutionary parameters. Three cases, a mathematical problem, an airship envelope optimization, and a truss topology optimization, are used to demonstrate the computational efficiency of the integrated algorithm. The results showed that this algorithm is able to simultaneously achieve parameter sensitivity and a well-distributed Pareto optimal set without greatly increasing the computational time in comparison with the SPEA method.
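
    The SRCC used to guide the evolutionary search is the classic Spearman coefficient; for tie-free data it reduces to the rank-difference formula rho = 1 - 6*sum(d_i^2)/(n*(n^2 - 1)). A minimal sketch:

    ```python
    def spearman_rcc(x, y):
        """Spearman rank correlation coefficient, assuming no ties."""
        def ranks(v):
            order = sorted(range(len(v)), key=v.__getitem__)
            r = [0] * len(v)
            for rank, idx in enumerate(order, start=1):
                r[idx] = rank
            return r

        rx, ry = ranks(x), ranks(y)
        n = len(x)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1.0 - 6.0 * d2 / (n * (n * n - 1))

    # A monotone relation gives rho = 1; reversing one variable gives -1.
    rho = spearman_rcc([1.0, 2.5, 4.0, 8.0], [2.0, 3.0, 5.0, 9.0])
    ```

    In an integrated scheme like the one described, each variable's |rho| against an objective could steer how aggressively that variable is mutated.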

  15. Context Sensitive Article Ranking with Citation Context Analysis

    CERN Document Server

    Doslu, Metin

    2015-01-01

    It is hard to detect important articles in a specific context. Information retrieval techniques based on full text search can be inaccurate in identifying main topics, and they are not able to provide an indication of the importance of an article. Generating a citation network is a good way to find the most popular articles, but this approach is not context aware. The text around a citation mark is generally a good summary of the referred article. So citation context analysis presents an opportunity to use the wisdom of the crowd for detecting important articles in a context-sensitive way. In this work, we analyze citation contexts to rank articles properly for a given topic. The proposed model uses citation contexts in order to create a directed and weighted citation network based on the target topic. We create a directed and weighted edge between two articles if the citation context contains terms related to the target topic. Then we apply common ranking algorithms in order to find important articles in this newly cre...
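
    Ranking on a weighted citation network built from citation contexts can be sketched with a standard PageRank-style power iteration (the graph and weights below are invented; the article's exact ranking algorithms may differ):

    ```python
    def pagerank(edges, damping=0.85, iters=100):
        """Power-iteration PageRank on a weighted directed graph.
        `edges` maps (source, target) -> weight, e.g. the number of
        topic-related terms found in the citation context."""
        nodes = sorted({n for e in edges for n in e})
        out_w = {n: sum(w for (s, t), w in edges.items() if s == n)
                 for n in nodes}
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iters):
            new = {n: (1.0 - damping) / len(nodes) for n in nodes}
            for (s, t), w in edges.items():
                new[t] += damping * rank[s] * w / out_w[s]
            # Dangling nodes redistribute their rank uniformly.
            dangling = sum(rank[n] for n in nodes if out_w[n] == 0)
            for n in nodes:
                new[n] += damping * dangling / len(nodes)
            rank = new
        return rank

    # Hypothetical articles: A and B both cite C in on-topic contexts.
    r = pagerank({("A", "C"): 2.0, ("B", "C"): 1.0, ("C", "A"): 1.0})
    ```

    Because edge weights come from topic terms in the citation context, the same corpus yields a different ranking for each target topic.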

  16. Comparative study of TiO2 nanoparticles applied to dye-sensitized solar cells

    Science.gov (United States)

    Yacoubi, Besma; Bennaceur, Jamila; Ben Taieb, S.; Chtourou, Rathowan

    2014-02-01

    Microcrystalline titanium oxide (TiO2) particles of the anatase crystal phase were prepared by the sol-gel route under varying thermal treatment conditions (400 °C and 600 °C), for comparison with commercial TiO2 (P25). Structural, optical and electrical properties were investigated for dye-sensitized solar cell (DSSC) applications. Both the microcrystalline TiO2 particles synthesized by the sol-gel method and those obtained from the P25 powder were used to prepare a light-scattering layer of the working electrode. The obtained electrodes were then immersed in a solution of N-719 (ruthenium) dye at ambient temperature for 24 h. Finally, the DSSCs were assembled, and the short-circuit photocurrent, the open-circuit photovoltage, and the power conversion efficiency were measured using an I-V measurement system. The overall conversion efficiencies of all the elaborated DSSCs were similar. A maximum efficiency of 2.3% was achieved for the sol-gel TiO2 thin film annealed at 400 °C, under one-sun irradiation, with an open-circuit voltage of 0.61 V and a current density of 6.54 mA/cm2. The higher efficiency of the sol-gel TiO2 sample annealed at 400 °C was attributed to the uniformity of the prepared titanium oxide substrate, which provides a better surface for dye absorption.

  17. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
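
    The generalised indices proposed there, per-component sensitivity indices averaged with weights given by each principal component's variance share, can be sketched on a toy dynamic model with a full factorial design (the model, parameter names, and levels below are hypothetical):

    ```python
    import numpy as np

    def generalised_sensitivity(model, levels_a, levels_b, t):
        """Sketch of a multivariate SA: run the dynamic model over a full
        factorial design, expand the outputs on principal components,
        compute per-component main-effect (ANOVA) indices, and average
        them weighted by each component's share of the total variance."""
        runs = np.array([model(a, b, t) for a in levels_a for b in levels_b])
        centred = runs - runs.mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)  # PCA
        scores = centred @ vt.T
        var_pc = scores.var(axis=0)
        weights = var_pc / var_pc.sum()
        a_idx = np.repeat(np.arange(len(levels_a)), len(levels_b))
        b_idx = np.tile(np.arange(len(levels_b)), len(levels_a))
        gsi = {}
        for name, idx, k in (("a", a_idx, len(levels_a)),
                             ("b", b_idx, len(levels_b))):
            si = []
            for j in range(scores.shape[1]):
                col_var = scores[:, j].var()
                means = np.array([scores[idx == g, j].mean()
                                  for g in range(k)])
                si.append(means.var() / col_var if col_var > 0 else 0.0)
            gsi[name] = float(np.dot(weights, si))
        return gsi

    # Toy "crop" dynamics: a scales the whole curve, b only late growth.
    t = np.linspace(0.0, 1.0, 50)
    gsi = generalised_sensitivity(lambda a, b, t: a * t + b * t ** 4,
                                  levels_a=[0.5, 1.0, 1.5, 2.0],
                                  levels_b=[0.0, 0.1, 0.2, 0.3], t=t)
    ```

    For an additive model on a balanced design, the first-order generalised indices sum to one, so each parameter's value summarises its influence on the entire time series rather than on a single time point.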

  18. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae [NESS, Daejeon (Korea, Republic of)

    2016-10-15

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different shapes of nuclear fuel. The criticality analysis was performed using MCNP5, a well-known Monte Carlo code for criticality analysis and a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for the interval between plates, the criticality increases with the interval, but for intervals greater than 8 mm the trend reverses and the criticality decreases as the interval grows. As a result, no behaviour common to all cases could be established, so a sensitivity analysis of criticality is required whenever the subject to be analyzed changes.

  19. Receiver function analysis applied to refraction survey data

    Science.gov (United States)

    Subaru, T.; Kyosuke, O.; Hitoshi, M.

    2008-12-01

    For the estimation of the thickness of the oceanic crust or petrophysical investigation of subsurface material, refraction or reflection seismic exploration is one of the methods frequently practiced. These surveys use four-component (x, y, z components of acceleration, plus pressure) seismometers, but only the compressional wave or the vertical component of the seismometers tends to be used in the analyses. Hence, the shear wave or lateral components of the seismograms need to be used for a more precise estimate of the thickness of the oceanic crust. The receiver function is a function at a site that can be used to estimate the depth of velocity interfaces from received teleseismic signals, including shear waves. Receiver function analysis uses both the vertical and horizontal components of seismograms and deconvolves the horizontal with the vertical to estimate the spectral difference of P-S converted waves arriving after the direct P wave. Once the phase information of the receiver function is obtained, one can estimate the depth of the velocity interface. This analysis has the advantage of estimating the depth of velocity interfaces, including the Mohorovicic discontinuity, from two components of seismograms when P-to-S converted waves are generated at the interface. Our study presents results of a preliminary study using synthetic seismograms. First, we use three types of geological models, composed of a single sediment layer, a crust layer, and a sloped Moho, respectively, for underground sources. The receiver function can estimate the depth and shape of the Moho interface precisely for the three models. Second, we applied this method to synthetic refraction survey data generated not by earthquakes but by artificial sources on the ground or sea surface. Compressional seismic waves propagate under the velocity interface and radiate converted shear waves at the other deep underground layer interfaces as well.
However, the receiver function analysis applied to the

  20. Applied research and development of neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Yong Sam; Moon, Jong Hwa; Kim, Sun Ha; Bak, Sung Ryel; Park, Yong Chul; Kim, Young Ki; Chung, Hwan Sung; Park, Kwang Won; Kang, Sang Hun

    2000-05-01

    This report presents the results of research and development as follows: improvement of neutron irradiation facilities and counting systems, and development of an automation system and capsules for NAA in HANARO; improvement of analytical procedures and establishment of an analytical quality control and assurance system; and applied research and development in environment, industry and human health, and its standardization. For identification and standardization of the analytical method, environmental and biological samples and polymers were analyzed and the uncertainty of measurement was estimated. Data intercomparison and proficiency tests were also performed. Using airborne particulate matter as an environmental indicator, trace elemental concentrations of samples collected at urban and rural sites were determined, and statistical calculations and factor analysis were then carried out to investigate emission sources. An international cooperative research project was carried out on the utilization of nuclear techniques.

  1. Downside Risk analysis applied to Hedge Funds universe

    CERN Document Server

    Perello, J

    2006-01-01

    Hedge Funds are considered one of the fastest-growing portfolio management sectors of the past decade. Optimal Hedge Fund management requires high-precision risk evaluation and appropriate risk metrics. The classic CAPM theory and its Sharpe ratio fail to capture some crucial aspects due to the strongly non-Gaussian character of Hedge Fund statistics. A possible way out of this problem, while keeping CAPM's simplicity, is the so-called Downside Risk analysis. One important benefit lies in distinguishing between good and bad returns, that is: returns greater (or lower) than the investor's goal. We study several risk indicators using the Gaussian case as a benchmark and apply them to the Credit Suisse/Tremont Investable Hedge Fund Index data.
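
    Downside Risk analysis replaces the symmetric standard deviation behind the Sharpe ratio with a deviation computed only from returns below the investor's goal. A minimal sketch with invented monthly returns:

    ```python
    import math

    def downside_risk(returns, goal=0.0):
        """Downside deviation: root mean square of shortfalls below the
        investor's goal; returns at or above the goal contribute zero."""
        shortfalls = [min(r - goal, 0.0) ** 2 for r in returns]
        return math.sqrt(sum(shortfalls) / len(returns))

    def sortino_ratio(returns, goal=0.0):
        """Sharpe-like ratio that only penalises 'bad' volatility."""
        mean_excess = sum(r - goal for r in returns) / len(returns)
        return mean_excess / downside_risk(returns, goal)

    monthly = [0.03, -0.02, 0.01, 0.04, -0.01, 0.02]
    dr = downside_risk(monthly)
    ```

    With a fat left tail, two funds with identical Sharpe ratios can have very different downside deviations, which is the distinction the abstract exploits.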

  2. Comparison Between two FMEA Analysis Applied to Dairy

    Directory of Open Access Journals (Sweden)

    Alexandre de Paula Peres

    2010-06-01

    The FMEA (Failure Mode and Effect Analysis) is a methodology that has been used for environmental risk assessment during the production process. Although environmental certification strengthens the corporate image and helps ensure a company's place in the market, it is still very costly, particularly for small and medium businesses. Given this, the FMEA can be a starting point for companies to diagnose the environmental risk they cause. This methodology was used to diagnose differences in environmental concern and in the environmental controls exercised in two dairy plants in Lavras. By applying this method, one can observe different uses of the FMEA tables in the two businesses: diagnosing the risks and confirming the controls in place.

  3. Finite element analysis applied to dentoalveolar trauma: methodology description.

    Science.gov (United States)

    da Silva, B R; Moreira Neto, J J S; da Silva, F I; de Aguiar, A S W

    2011-01-01

    Dentoalveolar traumatic injuries are among the clinical conditions most frequently treated in dental practice. However, few studies so far have addressed the biomechanical aspects of these events, probably as a result of difficulties in carrying out satisfactory experimental and clinical studies as well as the unavailability of truly scientific methodologies. The aim of this paper was to describe the use of finite element analysis applied to the biomechanical evaluation of dentoalveolar trauma. For didactic purposes, the methodological process was divided into steps that go from the creation of a geometric model to the evaluation of final results, always with a focus on methodological characteristics, advantages, and disadvantages, so as to allow the reader to customize the methodology according to specific needs. Our description shows that the finite element method can faithfully reproduce dentoalveolar trauma, provided the methodology is closely followed and thoroughly evaluated.

  4. Seismic analysis applied to the delimiting of a gas reservoir

    Energy Technology Data Exchange (ETDEWEB)

    Ronquillo, G.; Navarro, M.; Lozada, M.; Tafolla, C. [Instituto Mexicano del Petroleo, Eje Lazaro Cardenas (Mexico)

    1996-08-01

    We present the results of correlating seismic models with petrophysical parameters and well logs to mark the limits of a gas reservoir in sand lenses. To fulfill the objectives of the study, we used a data processing sequence that included wavelet manipulation, complex trace attributes and pseudovelocities inversion, along with several quality control schemes to insure proper amplitude preservation. Based on the analysis and interpretation of the seismic sections, several areas of interest were selected to apply additional signal treatment as preconditioning for petrophysical inversion. Signal classification was performed to control the amplitudes along the horizons of interest, and to be able to find an indirect interpretation of lithologies. Additionally, seismic modeling was done to support the results obtained and to help integrate the interpretation. The study proved to be a good auxiliary tool in the location of the probable extension of the gas reservoir in sand lenses.

  5. Acoustic design sensitivity analysis of structural sound radiation

    Institute of Scientific and Technical Information of China (English)

    许智生

    2009-01-01

    This paper presents an acoustic design sensitivity (ADS) analysis of the sound radiation of structures using the boundary element method (BEM). We calculated the velocity distribution of the thin plate by an analytical method and the surface sound pressure by the Rayleigh integral, and expressed the sound radiation power of the structure as a positive definite Hermitian quadratic form with an impedance matrix. The ADS analysis of the plate was thus translated into the analysis of structural dynamic sensitivity and ...
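
    The radiated power expressed as a positive definite Hermitian quadratic form, W = v^H Z v, can be evaluated directly once the surface velocity vector and impedance matrix are known. A toy two-element sketch (all numbers invented):

    ```python
    import numpy as np

    def radiated_power(velocities, impedance):
        """Sound power as a positive definite Hermitian quadratic form
        W = v^H Z v, with v the complex surface velocity vector and Z
        the (Hermitian part of the) radiation impedance matrix."""
        z_h = 0.5 * (impedance + impedance.conj().T)  # Hermitian part
        return float(np.real(velocities.conj() @ z_h @ velocities))

    # Two-element toy example with a complex coupling term.
    Z = np.array([[2.0, 0.5 - 0.1j],
                  [0.5 + 0.1j, 1.0]])
    v = np.array([1.0 + 0.0j, 0.0 + 1.0j])
    W = radiated_power(v, Z)
    ```

    Because the form is positive definite, W stays strictly positive for any nonzero velocity pattern, as physically required of radiated power.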

  6. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. The new direct method of calculating first-order sensitivity coefficients using sparse matrix technology for chemical kinetics is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines for the model equation, the sensitivity coefficient equations, and their analytical Jacobian expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28, and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
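
    The direct method described here, integrating the model equation together with its auxiliary sensitivity equations, can be sketched on a one-species toy mechanism dy/dt = -k*y, whose sensitivity s = dy/dk obeys ds/dt = -y - k*s (a generic illustration with a simple Euler integrator, not the package's FORTRAN machinery):

    ```python
    def sensitivity_ode(k=2.0, y0=1.0, t_end=1.0, dt=1e-4):
        """Direct method: integrate the model equation dy/dt = -k*y
        together with its sensitivity equation ds/dt = -y - k*s,
        obtained by differentiating the model RHS w.r.t. k."""
        y, s = y0, 0.0
        for _ in range(int(round(t_end / dt))):
            dy = -k * y
            ds = -y - k * s
            y += dt * dy
            s += dt * ds
        return y, s

    y, s = sensitivity_ode()
    # Analytic solution: y = exp(-k*t), s = -t * exp(-k*t).
    ```

    In the real package the coupled system is stiff, hence the Gear-type integrator and sparse triangularization of the Jacobian; the explicit Euler step here is only for readability.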

  7. Site-specific estimates of water yield applied in regional acid sensitivity surveys across western Canada

    Directory of Open Access Journals (Sweden)

    Patrick D. SHAW

    2010-08-01

    Runoff or water yield is an important input to the Steady-State Water Chemistry (SSWC) model for estimating critical loads of acidity. Herein, we present site-specific water yield estimates for a large number of lakes (779) across three provinces of western Canada (Manitoba, Saskatchewan, and British Columbia) using an isotope mass balance (IMB) approach. We explore the impact of applying site-specific hydrology, as compared to regional runoff estimates derived from gridded datasets, in assessing critical loads of acidity to these lakes. In general, the average water yield derived from IMB is similar to the long-term average runoff; however, the IMB results suggest a much larger range in the hydrological settings of the lakes, attributed to spatial heterogeneity in watershed characteristics and land cover. The comparison of critical loads estimates from the two methods suggests that use of average regional runoff data in the SSWC model may overestimate critical loads for the majority of lakes due to systematic skewness in the actual runoff distributions. Implications for the use of site-specific hydrology in regional critical loads assessments across western Canada are discussed.

  8. A strategy to apply quantitative epistasis analysis on developmental traits.

    Science.gov (United States)

    Labocha, Marta K; Yuan, Wang; Aleman-Meza, Boanerges; Zhong, Weiwei

    2017-05-15

    Genetic interactions are key to understanding complex traits and evolution. Epistasis analysis is an effective method to map genetic interactions. Large-scale quantitative epistasis analysis has been well established for single cells. However, there is a substantial lack of such studies in multicellular organisms and their complex phenotypes such as development. Here we present a method to extend quantitative epistasis analysis to developmental traits. In the nematode Caenorhabditis elegans, we applied RNA interference on mutants to inactivate two genes, used an imaging system to quantitatively measure phenotypes, and developed a set of statistical methods to extract genetic interactions from phenotypic measurement. Using two different C. elegans developmental phenotypes, body length and sex ratio, as examples, we showed that this method could accommodate various metazoan phenotypes with performances comparable to those methods in single cell growth studies. Compared with qualitative observations, this method of quantitative epistasis enabled detection of new interactions involving subtle phenotypes. For example, several sex-ratio genes were found to interact with brc-1 and brd-1, the orthologs of the human breast cancer genes BRCA1 and BARD1, respectively. We confirmed the brc-1 interactions with the following genes in DNA damage response: C34F6.1, him-3 (ortholog of HORMAD1, HORMAD2), sdc-1, and set-2 (ortholog of SETD1A, SETD1B, KMT2C, KMT2D), validating the effectiveness of our method in detecting genetic interactions. We developed a reliable, high-throughput method for quantitative epistasis analysis of developmental phenotypes.
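
    The core quantity in quantitative epistasis analysis is the deviation of the double-perturbation phenotype from the multiplicative expectation of the two single perturbations. A minimal sketch with invented normalised measurements (the article's statistical pipeline is more involved):

    ```python
    def epistasis(w_ab, w_a, w_b, w_wt=1.0):
        """Multiplicative epistasis score: deviation of the observed
        double-perturbation phenotype from the product of the single
        perturbation phenotypes (all normalised to wild type)."""
        return w_ab / w_wt - (w_a / w_wt) * (w_b / w_wt)

    # Hypothetical normalised body-length measurements.
    eps = epistasis(w_ab=0.5, w_a=0.9, w_b=0.8)
    ```

    A clearly negative score (here 0.5 - 0.72) flags an aggravating interaction; scores near zero indicate the genes act independently under the multiplicative model.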

  9. Sensitivity Analysis of a Dynamical System Using C++

    Directory of Open Access Journals (Sweden)

    Donna Calhoun

    1993-01-01

    Full Text Available This article introduces basic principles of first-order sensitivity analysis and presents an algorithm that can be used to compute the sensitivity of a dynamical system to a selected parameter. The analysis is performed by extending the set of differential equations describing the dynamical system with sensitivity equations. These additional equations require the evaluation of partial derivatives, and so a technique known as the table algorithm, which can be used to compute these derivatives exactly and automatically, is described. A C++ class that can be used to implement the table algorithm is presented, along with a driver routine for evaluating the output of a model and its sensitivity to a single parameter. The use of this driver routine is illustrated with a specific application from environmental hazards modeling.
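
The augmented-ODE idea in this record can be sketched in a few lines. The following is a minimal Python illustration (not the article's C++ class or its table algorithm), with the partial derivatives hand-coded for a toy decay model dy/dt = -p*y:

```python
# First-order sensitivity of dy/dt = f(y, p) = -p * y to the parameter p.
# The sensitivity s = dy/dp satisfies ds/dt = (df/dy) * s + df/dp = -p*s - y,
# integrated here alongside the state with a simple forward-Euler scheme.

def integrate_with_sensitivity(y0, p, t_end, steps):
    dt = t_end / steps
    y, s = y0, 0.0          # s(0) = 0 since y0 does not depend on p
    for _ in range(steps):
        dy = -p * y
        ds = -p * s - y     # sensitivity equation (hand-coded partials)
        y += dt * dy
        s += dt * ds
    return y, s

y, s = integrate_with_sensitivity(y0=1.0, p=0.5, t_end=1.0, steps=10000)
# Analytic check: y = exp(-p*t) and dy/dp = -t*exp(-p*t)
```

The table algorithm described in the article would generate the partial derivatives automatically; here they are written out by hand for clarity.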

  10. Applying under-sampling techniques and cost-sensitive learning methods on risk assessment of breast cancer.

    Science.gov (United States)

    Hsu, Jia-Lien; Hung, Ping-Cheng; Lin, Hung-Yen; Hsieh, Chung-Ho

    2015-04-01

    Breast cancer is one of the most common causes of cancer mortality. Early detection through mammography screening could significantly reduce mortality from breast cancer. However, most screening methods consume a large amount of resources. We propose a computational model, based solely on personal health information, for breast cancer risk assessment. Our model can serve as a pre-screening program in a low-cost setting. In our study, the data set, consisting of 3976 records, was collected from Taipei City Hospital between 2008.1.1 and 2008.12.31. Based on the dataset, we first apply sampling techniques and a dimension-reduction method to preprocess the testing data. Then, we construct various kinds of classifiers (including basic classifiers, ensemble methods, and cost-sensitive methods) to predict the risk. The cost-sensitive method with a random forest classifier is able to achieve a recall (or sensitivity) of 100%. At a recall of 100%, the precision (positive predictive value, PPV) and specificity of the cost-sensitive method with the random forest classifier were 2.9% and 14.87%, respectively. In our study, we build a breast cancer risk assessment model using data mining techniques. Our model has the potential to serve as an assisting tool in breast cancer screening.
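
The under-sampling step mentioned in this record can be illustrated with a minimal random under-sampler; the records and label key below are made up for the sketch and the paper's actual preprocessing pipeline is not reproduced:

```python
import random

def undersample(records, label_key="label", seed=0):
    """Randomly down-sample each class to the minority class size."""
    by_class = {}
    for r in records:
        by_class.setdefault(r[label_key], []).append(r)
    n_min = min(len(v) for v in by_class.values())
    rng = random.Random(seed)
    balanced = []
    for items in by_class.values():
        balanced.extend(rng.sample(items, n_min))
    return balanced

# Hypothetical imbalanced data set: 90 negatives, 10 positives.
data = [{"label": 0}] * 90 + [{"label": 1}] * 10
balanced = undersample(data)   # 10 of each class
```

Balancing the classes this way is what lets a downstream classifier trade precision for the very high recall reported in the abstract.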

  11. To Apply or Not to Apply: A Survey Analysis of Grant Writing Costs and Benefits

    CERN Document Server

    von Hippel, Ted

    2015-01-01

    We surveyed 113 astronomers and 82 psychologists active in applying for federally funded research on their grant-writing history between January, 2009 and November, 2012. We collected demographic data, effort levels, success rates, and perceived non-financial benefits from writing grant proposals. We find that the average proposal takes 116 PI hours and 55 CI hours to write; although time spent writing was not related to whether the grant was funded. Effort did translate into success, however, as academics who wrote more grants received more funding. Participants indicated modest non-monetary benefits from grant writing, with psychologists reporting a somewhat greater benefit overall than astronomers. These perceptions of non-financial benefits were unrelated to how many grants investigators applied for, the number of grants they received, or the amount of time they devoted to writing their proposals. We also explored the number of years an investigator can afford to apply unsuccessfully for research grants and our analyses suggest that funding rates below approximately 20%, commensurate with current NIH and NSF funding, are likely to drive at least half of the active researchers away from federally funded research. We conclude with recommendations and suggestions for individual investigators and for department heads.

  12. Analysis of implicit and explicit lattice sensitivities using DRAGON

    Energy Technology Data Exchange (ETDEWEB)

    Ball, M.R., E-mail: ballmr@mcmaster.ca; Novog, D.R., E-mail: novog@mcmaster.ca; Luxat, J.C., E-mail: luxatj@mcmaster.ca

    2013-12-15

    Highlights: • We developed a way to propagate point-wise perturbations using only WIMS-D4 multigroup data. • The method inherently includes treatment of multi-group implicit sensitivities. • We compared our calculated sensitivities to an industry standard tool (TSUNAMI-1D). • In general, our results agreed well with TSUNAMI-1D. - Abstract: Deterministic lattice physics transport calculations are used extensively within the context of operational and safety analysis of nuclear power plants. As such, the sensitivity and uncertainty in the evaluated nuclear data used to predict neutronic interactions and other key transport phenomena are critical topics for research. Sensitivity analysis of nuclear systems with respect to fundamental nuclear data using multi-energy-group discretization is complicated by the dilution dependency of multi-group macroscopic cross-sections as a result of resonance self-shielding. It has become common to group sensitivities into implicit and explicit effects to aid in understanding the nature of the sensitivities involved in the calculations; however, the overall sensitivity is an integral of these effects. Explicit effects stem from perturbations performed for specific nuclear data for a given isotope at a specific energy, and their direct impact on the end figure of merit. Implicit effects stem from resonance self-shielding, whereby a perturbation can change sensitivities at other energies, for other reactions, or even for other isotopes. Quantification of the implicit sensitivity component involves some manner of treatment of resonance parameters in a way that is self-consistent with perturbations occurring in the associated multi-group cross-sections. A procedure for assessing these implicit effects is described in the context of the Bondarenko method of self-shielding and implemented using a WIMS-D4 multi-group nuclear library and the lattice solver DRAGON. The resulting sensitivity results were compared

  13. Multivariate Statistical Analysis Applied in Wine Quality Evaluation

    Directory of Open Access Journals (Sweden)

    Jieling Zou

    2015-08-01

    Full Text Available This study applies multivariate statistical approaches to wine quality evaluation. With 27 red wine samples, four factors were identified out of 12 parameters by principal component analysis, explaining 89.06% of the total variance of the data. As iterative weights calculated by a BP neural network revealed little difference from weights determined by the information entropy method, the latter was chosen to measure the importance of indicators. Weighted cluster analysis performed well in classifying the sample group further into two sub-clusters. The second cluster of red wine samples, compared with the first, was lighter in color, tasted thinner, and had a fainter bouquet. The weighted TOPSIS method was used to evaluate the quality of wine in each sub-cluster. With the scores obtained, each sub-cluster was divided into three grades. On the whole, the quality of the lighter red wine was slightly better than that of the darker category. This study shows the necessity and usefulness of multivariate statistical techniques in both wine quality evaluation and parameter selection.
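
The information entropy weighting mentioned in this record can be sketched as follows; the indicator values are made up for illustration, not the study's wine data. An indicator whose values barely vary across samples carries little information and therefore receives (almost) no weight:

```python
import math

def entropy_weights(matrix):
    """Entropy-method weights for a samples-by-indicators matrix of
    positive values: weight ~ (1 - entropy) of each indicator column."""
    n, m = len(matrix), len(matrix[0])
    k = 1.0 / math.log(n)
    divergences = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergences.append(1.0 - e)      # higher -> more informative
    s = sum(divergences)
    return [d / s for d in divergences]

# Hypothetical indicators: column 0 is constant, column 1 varies.
scores = [[0.2, 5.0], [0.2, 1.0], [0.2, 3.0]]
w = entropy_weights(scores)   # essentially all weight on column 1
```

In the study these weights then feed the weighted cluster analysis and weighted TOPSIS ranking.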

  14. Sensitivity analysis of a branching process evolving on a network with application in epidemiology

    CERN Document Server

    Hautphenne, Sophie; Delvenne, Jean-Charles; Blondel, Vincent D

    2015-01-01

    We perform an analytical sensitivity analysis for a model of a continuous-time branching process evolving on a fixed network. This allows us to determine the relative importance of the model parameters to the growth of the population on the network. We then apply our results to the early stages of an influenza-like epidemic spreading among a set of cities connected by air routes in the United States. We also consider vaccination and analyze the sensitivity of the total size of the epidemic with respect to the fraction of vaccinated people. Our analysis shows that the epidemic growth is more sensitive to transmission rates within cities than to travel rates between cities. More generally, we highlight the fact that branching processes offer a powerful stochastic modeling tool with analytical formulas for sensitivity that are easy to use in practice.

  15. Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers

    Science.gov (United States)

    Martynov, Denis

    real time. Sensitivity analysis was done to understand and eliminate noise sources of the instrument. The coupling of noise sources to the gravitational wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. Static and adaptive feedforward noise cancellation techniques applied to Advanced LIGO interferometers and tested at the 40m prototype are described in the last part of this thesis. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed. Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last for 3-4 months. This run will be followed by a set of small instrument upgrades that will be installed on a time scale of a few months. The second science run will start in spring 2016 and last for about six months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 higher than that of the initial detectors and keeps improving on a monthly basis, the upcoming science runs have a good chance of making the first direct detection of gravitational waves.

  16. Noise analysis for sensitivity-based structural damage detection

    Institute of Scientific and Technical Information of China (English)

    YIN Tao; ZHU Hong-ping; YU Ling

    2007-01-01

    As vibration-based structural damage detection methods are easily affected by environmental noise, a new statistics-based noise analysis method is proposed, together with the Monte Carlo technique, to investigate the influence of experimental noise in modal data on sensitivity-based damage detection methods. Different from the commonly used random perturbation technique, the proposed technique is deduced directly from the Moore-Penrose generalized inverse of the sensitivity matrix, which not only makes the analysis process more efficient but also allows the influence of noise on both frequencies and mode shapes to be analyzed in a similar way for three commonly used sensitivity-based damage detection methods. A one-story portal frame is adopted to evaluate the efficiency of the proposed noise analysis technique.

  17. Sensitivity analysis for missing data in regulatory submissions.

    Science.gov (United States)

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses have to be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  18. Sensitivity analysis of a sound absorption model with correlated inputs

    Science.gov (United States)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, is studied. Finally, the test results show that correlation has a very important impact on the outcome of sensitivity analysis. The effect of correlation strength among the input variables on the sensitivity analysis is also assessed.

  19. Sensitivity analysis approach to multibody systems described by natural coordinates

    Science.gov (United States)

    Li, Xiufeng; Wang, Yabin

    2014-03-01

    The classical natural coordinate modeling method, which removes the Euler angles and Euler parameters from the governing equations, is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation imposes so many rules for choosing the generalized coordinates that it hinders the automation of modeling. A first-order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. First, a new selection method for natural coordinates is developed. The method introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements, as well as the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed process of first-order direct sensitivity analysis, and the related solving strategy are provided based on the modeling system described above. Finally, in order to verify the validity and accuracy of the method presented, sensitivity analyses of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The test results agree well with those of the finite difference method, with a maximum absolute deviation of less than 3%. The proposed approach is not only convenient for automatic modeling, but also helpful in reducing the complexity of sensitivity analysis, which provides a practical and effective way to obtain sensitivities for the optimization of multibody systems.

  20. Carbon dioxide capture processes: Simulation, design and sensitivity analysis

    DEFF Research Database (Denmark)

    Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul

    2012-01-01

    Carbon dioxide is the main greenhouse gas and its major source is combustion of fossil fuels for power generation. The objective of this study is to carry out a steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. The sensitivity of the performance of the process to the L/G ratio to the absorber, the CO2 lean solvent loadings, and the stripper pressure is presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made.

  1. Sensitivity analysis of the fission gas behavior model in BISON.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard

    2013-05-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.

  3. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    Science.gov (United States)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
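
The variogram analogy underlying VARS can be sketched with a toy directional variogram: gamma(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2] along one coordinate of the factor space. This is a generic illustration on an assumed linear response surface, not the STAR-VARS sampling strategy of the paper:

```python
import random

def directional_variogram(f, ndim, dim, h, n_samples=500, seed=1):
    """Estimate gamma(h) along coordinate `dim` by sampling x uniformly
    in the unit hypercube: 0.5 * mean of (f(x + h*e_dim) - f(x))^2."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(ndim)]
        x2 = list(x)
        x2[dim] += h
        acc += (f(x2) - f(x)) ** 2
    return 0.5 * acc / n_samples

# Toy response surface: strongly sensitive to x0, weakly sensitive to x1.
f = lambda x: 10.0 * x[0] + 0.1 * x[1]
g0 = directional_variogram(f, ndim=2, dim=0, h=0.1)
g1 = directional_variogram(f, ndim=2, dim=1, h=0.1)
```

For this linear surface the variogram along x0 dwarfs the one along x1, which is the kind of directional contrast VARS summarizes across a spectrum of perturbation scales h.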

  4. To apply or not to apply: a survey analysis of grant writing costs and benefits.

    Science.gov (United States)

    von Hippel, Ted; von Hippel, Courtney

    2015-01-01

    We surveyed 113 astronomers and 82 psychologists active in applying for federally funded research on their grant-writing history between January, 2009 and November, 2012. We collected demographic data, effort levels, success rates, and perceived non-financial benefits from writing grant proposals. We find that the average proposal takes 116 PI hours and 55 CI hours to write; although time spent writing was not related to whether the grant was funded. Effort did translate into success, however, as academics who wrote more grants received more funding. Participants indicated modest non-monetary benefits from grant writing, with psychologists reporting a somewhat greater benefit overall than astronomers. These perceptions of non-financial benefits were unrelated to how many grants investigators applied for, the number of grants they received, or the amount of time they devoted to writing their proposals. We also explored the number of years an investigator can afford to apply unsuccessfully for research grants and our analyses suggest that funding rates below approximately 20%, commensurate with current NIH and NSF funding, are likely to drive at least half of the active researchers away from federally funded research. We conclude with recommendations and suggestions for individual investigators and for department heads.

  6. Uncertainty Quantification and Sensitivity Analysis of Transonic Aerodynamics with Geometric Uncertainty

    Directory of Open Access Journals (Sweden)

    Xiaojing Wu

    2017-01-01

    Full Text Available Airfoil geometric uncertainty can generate fluctuations in aerodynamic characteristics. Uncertainty quantification is applied to compute its impact on the aerodynamic characteristics. In addition, the contribution of each uncertainty variable to the aerodynamic characteristics should be computed by uncertainty sensitivity analysis. In this paper, Sobol's analysis is used for uncertainty sensitivity analysis, and a nonintrusive polynomial chaos method is used for uncertainty quantification and Sobol's analysis. It is difficult to describe geometric uncertainty because it requires many input parameters. In order to alleviate the contradiction between variable dimension and computational cost, principal component analysis is introduced to describe the geometric uncertainty of the airfoil. Through this technique, the number of input uncertainty variables can be reduced and typical global deformation modes can be obtained. Uncertainty quantification shows that the flow characteristics of shock waves and boundary layer separation are sensitive to geometric uncertainty in the transonic region, which is the main reason that transonic drag is sensitive to geometric uncertainty. The sensitivity analysis shows that the model can be simplified by eliminating unimportant geometric modes, and it reveals which geometric modes are most important for transonic aerodynamics. This is very helpful for airfoil design.

  7. Correlation network analysis applied to complex biofilm communities.

    Directory of Open Access Journals (Sweden)

    Ana E Duran-Pinedo

    Full Text Available The complexity of the human microbiome makes it difficult to reveal organizational principles of the community and even more challenging to generate testable hypotheses. It has been suggested that in the gut microbiome species such as Bacteroides thetaiotaomicron are keystone in maintaining the stability and functional adaptability of the microbial community. In this study, we investigate the interspecies associations in a complex microbial biofilm applying systems biology principles. Using correlation network analysis we identified bacterial modules that represent important microbial associations within the oral community. We used dental plaque as a model community because of its high diversity and the well known species-species interactions that are common in the oral biofilm. We analyzed samples from healthy individuals as well as from patients with periodontitis, a polymicrobial disease. Using results obtained by checkerboard hybridization on cultivable bacteria we identified modules that correlated well with microbial complexes previously described. Furthermore, we extended our analysis using the Human Oral Microbe Identification Microarray (HOMIM), which includes a large number of bacterial species, among them uncultivated organisms present in the mouth. Two distinct microbial communities appeared in healthy individuals while there was one major type in disease. Bacterial modules in all communities did not overlap, indicating that bacteria were able to effectively re-associate with new partners depending on the environmental conditions. We then identified hubs that could act as keystone species in the bacterial modules. Based on those results we then cultured a not-yet-cultivated microorganism, Tannerella sp. OT286 (clone BU063). After two rounds of enrichment by a selected helper (Prevotella oris OT311) we obtained colonies of Tannerella sp. OT286 growing on blood agar plates. This system-level approach would open the possibility of

  8. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis

    CERN Document Server

    Niemeyer, Kyle E; Raju, Mandhapati P

    2016-01-01

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane ...
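
The error-propagation idea behind DRGEP can be illustrated as a max-product graph search: the overall interaction coefficient of a species is the maximum over all graph paths from the target of the product of direct interaction coefficients along the path. The species names and coefficients below are hypothetical, and the direct coefficients (which in practice come from reaction-rate analysis) are simply given:

```python
import heapq

def drgep_coefficients(direct, target):
    """Overall interaction coefficients R[s]: max over paths from `target`
    to s of the product of direct coefficients (max-product Dijkstra)."""
    R = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        neg_r, sp = heapq.heappop(heap)
        r = -neg_r
        if r < R.get(sp, 0.0):
            continue                      # stale heap entry
        for nbr, r_direct in direct.get(sp, {}).items():
            cand = r * r_direct
            if cand > R.get(nbr, 0.0):
                R[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return R

# Hypothetical direct interaction coefficients between species.
direct = {
    "fuel": {"A": 0.9, "B": 0.2},
    "A": {"B": 0.8, "C": 0.05},
    "B": {"C": 0.1},
}
R = drgep_coefficients(direct, "fuel")
kept = {s for s, r in R.items() if r >= 0.1}   # prune below a threshold
```

Here B is kept because the indirect path fuel -> A -> B (0.9 * 0.8 = 0.72) dominates the weak direct link, while C falls below the threshold and would be removed from the skeletal mechanism.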

  9. An uncertainty analysis of the PVT gauging method applied to sub-critical cryogenic propellant tanks

    Energy Technology Data Exchange (ETDEWEB)

    Van Dresar, Neil T. [NASA Glenn Research Center, Cleveland, OH (United States)

    2004-08-01

    The PVT (pressure, volume, temperature) method of liquid quantity gauging in low-gravity is based on gas law calculations assuming conservation of pressurant gas within the propellant tank and the pressurant supply bottle. There is interest in applying this method to cryogenic propellant tanks since the method requires minimal additional hardware or instrumentation. To use PVT with cryogenic fluids, a non-condensable pressurant gas (helium) is required. With cryogens, there will be a significant amount of propellant vapor mixed with the pressurant gas in the tank ullage. This condition, along with the high sensitivity of propellant vapor pressure to temperature, makes the PVT method susceptible to substantially greater measurement uncertainty than is the case with less volatile propellants. A conventional uncertainty analysis is applied to example cases of liquid hydrogen and liquid oxygen tanks. It appears that the PVT method may be feasible for liquid oxygen. Acceptable accuracy will be more difficult to obtain with liquid hydrogen. (Author)
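
The ideal-gas bookkeeping behind PVT gauging can be sketched as follows: with the helium moles conserved, the ullage volume follows from the helium partial pressure, and the liquid volume is the remainder of the tank. The tank numbers are illustrative, not values from the study:

```python
# PVT gauging sketch under the ideal-gas assumption.
R = 8.314  # universal gas constant, J/(mol K)

def liquid_volume(tank_volume, n_helium, tank_pressure, vapor_pressure, temperature):
    """Liquid volume (m^3) from conservation of helium pressurant."""
    p_helium = tank_pressure - vapor_pressure          # helium partial pressure, Pa
    ullage_volume = n_helium * R * temperature / p_helium
    return tank_volume - ullage_volume

# Hypothetical liquid-oxygen tank: 10 m^3, 50 mol He, 300 kPa total pressure,
# 150 kPa oxygen vapor pressure at roughly 100 K.
v_liq = liquid_volume(10.0, 50.0, 300e3, 150e3, 100.0)
```

The abstract's point about measurement uncertainty shows up directly here: vapor pressure is steeply temperature-dependent, so a small temperature error shifts p_helium and hence the inferred ullage and liquid volumes, much more so for hydrogen than for oxygen.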

  10. SUCCESS CONCEPT ANALYSIS APPLIED TO THE INFORMATION TECHNOLOGY PROJECT MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Cassio C. Montenegro Duarte

    2012-05-01

    Full Text Available This study evaluates the concept of success in project management as applicable to the IT universe, starting from the classical theory associated with project management techniques. It applies this theoretical analysis to the context of information technology in enterprises, as well as the classic literature of traditional project management, focusing on its application to business information technology. From the literature reviewed in the first part of the study, four propositions were prepared, which formed the basis for field research with three large companies that develop Information Technology projects. The methodology adopted a multiple case study design. Empirical evidence suggests that the concept of success found in the classical project management literature fits the environment of IT project management. The study also showed that it is possible to create a standard model for IT projects and replicate it in future derivative projects, which depends on learning acquired through a long and continuous process and on the sponsorship of senior management, and ultimately results in the model's absorption into the company culture.

  11. Applying Stylometric Analysis Techniques to Counter Anonymity in Cyberspace

    Directory of Open Access Journals (Sweden)

    Jianwen Sun

    2012-02-01

    Full Text Available Due to the ubiquitous nature of cyberspace and the anonymity abuses it enables, it is difficult to trace criminal identities in cybercrime investigations. Writeprint identification offers a valuable tool to counter anonymity by applying stylometric analysis techniques to help identify individuals based on textual traces. In this study, a framework for online writeprint identification is proposed. Variable-length character n-grams are used to represent the author's writing style. The technique of IG seeded GA based feature selection for Ensemble (IGAE) is also developed to build an identification model based on individual author level features. Several specific components for dealing with the individual feature set are integrated to improve the performance. The proposed feature and technique are evaluated on a real-world data set encompassing reviews posted by 50 Amazon customers. The experimental results show the effectiveness of the proposed framework, with accuracy over 94% for 20 authors and over 80% for 50. Compared with the baseline technique (Support Vector Machine), higher performance is achieved by using IGAE, resulting in a 2% and 8% improvement over SVM for 20 and 50 authors, respectively. Moreover, it has been shown that IGAE is more scalable in terms of the number of authors than author group level based methods.
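
A character n-gram profile of the kind used as a writeprint feature can be computed in a few lines; this is a generic stylometric sketch on a made-up sentence, not the paper's IGAE pipeline or feature selection:

```python
from collections import Counter

def char_ngram_profile(text, n=3, top=5):
    """Relative frequencies of the `top` most common character n-grams,
    a common stylometric feature for authorship attribution."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.most_common(top)}

profile = char_ngram_profile("the quick brown fox jumps over the lazy dog", n=3)
```

In a full writeprint system, profiles like this would be built per author, pruned by feature selection (the IG/GA step in the paper), and fed to an ensemble classifier.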

  12. Applied genre analysis: a multi-perspective model

    Directory of Open Access Journals (Sweden)

    Vijay K Bhatia

    2002-04-01

    Full Text Available Genre analysis can be viewed from two different perspectives: it may be seen as a reflection of the complex realities of the world of institutionalised communication, or it may be seen as a pedagogically effective and convenient tool for the design of language teaching programmes, often situated within simulated contexts of classroom activities. This paper makes an attempt to understand and resolve the tension between these two seemingly contentious perspectives to answer the question: "Is generic description a reflection of reality, or a convenient fiction invented by applied linguists?". The paper also discusses issues related to the nature and use of linguistic description in a genre-based educational enterprise, claiming that instead of using generic descriptions as models for linguistic reproduction of conventional forms to respond to recurring social contexts, as is often the case in many communication-based curriculum contexts, they can be used as an analytical resource to understand and manipulate complex inter-generic and multicultural realisations of professional discourse, which will enable learners to use generic knowledge to respond to novel social contexts and also to create new forms of discourse to achieve pragmatic success as well as other powerful human agendas.

  13. Applying DNA computation to intractable problems in social network analysis.

    Science.gov (United States)

    Chen, Rick C S; Yang, Stephen J H

    2010-09-01

    From ancient times to the present day, social networks have played an important role in the formation of various organizations for a range of social behaviors. As such, social networks inherently describe the complicated relationships between elements around the world. Based on mathematical graph theory, social network analysis (SNA) has been developed in and applied to various fields such as Web 2.0 for Web applications and product developments in industries, etc. However, some definitions of SNA, such as finding a clique, N-clique, N-clan, N-club and K-plex, are NP-complete problems, which are not easily solved via traditional computer architecture. These challenges have restricted the uses of SNA. This paper provides DNA-computing-based approaches with inherently high information density and massive parallelism. Using these approaches, we aim to solve the three primary problems of social networks: N-clique, N-clan, and N-club. Their accuracy and feasible time complexities discussed in the paper will demonstrate that DNA computing can be used to facilitate the development of SNA.
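The NP-complete queries this record targets can be stated in a few lines of conventional code; the brute-force enumeration below (a hypothetical Python sketch, not the DNA-computing procedure) makes concrete the exponential cost that motivates DNA computing's massive parallelism.

```python
from itertools import combinations

def find_cliques_of_size(adj, k):
    """Brute-force enumeration of k-cliques in an undirected graph.

    adj: dict mapping node -> set of neighbours. This check is
    exponential in the worst case, which is why the paper turns to
    DNA computing for such NP-complete SNA queries.
    """
    nodes = sorted(adj)
    cliques = []
    for combo in combinations(nodes, k):
        # A combo is a clique iff every pair of its nodes is adjacent.
        if all(v in adj[u] for u, v in combinations(combo, 2)):
            cliques.append(combo)
    return cliques
```

N-clique, N-clan and N-club relax "adjacent" to "within distance N", which only enlarges the search space; the DNA-computing approach encodes all candidate subsets as strands and filters them in parallel instead of iterating.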

  14. Applying importance-performance analysis to patient safety culture.

    Science.gov (United States)

    Lee, Yii-Ching; Wu, Hsin-Hung; Hsieh, Wan-Lin; Weng, Shao-Jen; Hsieh, Liang-Po; Huang, Chih-Hsuan

    2015-01-01

    The Sexton et al. (2006) safety attitudes questionnaire (SAQ) has been widely used to assess staff attitudes towards patient safety in healthcare organizations. However, to date there have been few studies that discuss perceptions of patient safety from both hospital staff and upper management. The purpose of this paper is to improve and develop better strategies regarding patient safety in healthcare organizations. The Chinese version of the SAQ, based on the Taiwan Joint Commission on Hospital Accreditation, is used to evaluate the perceptions of hospital staff. The current study then applies the importance-performance analysis technique to identify the major strengths and weaknesses of the safety culture. The results show that teamwork climate, safety climate, job satisfaction, stress recognition and working conditions are major strengths and should be maintained in order to provide a better patient safety culture. On the contrary, perceptions of management and hospital handoffs and transitions are important weaknesses and should be improved immediately. Research limitations/implications - The research is restricted in generalizability. The assessment of patient safety culture covered only physicians and registered nurses. It would be interesting to further evaluate other staff's (e.g. technicians, pharmacists and others) opinions regarding patient safety culture in the hospital. Few studies have clearly evaluated the perceptions of healthcare organization management regarding patient safety culture. Healthcare managers can take more effective actions to improve the level of patient safety by investigating the key characteristics (either strengths or weaknesses) that healthcare organizations should focus on.
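Importance-performance analysis itself reduces to placing each dimension into one of four quadrants around the grand means of importance and performance. A minimal sketch, with invented SAQ-like dimension scores (the numbers are illustrative, not the study's data):

```python
def ipa_quadrants(items):
    """Classify items by importance-performance analysis.

    items: dict name -> (importance, performance). The grand means
    split the plane into the four classic IPA quadrants.
    """
    imps = [v[0] for v in items.values()]
    perfs = [v[1] for v in items.values()]
    mi, mp = sum(imps) / len(imps), sum(perfs) / len(perfs)
    labels = {}
    for name, (imp, perf) in items.items():
        if imp >= mi and perf >= mp:
            labels[name] = "keep up the good work"
        elif imp >= mi:
            labels[name] = "concentrate here"      # high importance, low performance
        elif perf >= mp:
            labels[name] = "possible overkill"     # low importance, high performance
        else:
            labels[name] = "low priority"
    return labels
```

In the study's terms, dimensions landing in "concentrate here" (such as handoffs and transitions) are the weaknesses demanding immediate improvement.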

  15. Sensitivity analysis of a two-dimensional quantitative microbiological risk assessment: keeping variability and uncertainty separated.

    Science.gov (United States)

    Busschaert, Pieter; Geeraerd, Annemie H; Uyttendaele, Mieke; Van Impe, Jan F

    2011-08-01

    The aim of quantitative microbiological risk assessment is to estimate the risk of illness caused by the presence of a pathogen in a food type, and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and when possible it is advised to characterize variability separately from uncertainty. Sensitivity analysis indicates to which of the input variables the outcome of a quantitative microbiological risk assessment is most sensitive. Although a number of methods exist to apply sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis when a risk assessment includes a separate characterization of variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo-randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods, an ANOVA-like model and Sobol sensitivity indices, are used to obtain and compare the impact of variability and of uncertainty of all input variables, and of model uncertainty and scenario uncertainty. As a case study, this methodology is applied to a risk assessment estimating the risk of contracting listeriosis due to consumption of deli meats.
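The separation of variability from uncertainty described here is typically implemented as a second-order (two-dimensional) Monte Carlo simulation: an outer loop over epistemic uncertainty and an inner loop over variability. A hedged sketch with placeholder input distributions, not the deli-meat model itself:

```python
import random

def two_dim_mc(risk_model, n_unc=200, n_var=500, seed=1):
    """Second-order (2-D) Monte Carlo: the outer loop samples epistemic
    uncertainty, the inner loop samples variability, so the two stay
    separated as advised in QMRA.

    risk_model(unc, var) -> illness probability for one serving.
    The sampled inputs below are illustrative placeholders.
    """
    rng = random.Random(seed)
    outer = []
    for _ in range(n_unc):
        unc = rng.gauss(0.0, 1.0)                # e.g. uncertain dose-response slope
        inner = []
        for _ in range(n_var):
            var = rng.lognormvariate(0.0, 0.5)   # e.g. variable contamination level
            inner.append(risk_model(unc, var))
        outer.append(sum(inner) / n_var)         # mean risk for this uncertainty draw
    return outer  # distribution over mean risk, reflecting uncertainty only
```

The proposed sensitivity procedure then relates each final risk estimate back to where the pseudo-random draws fell within the uncertainty and variability distributions.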

  16. Handbook of Systems Analysis: Volume 1. Overview. Chapter 2. The Genesis of Applied Systems Analysis

    OpenAIRE

    1981-01-01

    The International Institute for Applied Systems Analysis is preparing a Handbook of Systems Analysis, which will appear in three volumes: Volume 1: Overview is aimed at a widely varied audience of producers and users of systems analysis studies. Volume 2: Methods is aimed at systems analysts and other members of systems analysis teams who need basic knowledge of methods in which they are not expert; this volume contains introductory overviews of such methods. Volume 3: Cases co...

  17. A comprehensive sensitivity analysis of central-loop MRS data

    Science.gov (United States)

    Behroozmand, Ahmad; Auken, Esben; Dalgaard, Esben; Rejkjaer, Simon

    2014-05-01

    sensitivity kernels of different separated-loop MRS soundings are studied and compared with that of the conventional coincident-loop sounding. As a result, an optimal S/N is achieved using central-loop configuration. 2) The posterior parameter determination of central-loop MRS data (both synthetic and field examples) is studied in a joint MRS and TEM data analysis scheme. In the typical 1D earth parametrization, a complete MRS measurement forms the 1D MRS sensitivity kernels as a function of depth and pulse moment Q; here referred to as a 1D kernel structure. For the conventional coincident-loop configuration, the 1D kernel structure covers the excited earth's volume throughout the applied Qs. As a result, the shallower parts of the subsurface are mainly sampled using smaller Qs and the deeper parts are mainly sampled using higher Qs. For central-loop configuration, however, the 1D kernel structure represents a superior behavior compared to coincident loop configuration. The results of this study highlight advantages of central-loop MRS data and suggest that it can be beneficial to develop MRS instrumentation where the receiver system is separated from the transmitter system.

  18. Sensitivity analysis in a Lassa fever deterministic mathematical model

    Science.gov (United States)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
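Sensitivity rankings like these are commonly computed as normalised forward sensitivity indices, (dR0/dp)(p/R0). A generic finite-difference sketch follows; the parameter names are illustrative, not those of the Lassa fever model.

```python
def sensitivity_index(r0, params, name, h=1e-6):
    """Normalised forward sensitivity index of R0 to one parameter:
    (dR0/dp) * (p / R0), with the derivative approximated by a
    central difference. r0 is a function of a parameter dict."""
    p = dict(params)
    base = r0(p)
    x = params[name]
    p[name] = x * (1 + h)
    up = r0(p)
    p[name] = x * (1 - h)
    down = r0(p)
    deriv = (up - down) / (2 * x * h)
    return deriv * x / base
```

An index of +1 means a 10% increase in the parameter raises R0 by about 10%; ranking parameters by the absolute value of this index yields orderings like the one reported (immigration first, then recovery rate, then contact).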

  19. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    Science.gov (United States)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
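The idea of annotating tolerances directly in input-file fields can be sketched with a small parser: find every `value +/- tol` token on a line and replace it with a random draw for one Monte Carlo realization. A hypothetical illustration, not the authors' implementation:

```python
import random
import re

# Matches tokens like "5.25 +/- 0.01" anywhere on a line.
TOL = re.compile(r"(-?\d+(?:\.\d+)?)\s*\+/-\s*(\d+(?:\.\d+)?)")

def blur_line(line, rng=None):
    """Replace every 'value +/- tol' token with a random draw from a
    uniform distribution on [value - tol, value + tol], leaving the
    rest of the line untouched. Because only the token is parsed, the
    approach is independent of each code's input-file format."""
    rng = rng if rng is not None else random.Random(0)
    def draw(m):
        v, t = float(m.group(1)), float(m.group(2))
        return repr(rng.uniform(v - t, v + t))
    return TOL.sub(draw, line)
```

Running this over an annotated input file once per Monte Carlo sample yields the "blurred" input decks, without any code-specific parsing, which is the portability argument made in the abstract.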

  20. Sensitivity analysis for reliable design verification of nuclear turbosets

    Energy Technology Data Exchange (ETDEWEB)

    Zentner, Irmela, E-mail: irmela.zentner@edf.f [Lamsid-Laboratory for Mechanics of Aging Industrial Structures, UMR CNRS/EDF, 1, avenue Du General de Gaulle, 92141 Clamart (France); EDF R and D-Structural Mechanics and Acoustics Department, 1, avenue Du General de Gaulle, 92141 Clamart (France); Tarantola, Stefano [Joint Research Centre of the European Commission-Institute for Protection and Security of the Citizen, T.P. 361, 21027 Ispra (Italy); Rocquigny, E. de [Ecole Centrale Paris-Applied Mathematics and Systems Department (MAS), Grande Voie des Vignes, 92 295 Chatenay-Malabry (France)

    2011-03-15

    In this paper, we present an application of sensitivity analysis for design verification of nuclear turbosets. Before the acquisition of a turbogenerator, energy power operators perform independent design assessment in order to assure safe operating conditions of the new machine in its environment. Variables of interest are related to the vibration behaviour of the machine: its eigenfrequencies and dynamic sensitivity to unbalance. In the framework of design verification, epistemic uncertainties are preponderant. This lack of knowledge is due to inexistent or imprecise information about the design as well as to interaction of the rotating machinery with supporting and sub-structures. Sensitivity analysis enables the analyst to rank sources of uncertainty with respect to their importance and, possibly, to screen out insignificant sources of uncertainty. Further studies, if necessary, can then focus on predominant parameters. In particular, the constructor can be asked for detailed information only about the most significant parameters.

  1. Applied genomics: data mining reveals species-specific malaria diagnostic targets more sensitive than 18S rRNA.

    Science.gov (United States)

    Demas, Allison; Oberstaller, Jenna; DeBarry, Jeremy; Lucchi, Naomi W; Srinivasamoorthy, Ganesh; Sumari, Deborah; Kabanywanyi, Abdunoor M; Villegas, Leopoldo; Escalante, Ananias A; Kachur, S Patrick; Barnwell, John W; Peterson, David S; Udhayakumar, Venkatachalam; Kissinger, Jessica C

    2011-07-01

    Accurate and rapid diagnosis of malaria infections is crucial for implementing species-appropriate treatment and saving lives. Molecular diagnostic tools are the most accurate and sensitive method of detecting Plasmodium, differentiating between Plasmodium species, and detecting subclinical infections. Despite available whole-genome sequence data for Plasmodium falciparum and P. vivax, the majority of PCR-based methods still rely on the 18S rRNA gene targets. Historically, this gene has served as the best target for diagnostic assays. However, it is limited in its ability to detect mixed infections in multiplex assay platforms without the use of nested PCR. New diagnostic targets are needed. Ideal targets will be species specific, highly sensitive, and amenable to both single-step and multiplex PCRs. We have mined the genomes of P. falciparum and P. vivax to identify species-specific, repetitive sequences that serve as new PCR targets for the detection of malaria. We show that these targets (Pvr47 and Pfr364) exist in 14 to 41 copies and are more sensitive than 18S rRNA when utilized in a single-step PCR. Parasites are routinely detected at levels of 1 to 10 parasites/μl. The reaction can be multiplexed to detect both species in a single reaction. We have examined 7 P. falciparum strains and 91 P. falciparum clinical isolates from Tanzania and 10 P. vivax strains and 96 P. vivax clinical isolates from Venezuela, and we have verified a sensitivity and specificity of ∼100% for both targets compared with a nested 18S rRNA approach. We show that bioinformatics approaches can be successfully applied to identify novel diagnostic targets and improve molecular methods for pathogen detection. These novel targets provide a powerful alternative molecular diagnostic method for the detection of P. falciparum and P. vivax in conventional or multiplex PCR platforms.
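Mining species-specific repeats of the kind described (many copies in one genome, absent from the other) can be caricatured with k-mer counting. A toy sketch; the actual study used whole-genome analysis pipelines, and the thresholds below are invented.

```python
from collections import Counter

def repeat_targets(genome, other, k=20, min_copies=14):
    """Find k-mers occurring at least min_copies times in one genome
    and never in the other -- the spirit of mining Pvr47/Pfr364-like
    species-specific repeat targets for PCR."""
    mine = Counter(genome[i:i + k] for i in range(len(genome) - k + 1))
    theirs = {other[i:i + k] for i in range(len(other) - k + 1)}
    return {s: c for s, c in mine.items()
            if c >= min_copies and s not in theirs}
```

High copy number is what buys sensitivity in a single-step PCR (each parasite genome carries 14 to 41 template copies), while absence from the other species' genome preserves specificity in a multiplex reaction.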

  2. Automated Sensitivity Analysis of Interplanetary Trajectories for Optimal Mission Design

    Science.gov (United States)

    Knittel, Jeremy; Hughes, Kyle; Englander, Jacob; Sarli, Bruno

    2017-01-01

    This work describes a suite of Python tools known as the Python EMTG Automated Trade Study Application (PEATSA). PEATSA was written to automate the operation of trajectory optimization software, simplify the process of performing sensitivity analysis, and was ultimately found to out-perform a human trajectory designer in unexpected ways. These benefits will be discussed and demonstrated on sample mission designs.

  3. Lower extremity angle measurement with accelerometers - error and sensitivity analysis

    NARCIS (Netherlands)

    Willemsen, Antoon Th.M.; Frigo, Carlo; Boom, Herman B.K.

    1991-01-01

    The use of accelerometers for angle assessment of the lower extremities is investigated. This method is evaluated by an error-and-sensitivity analysis using healthy-subject data. Of three potential error sources (the reference system, the accelerometers, and the model assumptions), the last is found

  4. Omitted Variable Sensitivity Analysis with the Annotated Love Plot

    Science.gov (United States)

    Hansen, Ben B.; Fredrickson, Mark M.

    2014-01-01

    The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…

  5. Carbon dioxide capture processes: Simulation, design and sensitivity analysis

    DEFF Research Database (Denmark)

    Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul

    2012-01-01

    Carbon dioxide is the main greenhouse gas and its major source is combustion of fossil fuels for power generation. The objective of this study is to carry out the steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First...

  6. Determination of temperature of moving surface by sensitivity analysis

    CERN Document Server

    Farhanieh, B

    2002-01-01

    In this paper, sensitivity analysis in inverse problem solutions is employed to estimate the temperature of a moving surface. A moving finite element method is used for spatial discretization. Time derivatives are approximated using the Crank-Nicolson method. The accuracy of the solution is assessed by a simulation method. The convergence domain is investigated for the determination of the temperature of a solid fuel.

  7. Sensitivity analysis on parameters and processes affecting vapor intrusion risk

    NARCIS (Netherlands)

    Picone, S.; Valstar, J.R.; Gaans, van P.; Grotenhuis, J.T.C.; Rijnaarts, H.H.M.

    2012-01-01

    A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the v

  8. Detecting tipping points in ecological models with sensitivity analysis

    NARCIS (Netherlands)

    Broeke, G.A. ten; Voorn, van G.A.K.; Kooi, B.W.; Molenaar, J.

    2016-01-01

    Simulation models are commonly used to understand and predict the development of ecological systems, for instance to study the occurrence of tipping points and their possible ecological effects. Sensitivity analysis is a key tool in the study of model responses to changes in conditions. The applicabi

  9. Methods for global sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, Evelyne A.; Bokkers, Eddy; Heijungs, Reinout; Boer, de Imke J.M.

    2017-01-01

    Purpose: Input parameters required to quantify environmental impact in life cycle assessment (LCA) can be uncertain due to e.g. temporal variability or unknowns about the true value of emission factors. Uncertainty of environmental impact can be analysed by means of a global sensitivity analysis to

  10. Design tradeoff studies and sensitivity analysis. Appendix B

    Energy Technology Data Exchange (ETDEWEB)

    1979-05-25

    The results of the design trade-off studies and the sensitivity analysis of Phase I of the Near Term Hybrid Vehicle (NTHV) Program are presented. The effects of variations in the design of the vehicle body, propulsion systems, and other components on vehicle power, weight, cost, and fuel economy and an optimized hybrid vehicle design are discussed. (LCL)

  11. Clinical usefulness of the clock drawing test applying rasch analysis in predicting of cognitive impairment.

    Science.gov (United States)

    Yoo, Doo Han; Lee, Jae Shin

    2016-07-01

    [Purpose] This study examined the clinical usefulness of the clock drawing test applying Rasch analysis for predicting the level of cognitive impairment. [Subjects and Methods] A total of 187 stroke patients with cognitive impairment were enrolled in this study. The 187 patients were evaluated with the clock drawing test developed through Rasch analysis, along with the mini-mental state examination as a cognitive evaluation tool. An analysis of variance was performed to examine the significance of the mini-mental state examination and the clock drawing test according to the general characteristics of the subjects. Receiver operating characteristic analysis was performed to determine the cutoff point for cognitive impairment and to calculate the sensitivity and specificity values. [Results] Comparison of the clock drawing test with the mini-mental state examination showed significant differences according to gender, age, education, and affected side. A total CDT score of 10.5, which was selected as the cutoff point to identify cognitive impairment, showed sensitivity, specificity, Youden index, positive predictive, and negative predictive values of 86.4%, 91.5%, 0.8, 95%, and 88.2%, respectively. [Conclusion] The clock drawing test is believed to be useful in assessments and interventions based on its excellent ability to identify cognitive disorders.
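The cutoff selection reported here follows the standard ROC recipe: scan candidate cutoffs and keep the one maximising the Youden index (sensitivity + specificity - 1). A generic sketch, assuming lower scores indicate impairment (an assumption for illustration, with invented data in the test):

```python
def youden_cutoff(scores, labels):
    """Return (cutoff, Youden index) maximising J = sens + spec - 1.

    labels: 1 = impaired. Scores at or below the cutoff are classified
    as impaired, matching a 'lower score = worse' convention."""
    best = (None, -1.0)
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s <= c and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s > c and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s > c and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s <= c and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best[1]:
            best = (c, j)
    return best
```

With real data the same scan yields the reported operating point (cutoff 10.5, sensitivity 86.4%, specificity 91.5%, Youden index 0.8).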

  12. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling: GEOSTATISTICAL SENSITIVITY ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Song, Xuehang [Pacific Northwest National Laboratory, Richland Washington USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland Washington USA

    2017-05-01

    Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional, spatially distributed parameters.
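Variance-based indices of the kind used here can be estimated with a pick-freeze Monte Carlo scheme. Below is a toy first-order Sobol estimator (Jansen form) for independent uniform inputs, a readable stand-in for the paper's geostatistics-informed hierarchical decomposition:

```python
import random

def sobol_first_order(model, d, n=20000, seed=42):
    """Monte Carlo (pick-freeze) estimate of first-order Sobol indices
    for a model with d independent U(0,1) inputs, using the Jansen
    estimator."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    s = []
    for i in range(d):
        # A with its i-th column replaced by B's i-th column.
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # Jansen: S_i = 1 - E[(y_B - y_ABi)^2] / (2 Var)
        s.append(1 - sum((p - q) ** 2 for p, q in zip(yB, yABi)) / (2 * n * var))
    return s
```

The cost scales with the number of model runs, n(d + 2) here, which is exactly the burden the paper's hierarchy-plus-geostatistics construction is designed to reduce for spatially correlated parameter fields.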

  13. Integrative "omic" analysis for tamoxifen sensitivity through cell based models.

    Directory of Open Access Journals (Sweden)

    Liming Weng

    Full Text Available It has long been observed that tamoxifen sensitivity varies among breast cancer patients. Further, ethnic differences in tamoxifen therapy between Caucasian and African American patients have also been reported. Since most studies have focused on Caucasian people, we sought to comprehensively evaluate genetic variants related to tamoxifen therapy in African-derived samples. An integrative "omic" approach developed by our group was used to investigate relationships among endoxifen (an active metabolite of tamoxifen) sensitivity, SNP genotype, mRNA and microRNA expression in 58 HapMap YRI lymphoblastoid cell lines. We identified 50 SNPs that associate with cellular sensitivity to endoxifen through their effects on the expression of 34 genes and 30 microRNAs. Some of these findings are shared in both Caucasian and African samples, while others are unique to the African samples. Among the genes/microRNAs identified in both ethnic groups, the expression of TRAF1 is also correlated with tamoxifen sensitivity in a collection of 44 breast cancer cell lines. Further, knock-down of TRAF1 and over-expression of hsa-let-7i confirmed the roles of hsa-let-7i and TRAF1 in increasing tamoxifen sensitivity in the ZR-75-1 breast cancer cell line. Our integrative omic analysis facilitated the discovery of pharmacogenomic biomarkers that potentially affect tamoxifen sensitivity.

  14. Differentially Private Data Analysis of Social Networks via Restricted Sensitivity

    CERN Document Server

    Blocki, Jeremiah; Datta, Anupam; Sheffet, Or

    2012-01-01

    We introduce the notion of restricted sensitivity as an alternative to global and smooth sensitivity to improve accuracy in differentially private data analysis. The definition of restricted sensitivity is similar to that of global sensitivity except that instead of quantifying over all possible datasets, we take advantage of any beliefs about the dataset that a querier may have, to quantify over a restricted class of datasets. Specifically, given a query f and a hypothesis H about the structure of a dataset D, we show generically how to transform f into a new query f_H whose global sensitivity (over all datasets including those that do not satisfy H) matches the restricted sensitivity of the query f. Moreover, if the belief of the querier is correct (i.e., D is in H) then f_H(D) = f(D). If the belief is incorrect, then f_H(D) may be inaccurate. We demonstrate the usefulness of this notion by considering the task of answering queries regarding social-networks, which we model as a combination of a graph and a ...

  15. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    Science.gov (United States)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analysis studies are not accurate enough and have limited reference value, because their mathematical models are relatively simple, changes of the load and of the initial displacement of the piston are ignored, and no experimental verification is conducted. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations respectively, the expression of the sensitivity equations based on the nonlinear mathematical model is obtained. According to the structure parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, the simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink simulation platform with displacement steps of 2 mm, 5 mm and 10 mm, respectively. The simulation results indicate that the developed nonlinear mathematical model is sufficient, based on comparison of the characteristic curves of the experimental and simulated step responses under different constant loads. Then, the sensitivity function time-history curves of seventeen parameters are obtained from each state vector time-history curve of the step response characteristic. The maximum value of the displacement variation percentage and the sum of the absolute values of displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and change rules are analyzed. Then the sensitivity
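The two sensitivity indexes described (the maximum displacement variation percentage and the sum of absolute variations over the sampling time) can be sketched generically; the `simulate` interface below is hypothetical, standing in for one run of the hydraulic drive unit model.

```python
def sensitivity_indexes(simulate, params, name, delta=0.1):
    """Perturb one parameter by a relative delta and compare output
    trajectories, returning the two indexes used in the paper's
    spirit: peak percentage change and summed absolute change.

    simulate: maps a parameter dict to a list of sampled outputs
    (hypothetical interface for one simulation run)."""
    base = simulate(params)
    p = dict(params)
    p[name] = params[name] * (1 + delta)
    pert = simulate(p)
    diffs = [abs(b - q) for b, q in zip(base, pert)]
    peak = max(d / abs(b) * 100 for d, b in zip(diffs, base) if b)
    total = sum(diffs)
    return peak, total
```

Repeating this for each of the seventeen parameters and plotting the two indexes as histograms reproduces the kind of per-parameter ranking the abstract reports.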

  16. Using a variance-based sensitivity analysis for analyzing the relation between measurements and unknown parameters of a physical model

    Science.gov (United States)

    Zhao, J.; Tiede, C.

    2011-05-01

    An implementation of uncertainty analysis (UA) and quantitative global sensitivity analysis (SA) is applied to the non-linear inversion of gravity changes and three-dimensional displacement data which were measured in an active volcanic area. A didactic example is included to illustrate the computational procedure. The main emphasis is placed on the extended Fourier amplitude sensitivity test (E-FAST). This method produces the total sensitivity indices (TSIs), so that all interactions between the unknown input parameters are taken into account. The possible correlations between the output and the input parameters can be evaluated by uncertainty analysis. Uncertainty analysis results indicate the general fit between the physical model and the measurements. Results of the sensitivity analysis show quite different sensitivities for the measured changes as they relate to the unknown parameters of a physical model for an elastic-gravitational source. Assuming a fixed number of executions, thirty different seeds are used to determine the stability of this method.
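Total sensitivity indices (TSIs) of the kind E-FAST produces can also be estimated by plain pick-freeze Monte Carlo, which is easier to read than the frequency-domain machinery. A sketch assuming independent U(0,1) inputs (a simplification; it is not the E-FAST algorithm itself):

```python
import random

def total_sensitivity_indices(model, d, n=20000, seed=7):
    """Jansen pick-freeze estimate of total sensitivity indices,
    which account for each input's main effect plus all of its
    interactions, for a model with d independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [model(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    tsis = []
    for i in range(d):
        # A with its i-th column replaced by B's i-th column.
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # Jansen: ST_i = E[(y_A - y_ABi)^2] / (2 Var)
        tsis.append(sum((p - q) ** 2 for p, q in zip(yA, yABi)) / (2 * n * var))
    return tsis
```

For a purely additive model the TSIs coincide with the first-order indices; any excess of a TSI over the first-order index signals the interaction effects that motivate using E-FAST in the inversion study.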

  17. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    Science.gov (United States)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.

  18. Application of Sensitivity Analysis in Design of Sustainable Buildings

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik; Hesselholt, Allan Tind

    2007-01-01

    satisfies the design requirements and objectives. In the design of sustainable Buildings it is beneficial to identify the most important design parameters in order to develop more efficiently alternative design solutions or reach optimized design solutions. A sensitivity analysis makes it possible...... to identify the most important parameters in relation to building performance and to focus design and optimization of sustainable buildings on these fewer, but most important parameters. The sensitivity analyses will typically be performed at a reasonably early stage of the building design process, where...

  19. Rethinking Sensitivity Analysis of Nuclear Simulations with Topology

    Energy Technology Data Exchange (ETDEWEB)

    Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci

    2016-01-01

    In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.
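The Morse-Smale idea, partitioning the domain into monotonic regions and fitting an interpretable linear model in each, can be caricatured in one dimension: split the data at local extrema and run least squares per segment. A toy sketch, not the RAVEN implementation:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one segment."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

def morse_smale_1d(xs, ys):
    """Split sorted 1-D samples at local extrema (sign changes of the
    finite-difference slope), then fit one linear model per monotonic
    segment -- a 1-D analogue of Morse-Smale regression."""
    cuts = [0]
    for i in range(1, len(ys) - 1):
        if (ys[i] - ys[i - 1]) * (ys[i + 1] - ys[i]) < 0:
            cuts.append(i)
    cuts.append(len(ys) - 1)
    return [fit_line(xs[a:b + 1], ys[a:b + 1]) for a, b in zip(cuts, cuts[1:])]
```

Each segment's slope is the easily interpretable sensitivity the abstract refers to; the full technique does this partitioning in many dimensions and couples it to an interactive visualization.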

  20. Can Artificial Neural Networks be Applied in Seismic Prediction? Preliminary Analysis Applying Radial Topology. Case: Mexico

    CERN Document Server

    Mota-Hernandez, Cinthya; Alvarado-Corona, Rafael

    2014-01-01

Tectonic earthquakes of high magnitude can cause considerable losses in terms of human lives, economy and infrastructure, among others. According to an evaluation published by the U.S. Geological Survey, 30 earthquakes have greatly impacted Mexico from the end of the XIX century to the present. Based upon data from the National Seismological Service, between January 1, 2006 and May 1, 2013 there occurred 5,826 earthquakes with a magnitude greater than 4.0 on the Richter scale (25.54% of all earthquakes registered on the national territory), the Pacific Plate and the Cocos Plate being the most important sources. This document describes the development of an Artificial Neural Network (ANN) based on a radial topology which seeks to generate predictions, with an error margin lower than 20%, of the probability of a future earthquake. One of the main questions is: can artificial neural networks be applied in seismic forecast...
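
    A radial-topology network of the kind described is essentially a radial-basis-function (RBF) network: Gaussian hidden units around fixed centers feeding a linear output layer. A minimal sketch on synthetic one-dimensional data follows; this is not the authors' seismic model, and all data, centers, and widths are invented for illustration.

```python
import numpy as np

# Toy radial-basis-function (RBF) network: Gaussian hidden units with
# fixed centers, linear output layer solved by least squares.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])

centers = np.linspace(-3, 3, 10)   # hidden-unit centers
width = 0.8                        # shared Gaussian width

def hidden(X):
    # Design matrix of Gaussian activations plus a bias column.
    Phi = np.exp(-((X - centers) ** 2) / (2 * width ** 2))
    return np.hstack([Phi, np.ones((X.shape[0], 1))])

# Solve the linear output weights in closed form.
W, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)

pred = hidden(X) @ W
rmse = np.sqrt(np.mean((pred - y) ** 2))   # in-sample fit quality
print(round(rmse, 4))
```

    Because only the output layer is learned, training reduces to one least-squares solve, which is one reason radial topologies are attractive for quick prediction experiments.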

  1. New trends in applied harmonic analysis sparse representations, compressed sensing, and multifractal analysis

    CERN Document Server

    Cabrelli, Carlos; Jaffard, Stephane; Molter, Ursula

    2016-01-01

This volume is a selection of written notes corresponding to courses taught at the CIMPA School: "New Trends in Applied Harmonic Analysis: Sparse Representations, Compressed Sensing and Multifractal Analysis". New interactions between harmonic analysis and signal and image processing have seen striking development in the last 10 years, and several technological deadlocks have been solved through the resolution of deep theoretical problems in harmonic analysis. New Trends in Applied Harmonic Analysis focuses on two particularly active areas that are representative of such advances: multifractal analysis, and sparse representation and compressed sensing. The contributions are written by leaders in these areas, and cover both theoretical aspects and applications. This work should prove useful not only to PhD students and postdocs in mathematics and signal and image processing, but also to researchers working in related topics.

  2. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
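
    A variance-based global sensitivity analysis of this kind can be illustrated with the standard Saltelli sampling estimator of first-order Sobol indices. The sketch below uses the Ishigami benchmark function as a cheap stand-in for an expensive morphogenesis model; the estimator is textbook, but the function and sample size are chosen only for illustration.

```python
import numpy as np

# First-order Sobol indices via the Saltelli (2010) estimator,
# demonstrated on the Ishigami test function, standing in for an
# expensive 'black-box' morphogenesis model.
def model(X):
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3 ** 4 * np.sin(x1)

rng = np.random.default_rng(42)
n, d = 50_000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))   # two independent
B = rng.uniform(-np.pi, np.pi, size=(n, d))   # sample matrices

fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # replace column i with the B sample
    fABi = model(ABi)
    # Estimator of the first-order index S_i.
    S.append(np.mean(fB * (fABi - fA)) / var)

print([round(s, 2) for s in S])
```

    For the Ishigami function the analytic indices are roughly 0.31, 0.44, and 0, so the estimator correctly flags the second input as dominant and the third as influential only through interactions.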

  3. Sensitivity analysis of distributed parameter elements In high-speed circuit networks

    Institute of Scientific and Technical Information of China (English)

    Lei DOU; Zhiquan WANG

    2007-01-01

This paper presents an analysis method, based on MacCormack's technique, for the evaluation of the time domain sensitivity of distributed parameter elements in high-speed circuit networks. Sensitivities can be calculated from electrical and physical parameters of the distributed parameter elements. The proposed method is a direct numerical method of time-space discretization and does not require a complicated mathematical deductive process. Therefore, it is very convenient to program this method. It can be applied to sensitivity analysis of general transmission lines in linear or nonlinear circuit networks. The proposed method is second-order-accurate. A numerical experiment is presented to demonstrate its accuracy and efficiency.

  4. High order sensitivity analysis of complex, coupled systems

    Science.gov (United States)

    Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    The Sobieszczanski-Sobieski (1988) algorithm is extended to include second- and higher-order derivatives while retaining the obviation of finite-differencing of the system analysis. This is accomplished by means of a recursive application of the same implicit function theorem as in the original algorithm. In optimization, the computational cost of the higher-order derivatives is relative to the aggregate cost of analysis together with a repetition of the first-order sensitivity analysis as often as is required to produce the equivalent information by successive linearizations within move limits.

  5. Global analysis of sensitivity of bioretention cell design elements to hydrologic performance

    OpenAIRE

    Sun, Yan-Wei; Wei, Xiao-Mei; Christine A. POMEROY

    2011-01-01

Analysis of sensitivity of bioretention cell design elements to their hydrologic performances is meaningful in offering theoretical guidelines for proper design. Hydrologic performance of bioretention cells was evaluated using four metrics: the overflow ratio, groundwater recharge ratio, ponding time, and runoff coefficients. The storm water management model (SWMM) and the bioretention infiltration model RECARGA were applied to generate runoff and outflow time series for c...

  6. Spatial sensitivity analysis of snow cover data in a distributed rainfall-runoff model

    Science.gov (United States)

    Berezowski, T.; Nossent, J.; Chormański, J.; Batelaan, O.

    2015-04-01

    As the availability of spatially distributed data sets for distributed rainfall-runoff modelling is strongly increasing, more attention should be paid to the influence of the quality of the data on the calibration. While a lot of progress has been made on using distributed data in simulations of hydrological models, sensitivity of spatial data with respect to model results is not well understood. In this paper we develop a spatial sensitivity analysis method for spatial input data (snow cover fraction - SCF) for a distributed rainfall-runoff model to investigate when the model is differently subjected to SCF uncertainty in different zones of the model. The analysis was focussed on the relation between the SCF sensitivity and the physical and spatial parameters and processes of a distributed rainfall-runoff model. The methodology is tested for the Biebrza River catchment, Poland, for which a distributed WetSpa model is set up to simulate 2 years of daily runoff. The sensitivity analysis uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm, which employs different response functions for each spatial parameter representing a 4 × 4 km snow zone. The results show that the spatial patterns of sensitivity can be easily interpreted by co-occurrence of different environmental factors such as geomorphology, soil texture, land use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for our spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model. The developed method can be easily applied to other models and other spatial data.
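
    The LH-OAT idea, Latin hypercube base points combined with one-factor-at-a-time perturbations, can be sketched in a few lines. The response function below is invented for illustration and does not represent the WetSpa model or its snow-zone parameters.

```python
import numpy as np

# Latin-Hypercube One-factor-At-a-Time (LH-OAT) sensitivity sketch:
# sample LH base points, perturb each parameter in turn, and average
# the magnitude of the partial effects on the response.
def response(p):
    # Stand-in for a rainfall-runoff response (invented for the example).
    return 3.0 * p[0] + 1.0 * p[1] ** 2 + 0.1 * p[2]

rng = np.random.default_rng(7)
n_points, d, frac = 20, 3, 0.05

# Latin hypercube on [0, 1]^d: one stratified sample per interval.
lhs = (rng.permuted(np.tile(np.arange(n_points), (d, 1)), axis=1).T
       + rng.random((n_points, d))) / n_points

effects = np.zeros(d)
for p in lhs:
    f0 = response(p)
    for i in range(d):
        q = p.copy()
        q[i] += frac                      # OAT perturbation
        # Partial effect of parameter i at this base point.
        effects[i] += abs((response(q) - f0) / frac)
effects /= n_points

ranking = np.argsort(effects)[::-1]       # most to least sensitive
print(effects.round(2), ranking)
```

    Averaging the one-at-a-time effects over Latin hypercube base points is what makes the method global rather than local, at a cost of only d extra model runs per base point.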

  7. Reliability and Sensitivity Analysis of Cast Iron Water Pipes for Agricultural Food Irrigation

    Directory of Open Access Journals (Sweden)

    Yanling Ni

    2014-07-01

Full Text Available This study aims to investigate the reliability and sensitivity of cast iron water pipes for agricultural food irrigation. The Monte Carlo simulation method is used for fracture assessment and reliability analysis of cast iron pipes for agricultural food irrigation. Fracture toughness is considered as a limit state function for corrosion-affected cast iron pipes. The influence of failure mode on the probability of pipe failure is then discussed. Sensitivity analysis is also carried out to show the effect of changing basic parameters on the reliability and lifetime of the pipe. The results show that the applied methodology can consider different random variables for estimating the lifetime of the pipe and can also provide scientific guidance for rehabilitation and maintenance plans for agricultural food irrigation. In addition, the results of the failure and reliability analysis in this study can be useful for designing more reliable new pipeline systems for agricultural food irrigation.
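
    The Monte Carlo reliability scheme with fracture toughness as the limit state can be sketched as follows; all distributions, units, and parameter values here are invented for illustration and are not taken from the study.

```python
import numpy as np

# Monte Carlo estimate of the failure probability of a corroded pipe,
# with fracture toughness as the limit state: g = K_Ic - K_applied.
# Failure occurs when g < 0. All numbers are invented for illustration.
rng = np.random.default_rng(3)
n = 200_000

K_Ic = rng.normal(12.0, 1.5, n)           # fracture toughness (MPa*m^0.5)
stress = rng.normal(40.0, 5.0, n)         # hoop stress (MPa)
a = rng.lognormal(np.log(0.005), 0.3, n)  # corrosion pit depth (m)

# Stress intensity for a shallow surface flaw (simplified form).
K_applied = 1.12 * stress * np.sqrt(np.pi * a)

g = K_Ic - K_applied                      # limit state function
pf = np.mean(g < 0.0)                     # failure probability estimate
print(pf)
```

    Rerunning the simulation with perturbed distribution parameters (e.g. a higher corrosion growth rate) is the sensitivity step: it shows which random variable moves the failure probability most.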

  8. Quantitative uncertainty and sensitivity analysis of a PWR control rod ejection accident

    Energy Technology Data Exchange (ETDEWEB)

    Pasichnyk, I.; Perin, Y.; Velkov, K. [Gesellschaft für Anlagen- und Reaktorsicherheit - GRS mbH, Boltzmannstrasse 14, 85748 Garching bei Muenchen (Germany)

    2013-07-01

    The paper describes the results of the quantitative Uncertainty and Sensitivity (U/S) Analysis of a Rod Ejection Accident (REA) which is simulated by the coupled system code ATHLET-QUABOX/CUBBOX applying the GRS tool for U/S analysis SUSA/XSUSA. For the present study, a UOX/MOX mixed core loading based on a generic PWR is modeled. A control rod ejection is calculated for two reactor states: Hot Zero Power (HZP) and 30% of nominal power. The worst cases for the rod ejection are determined by steady-state neutronic simulations taking into account the maximum reactivity insertion in the system and the power peaking factor. For the U/S analysis 378 uncertain parameters are identified and quantified (thermal-hydraulic initial and boundary conditions, input parameters and variations of the two-group cross sections). Results for uncertainty and sensitivity analysis are presented for safety important global and local parameters. (authors)

  9. Efficient sensitivity analysis and optimization of a helicopter rotor

    Science.gov (United States)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.

  10. Gene expression analysis identifies global gene dosage sensitivity in cancer

    DEFF Research Database (Denmark)

    Fehrmann, Rudolf S. N.; Karjalainen, Juha M.; Krajewska, Malgorzata;

    2015-01-01

    expression. We reanalyzed 77,840 expression profiles and observed a limited set of 'transcriptional components' that describe well-known biology, explain the vast majority of variation in gene expression and enable us to predict the biological function of genes. On correcting expression profiles...... for these components, we observed that the residual expression levels (in 'functional genomic mRNA' profiling) correlated strongly with copy number. DNA copy number correlated positively with expression levels for 99% of all abundantly expressed human genes, indicating global gene dosage sensitivity. By applying...

  11. Sensitivity Analysis of MEMS Flexure FET with Multiple Gates

    Directory of Open Access Journals (Sweden)

    K.Spandana

    2016-02-01

Full Text Available This paper deals with the design and modelling of the Flexure FET, a MEMS flexure-gate field effect transistor; FETs are among the fundamental devices in electronics. We design the gate of the Flexure FET with different materials and different structures and compare all the structures. A pull-in voltage is applied to the gate; as the gate voltage changes, the corresponding displacement of the gate changes, which is reflected in the drain current and the sensitivity.
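
    The pull-in behaviour referred to above follows the classic parallel-plate instability: the suspended gate becomes unstable once it has deflected one third of the initial gap, at the pull-in voltage V_PI = sqrt(8 k g0^3 / (27 eps0 A)). A quick calculation with invented device parameters (not values from the paper):

```python
import math

# Classic parallel-plate pull-in voltage of a suspended (flexure) gate:
# V_PI = sqrt(8 k g0^3 / (27 eps0 A)). Instability occurs once the gate
# has deflected one third of the initial gap. Parameter values are
# invented for illustration.
eps0 = 8.854e-12      # vacuum permittivity (F/m)
k = 5.0               # effective spring constant of the flexure (N/m)
g0 = 1.0e-6           # initial gate gap (m)
A = 100e-12           # gate plate area, 10 um x 10 um (m^2)

V_pi = math.sqrt(8.0 * k * g0 ** 3 / (27.0 * eps0 * A))
print(round(V_pi, 2))
```

    The cubic dependence on the gap g0 is why the gate displacement is such a sensitive readout: small changes in geometry shift the pull-in voltage strongly.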

  12. Sensitivity analysis of influencing parameters in cavern stability

    Institute of Scientific and Technical Information of China (English)

    Abolfazl Abdollahipour; Reza Rahmannejad

    2012-01-01

In order to analyze the stability of underground rock structures, knowing the sensitivity of geomechanical parameters is important. To investigate the priority of these geomechanical properties in cavern stability, a sensitivity analysis has been performed on a single cavern in various rock mass qualities according to RMR using Phase 2. The stability of the cavern has been studied by investigating the side wall deformation. Results showed that the most sensitive properties are the coefficient of lateral stress and the modulus of deformation. The parameters of the Hoek-Brown criterion and σc show no sensitivity when the cavern is in a perfectly elastic state. But in an elasto-plastic state, the parameters of the Hoek-Brown criterion and σc affect the deformability; this effect becomes more remarkable with increasing plastic area. Other parameters have different sensitivities depending on rock mass quality (RMR). The results have been used to propose the best set of parameters for a study on prediction of sidewall displacement.

  13. An overview of the design and analysis of simulation experiments for sensitivity analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs

  14. Global sensitivity analysis approach for input selection and system identification purposes--a new framework for feedforward neural networks.

    Science.gov (United States)

    Fock, Eric

    2014-08-01

A new algorithm for the selection of the input variables of a neural network is proposed. This new method, applied after the training stage, ranks the inputs according to their importance in the variance of the model output. The use of a global sensitivity analysis technique, the extended Fourier amplitude sensitivity test, gives the total sensitivity index for each variable, which allows for the ranking and the removal of the less relevant inputs. Applied to some benchmarking problems in the field of feature selection, the proposed approach shows good agreement in keeping the relevant variables. This new method is a useful tool for removing superfluous inputs and for system identification.
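
    Total sensitivity indices for ranking model inputs can also be estimated by plain sampling (Jansen's formula) rather than the extended FAST used in the paper; the pruning idea, dropping inputs with a negligible total index, is the same. A sketch with an invented stand-in model whose third input is deliberately irrelevant:

```python
import numpy as np

# Ranking model inputs by the total sensitivity index S_Ti, estimated
# here with Jansen's sampling formula. The "trained model" is a
# stand-in function; its third input deliberately has no effect.
def model(X):
    return np.tanh(2.0 * X[:, 0]) + 0.5 * X[:, 1] + 0.0 * X[:, 2]

rng = np.random.default_rng(9)
n, d = 20_000, 3
A = rng.uniform(-1, 1, (n, d))
B = rng.uniform(-1, 1, (n, d))
fA = model(A)
var = np.var(fA)

ST = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    # Jansen estimator: S_Ti = E[(f(A) - f(AB_i))^2] / (2 Var f).
    ST.append(np.mean((fA - model(ABi)) ** 2) / (2.0 * var))

keep = [i for i in range(d) if ST[i] > 0.01]   # prune irrelevant inputs
print(np.round(ST, 3), keep)
```

    Because S_Ti captures interactions as well as main effects, an input can be pruned safely only when its total index, not just its first-order index, is near zero.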

  15. The Methods of Sensitivity Analysis and Their Usage for Analysis of Multicriteria Decision

    Directory of Open Access Journals (Sweden)

    Rūta Simanavičienė

    2011-08-01

Full Text Available In this paper we describe the application fields of sensitivity analysis methods. We review the application of these methods in multiple criteria decision making, when the initial data are numbers. We formulate the problem of which sensitivity analysis method is more effective for use in the decision-making process. Article in Lithuanian.

  16. Empirical modeling and data analysis for engineers and applied scientists

    CERN Document Server

    Pardo, Scott A

    2016-01-01

    This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...

  17. Applying thematic analysis theory to practice: a researcher's experience.

    Science.gov (United States)

    Tuckett, Anthony G

    2005-01-01

    This article describes an experience of thematic analysis. In order to answer the question 'What does analysis look like in practice?' it describes in brief how the methodology of grounded theory, the epistemology of social constructionism, and the theoretical stance of symbolic interactionism inform analysis. Additionally, analysis is examined by evidencing the systematic processes--here termed organising, coding, writing, theorising, and reading--that led the researcher to develop a final thematic schema.

  18. Global analysis of sensitivity of bioretention cell design elements to hydrologic performance

    Directory of Open Access Journals (Sweden)

    Yan-wei SUN

    2011-09-01

Full Text Available Analysis of sensitivity of bioretention cell design elements to their hydrologic performances is meaningful in offering theoretical guidelines for proper design. Hydrologic performance of bioretention cells was evaluated using four metrics: the overflow ratio, groundwater recharge ratio, ponding time, and runoff coefficients. The storm water management model (SWMM) and the bioretention infiltration model RECARGA were applied to generate runoff and outflow time series for calculation of hydrologic performance metrics. Using a bioretention cell built in a parking lot as an example, the Morris method was used to conduct global sensitivity analysis for two groups of bioretention samples, one without an underdrain and the other with an underdrain. Results show that the surface area is the most sensitive element for most of the hydrologic metrics, while the gravel depth is the least sensitive element whether bioretention cells are installed with an underdrain or not. The saturated infiltration rate of the planting soil and the saturated infiltration rate of the native soil are the other two most sensitive elements for bioretention cells without an underdrain, while the saturated infiltration rate of the native soil and the underdrain size are the two most sensitive design elements for bioretention cells with an underdrain.
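
    The Morris screening method used here can be sketched with a few lines of NumPy: build random one-at-a-time trajectories, collect elementary effects per input, and rank inputs by the mean absolute effect mu*. The response function below is invented for illustration and is not the SWMM/RECARGA model.

```python
import numpy as np

# Morris elementary-effects screening, as used to rank bioretention
# design elements. The response function is an invented stand-in.
def response(p):
    area, soil_k, native_k, gravel = p
    # Overflow-ratio stand-in: area and soil rates matter, gravel barely.
    return 1.0 / (1.0 + 4.0 * area + 2.0 * soil_k * native_k) + 0.01 * gravel

rng = np.random.default_rng(11)
r, d, delta = 40, 4, 0.2                  # trajectories, inputs, step

EE = [[] for _ in range(d)]
for _ in range(r):
    p = rng.uniform(0, 1 - delta, d)      # trajectory start point
    f = response(p)
    for i in rng.permutation(d):          # one-at-a-time steps
        p[i] += delta
        f_new = response(p)
        EE[i].append((f_new - f) / delta) # elementary effect of input i
        f = f_new

mu_star = np.array([np.mean(np.abs(e)) for e in EE])  # importance
sigma = np.array([np.std(e) for e in EE])  # nonlinearity / interactions
print(mu_star.round(3), sigma.round(3))
```

    mu* ranks overall influence while sigma flags inputs whose effect depends on the rest of the design, which is exactly the distinction the underdrain/no-underdrain comparison exploits.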

  19. Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis

    DEFF Research Database (Denmark)

    Jensen, Rasmus Lund; Maagaard, Steffen; Østergård, Torben

    2017-01-01

Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which...... in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers focus their attention on the most important design parameters when exploring...... a multivariate design space. As a case study, we consider building performance simulations of a 15.000 m² educational centre with respect to energy demand, thermal comfort, and daylight....

  20. An analytic method for sensitivity analysis of complex systems

    CERN Document Server

    Zhu, Yueying; Li, Wei; Cai, Xu

    2016-01-01

    Sensitivity analysis is concerned with understanding how the model output depends on uncertainties (variances) in inputs and then identifies which inputs are important in contributing to the prediction imprecision. Uncertainty determination in output is the most crucial step in sensitivity analysis. In the present paper, an analytic expression, which can exactly evaluate the uncertainty in output as a function of the output's derivatives and inputs' central moments, is firstly deduced for general multivariate models with given relationship between output and inputs in terms of Taylor series expansion. A $\\gamma$-order relative uncertainty for output, denoted by $\\mathrm{R^{\\gamma}_v}$, is introduced to quantify the contributions of input uncertainty of different orders. On this basis, it is shown that the widely used approximation considering the first order contribution from the variance of input variable can satisfactorily express the output uncertainty only when the input variance is very small or the inpu...
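
    The widely used first-order approximation mentioned in the abstract is the delta method: Var[y] ≈ Σ_i (∂f/∂x_i)² Var[x_i], valid while input variances are small. A quick check against Monte Carlo on an invented two-input model:

```python
import numpy as np

# First-order Taylor (delta-method) uncertainty propagation,
# Var[y] ~ sum_i (df/dx_i)^2 Var[x_i], checked against Monte Carlo.
# The two-input model and its uncertainties are invented.
def f(x1, x2):
    return x1 ** 2 + np.exp(0.5 * x2)

mu = np.array([1.0, 0.0])
sd = np.array([0.02, 0.03])          # small input uncertainties

g = np.array([2.0 * mu[0], 0.5 * np.exp(0.5 * mu[1])])  # gradients at mu
var_taylor = float(np.sum(g ** 2 * sd ** 2))

rng = np.random.default_rng(5)
x = rng.normal(mu, sd, size=(500_000, 2))
var_mc = float(np.var(f(x[:, 0], x[:, 1])))

print(var_taylor, round(var_mc, 6))
```

    Inflating the input standard deviations makes the two estimates diverge, which is the abstract's point: higher-order moment terms become necessary once input variances are no longer small.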

  1. Objective analysis of the ARM IOP data: method and sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Cedarwall, R; Lin, J L; Xie, S C; Yio, J J; Zhang, M H

    1999-04-01

Motivated by the need to obtain accurate objective analyses of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes and even more from different implementations of a particular scheme. The paper then describes a hybrid approach that combines the strengths of the regular grid method and the line-integral method, together with a variational constraining procedure for the analysis of field experimental data. In addition to the use of upper air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken in the Atmospheric Radiation Measurement Program (ARM) July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.

  2. An Analysis of the Economy Principle Applied in Cyber Language

    Institute of Scientific and Technical Information of China (English)

    肖钰敏

    2015-01-01

With the development of network technology, cyber language, a new social dialect, is widely used in our life. The author analyzes how the economy principle is applied in cyber language from three aspects: word-formation, syntax, and non-linguistic symbols. The author collects, summarizes, and analyzes the relevant language materials to prove the economy principle's real existence in chat rooms and the reason why the economy principle is applied widely in cyberspace.

  3. A Sensitivity Analysis on Component Reliability from Fatigue Life Computations

    Science.gov (United States)

    1992-02-01

AD-A247 430, MTL TR 92-5: A Sensitivity Analysis on Component Reliability from Fatigue Life Computations, by Donald M. Neal, William T. Matthews, Mark G. Vangel, and Trevor Rudalevige.

  4. Geometric Error Analysis in Applied Calculus Problem Solving

    Science.gov (United States)

    Usman, Ahmed Ibrahim

    2017-01-01

    The paper investigates geometric errors students made as they tried to use their basic geometric knowledge in the solution of the Applied Calculus Optimization Problem (ACOP). Inaccuracies related to the drawing of geometric diagrams (visualization skills) and those associated with the application of basic differentiation concepts into ACOP…

  5. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of interest.

  6. Applied Missing Data Analysis. Methodology in the Social Sciences Series

    Science.gov (United States)

    Enders, Craig K.

    2010-01-01

    Walking readers step by step through complex concepts, this book translates missing data techniques into something that applied researchers and graduate students can understand and utilize in their own research. Enders explains the rationale and procedural details for maximum likelihood estimation, Bayesian estimation, multiple imputation, and…

  8. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of

  9. An applied general equilibrium model for Dutch agribusiness policy analysis.

    NARCIS (Netherlands)

    Peerlings, J.H.M.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of interest.The model is fairly

  10. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy

    Science.gov (United States)

    Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker

    2015-01-01

    The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of missing data being at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random data (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations to the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's version of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy. PMID:26283989

  11. Application of a sensitivity analysis technique to high-order digital flight control systems

    Science.gov (United States)

    Paduano, James D.; Downing, David R.

    1987-01-01

    A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. The SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
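
    The core quantities, the minimum singular value of the return difference matrix I + L(jω) as a relative stability measure and its gradient with respect to a controller parameter, can be sketched numerically. The 2x2 loop transfer matrix below is an invented toy two-loop system, not the X-29 control laws.

```python
import numpy as np

# Stability margin for a multiloop system: the minimum singular value
# of the return difference matrix I + L(jw), plus its central-
# difference gradient with respect to a controller gain.
def min_sv(gain, w=1.0):
    s = 1j * w
    # Toy 2x2 loop transfer matrix L(jw); the gain enters loop 1.
    L = np.array([[gain / (s + 1.0), 0.1 / (s + 2.0)],
                  [0.2 / (s + 1.0), 2.0 / (s + 0.5)]])
    return np.linalg.svd(np.eye(2) + L, compute_uv=False)[-1]

g0, h = 1.5, 1e-4
margin = min_sv(g0)                                   # stability measure
grad = (min_sv(g0 + h) - min_sv(g0 - h)) / (2.0 * h)  # sensitivity to gain
print(round(margin, 3), round(grad, 3))
```

    In the paper the gradients are obtained analytically from singular-value gradient equations rather than by differencing, which matters for high-order systems where many parameters must be scanned.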

  12. On the variational data assimilation problem solving and sensitivity analysis

    Science.gov (United States)

    Arcucci, Rossella; D'Amore, Luisa; Pistoia, Jenny; Toumi, Ralf; Murli, Almerico

    2017-04-01

    We consider the Variational Data Assimilation (VarDA) problem in an operational framework, namely, as it arises when employed for the analysis of temperature and salinity variations in data collected in closed and semi-closed seas. We present a computing approach to solve the main computational kernel at the heart of the VarDA problem, which outperforms the technique currently employed by operational oceanographic software. The new approach is obtained by means of Tikhonov regularization. We provide a sensitivity analysis of this approach and also study its performance in terms of the accuracy gain in the computed solution. We validate the approach on two realistic oceanographic data sets.
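Tikhonov regularization replaces an ill-conditioned least-squares kernel with a nearby well-posed one: minimize ||Ax - b||^2 + lambda^2 ||x||^2. A minimal sketch, with an invented ill-conditioned matrix standing in for the VarDA kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Ill-conditioned matrix (hypothetical stand-in for the assimilation kernel).
U, _, Vt = np.linalg.svd(rng.normal(size=(n, n)))
s = np.logspace(0, -8, n)                   # rapidly decaying singular values
A = U @ np.diag(s) @ Vt
x_true = rng.normal(size=n)
b = A @ x_true + 1e-6 * rng.normal(size=n)  # small observation noise

def tikhonov(A, b, lam):
    """Solve the regularized normal equations (A^T A + lam^2 I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

x_naive = np.linalg.solve(A, b)             # noise amplified by tiny singular values
x_reg = tikhonov(A, b, lam=1e-4)
print("naive error:      ", np.linalg.norm(x_naive - x_true))
print("regularized error:", np.linalg.norm(x_reg - x_true))
```

The regularization parameter lambda trades noise amplification against bias; in an operational setting it would be chosen by an L-curve or discrepancy-principle criterion.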

  13. Computational aspects of sensitivity calculations in transient structural analysis

    Science.gov (United States)

    Greene, William H.; Haftka, Raphael T.

    1988-01-01

    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.
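The trade-off between the forward and central difference operators can be seen on a toy response function with a known analytic sensitivity; the function below is illustrative, not the five-span-beam model of the paper.

```python
import numpy as np

# Toy "response" with an analytic parameter sensitivity, used to compare
# forward- and central-difference operators against the exact derivative.
def peak_response(k, m=1.0, f0=1.0):
    """Illustrative response measure as a function of a stiffness k."""
    return f0 / k + np.sqrt(k / m)

def exact_sensitivity(k, m=1.0, f0=1.0):
    return -f0 / k**2 + 0.5 / np.sqrt(k * m)

k = 2.0
for h in [1e-1, 1e-2, 1e-3]:
    fwd = (peak_response(k + h) - peak_response(k)) / h              # O(h) error
    cen = (peak_response(k + h) - peak_response(k - h)) / (2 * h)    # O(h^2) error
    err_f = abs(fwd - exact_sensitivity(k))
    err_c = abs(cen - exact_sensitivity(k))
    print(f"h={h:g}: forward error {err_f:.2e}, central error {err_c:.2e}")
```

As the paper notes, in practice the step size cannot be shrunk indefinitely: condition (round-off) errors eventually dominate the truncation errors shown here.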

  14. Adjoint based sensitivity analysis of a reacting jet in crossflow

    Science.gov (United States)

    Sashittal, Palash; Sayadi, Taraneh; Schmid, Peter

    2016-11-01

    With current advances in computational resources, high-fidelity simulations of reactive flows are increasingly being used as predictive tools in various industrial applications. In order to capture the combustion process accurately, detailed or reduced chemical mechanisms are employed, which in turn rely on various model parameters. Therefore, it is of great interest to quantify the sensitivities of the predictions with respect to the introduced models. Due to the high dimensionality of the parameter space, methods such as finite differences, which rely on multiple forward simulations, prove to be very costly, and adjoint-based techniques are a suitable alternative. The complex nature of the governing equations, however, renders finding the adjoint equations efficiently a challenging task. In this study, we employ the modular approach of Fosas de Pando et al. (2012) to build a discrete adjoint framework applied to a reacting jet in crossflow. The developed framework is then used to extract the sensitivity of the integrated heat release with respect to the existing combustion parameters. Analyzing the sensitivities in the three-dimensional domain provides insight into the specific regions of the flow that are most susceptible to the choice of model.

  15. Superconducting Accelerating Cavity Pressure Sensitivity Analysis and Stiffening

    Energy Technology Data Exchange (ETDEWEB)

    Rodnizki, J [Soreq NRC, Yavne, Israel; Ben Aliz, Y [Soreq NRC, Yavne, Israel; Grin, A [Soreq NRC, Yavne, Israel; Horvitz, Z [Soreq NRC, Yavne, Israel; Perry, A [Soreq NRC, Yavne, Israel; Weissman, L [Soreq NRC, Yavne, Israel; Davis, G Kirk [JLAB; Delayen, Jean R. [Old Dominion Universtiy

    2014-12-01

    The Soreq Applied Research Accelerator Facility (SARAF) design is based on a 40 MeV, 5 mA light-ion superconducting RF linac. Phase I of SARAF delivers up to 2 mA CW proton beams in an energy range of 1.5-4.0 MeV. The maximum beam power that we have reached is 5.7 kW. Today, the main factor limiting higher ion energy and beam power is the sensitivity of the half-wave resonators (HWR) to liquid helium coolant pressure fluctuations. The HWR sensitivity to helium pressure is about 60 Hz/mbar. The cavities were designed, a decade ago, to be mechanically soft in order to enable tuning of their novel shape. However, the cavities turned out to be too soft. In this work we found that increasing the rigidity of the cavities in the vicinity of the external drift tubes may reduce the cavity sensitivity by a factor of three. A preliminary design to increase the cavity rigidity is presented.

  16. Probability and sensitivity analysis of machine foundation and soil interaction

    Directory of Open Access Journals (Sweden)

    Králik J., jr.

    2009-06-01

    Full Text Available This paper deals with sensitivity and probabilistic analysis of the reliability of a machine foundation, depending on the variability of the soil stiffness, the structure geometry and the compressor operation. The requirements on the design of foundations under rotating machines have increased with the development of calculation methods and computer tools. During the structural design process, an engineer has to consider problems of soil-foundation and foundation-machine interaction from the point of view of safety, reliability and durability of the structure. The advantages and disadvantages of deterministic and probabilistic analyses of machine foundation resistance are discussed. The sensitivity of the machine foundation to uncertainties in the soil properties due to the long-term rotating movement of the machine is not negligible for design engineers. The effectiveness of the probabilistic design methodology is presented on the example of a compressor foundation and a SIEMENS AG turbine. The Latin Hypercube Sampling (LHS) simulation method was used in the program ANSYS for the analysis of the compressor foundation reliability. Two hundred simulations for five load cases were calculated in real time on a PC. The probabilistic analysis gives more comprehensive information about the soil-foundation-machine interaction than the deterministic analysis.
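Latin Hypercube Sampling, used above for the reliability simulations, draws exactly one sample from each equal-probability stratum of every input variable and then pairs the strata randomly across dimensions. A minimal sketch (the parameter ranges are invented, not the paper's):

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Latin Hypercube Sampling: one sample per equal-probability stratum
    in each dimension, with strata randomly paired across dimensions."""
    d = len(bounds)
    # One stratified uniform per interval [i/n, (i+1)/n) in each dimension
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, d))) / n_samples
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])   # decouple the dimensions
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(42)
# Illustrative ranges: soil stiffness [MPa], footing width [m], rotor speed [rpm]
bounds = [(50.0, 200.0), (2.0, 4.0), (3000.0, 12000.0)]
samples = latin_hypercube(200, bounds, rng)
print(samples.shape)
```

Compared with plain Monte Carlo, the stratification guarantees that even 200 runs cover the full range of each input, which is why LHS is the usual choice when each sample is an expensive FE solve.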

  17. System Analysis Applying to Talent Resource Development Research

    Institute of Scientific and Technical Information of China (English)

    WANG Peng-tao; ZHENG Gang

    2001-01-01

    In talent resource development research, the most important aspects of talent resource forecasting and optimization are the structure of the talent resource, the number required, and talent quality. The article establishes a factor reconstruction analysis forecast and a talent quality model based on system reconstruction analysis, which determines the most effective factor levels in a system, as presented by G. J. Klir and B. Jones, and performs a dynamic analysis of an example.

  18. Maternal Sensitivity in Parenting Preterm Children: A Meta-analysis.

    Science.gov (United States)

    Bilgin, Ayten; Wolke, Dieter

    2015-07-01

    Preterm birth is a significant stressor for parents and may adversely impact maternal parenting behavior. However, findings have been inconsistent. The objective of this meta-analysis was to determine whether mothers of preterm children behave differently (e.g., less responsively or sensitively) in their interactions with their children after discharge from the hospital than mothers of term children. Medline, PsycINFO, ERIC, PubMed, and Web of Science were searched from January 1980 through May 2014 with the following keywords: "premature", "preterm", "low birth weight" in conjunction with "maternal behavio*r", "mother-infant interaction", "maternal sensitivity", and "parenting". Both longitudinal and cross-sectional studies that used an observational measure of maternal parenting behavior were eligible. Study results relating to parenting behaviors defined as sensitivity, facilitation, and responsivity were extracted, and mean estimates were combined with random-effects meta-analysis. Thirty-four studies were included in the meta-analysis. Mothers of preterm and full-term children did not differ significantly from each other in terms of their behavior toward their children (Hedges' g = -0.07; 95% confidence interval: -0.22 to 0.08; z = -0.94; P = .35). The heterogeneity between studies was significant and high (Q = 156.42; I² = 78.9%, P = .001) and not explained by degree of prematurity, publication date, geographical area, infant age, or type of maternal behavior. Mothers of preterm children were not found to be less sensitive or responsive toward their children than mothers of full-term children. Copyright © 2015 by the American Academy of Pediatrics.
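The random-effects pooling described above can be sketched with the standard DerSimonian-Laird estimator; the study effect sizes and variances below are synthetic, not the data from this meta-analysis.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling of study effect sizes (e.g., Hedges' g)."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)                # Cochran's heterogeneity statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)             # between-study variance
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0
    return pooled, se, tau2, i2

# Hypothetical study effects, for illustration only.
g = [-0.30, 0.10, -0.05, 0.25, -0.15]
v = [0.02, 0.03, 0.015, 0.04, 0.025]
pooled, se, tau2, i2 = dersimonian_laird(g, v)
print(f"pooled g = {pooled:.3f} +/- {1.96*se:.3f}, tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%")
```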

  19. Sensitivity analysis of fine sediment models using heterogeneous data

    Science.gov (United States)

    Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.

    2012-04-01

    Sediments play an important role in many aquatic systems. Their transportation and deposition have significant implications for morphology, navigability and water quality. Understanding the dynamics of sediment transportation in time and space is therefore important for designing interventions and making management decisions. This research is related to the fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through constructions, fishing, navigation, sand mining, etc. These activities affect the natural flow of sediments and sometimes lead to environmental concerns or affect siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. Accuracy of numerical models depends upon the estimation of model parameters through calibration. Studying the model uncertainty related to these parameters is important for improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations, and for determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of the North Sea covering the Dutch coast, built using the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology in this research was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the Shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. By adopting different values of these parameters the numerical model was run and a comparison between the

  20. Analysis of OFDM Applied to Powerline High Speed Digital Communication

    Institute of Scientific and Technical Information of China (English)

    ZHUANG Jian; YANG Gong-xu

    2003-01-01

    The low-voltage powerline is becoming a powerful solution for home networks, building automation, and internet access as a result of its wide distribution, easy access and little maintenance. The characteristics of the powerline channel are very complicated because it is an open network. This article analyses the characteristics of the powerline channel, introduces the basics of OFDM (Orthogonal Frequency Division Multiplexing), and studies OFDM as applied to powerline high-speed digital communication.

  1. Sensitivity Analysis for DHRS Heat Exchanger Performance Tests of PGSFR

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jonggan; Eoh, Jaehyuk; Kim, Dehee; Lee, Taeho; Jeong, Jiyoung [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    The STELLA-1 facility has been constructed, and separate effect tests of the heat exchangers for the DHRS will be conducted. Two kinds of heat exchangers, the DHX (shell-and-tube sodium-to-sodium heat exchanger) and the AHX (helical-tube sodium-to-air heat exchanger), will be tested for design code verification and validation (V&V). The main test points are a design point and a plant normal operation point for each heat exchanger. Additionally, some plant transient conditions are taken into account in establishing the test condition set. To choose the plant transient test conditions, a sensitivity analysis was conducted using the design codes for each heat exchanger. The sensitivity of the PGSFR DHRS heat exchanger tests (the DHX and AHX in the STELLA-1 facility) was analyzed through a parametric study using the design codes SHXSA and AHXSA at the design point and the plant normal operation point. The DHX heat transfer performance was sensitive to changes in the shell-side inlet temperature, and the AHX heat transfer performance was sensitive to changes in the tube-side inlet temperature. The results of this work will contribute to improving the test matrix for the separate effect test of each heat exchanger.

  2. Audio spectrum analysis of umbilical artery Doppler ultrasound signals applied to a clinical material.

    Science.gov (United States)

    Thuring, Ann; Brännström, K Jonas; Jansson, Tomas; Maršál, Karel

    2014-12-01

    Analysis of umbilical artery flow velocity waveforms characterized by the pulsatility index (PI) is used to evaluate fetoplacental circulation in high-risk pregnancies. However, an experienced sonographer may be able to further differentiate between various timbres of the Doppler audio signals. Recently, we have developed a method for objective audio signal characterization; the method has been tested in an animal model. In the present pilot study, the method was applied to human pregnancies for the first time. Doppler umbilical artery velocimetry was performed in 13 preterm fetuses before and after two doses of 12 mg betamethasone. The auditory measure, defined by the frequency band where the spectral energy had dropped 15 dB from its maximum level (MAXpeak-15 dB), increased two days after betamethasone administration (p = 0.001), in parallel with a less pronounced decrease in PI (p = 0.04). The new auditory parameter MAXpeak-15 dB reflected the changes more sensitively than the PI did.
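The auditory measure can be sketched as a spectral computation: find the highest frequency at which the power spectrum remains within 15 dB of its peak. The signal below is synthetic, and the windowing and FFT details are assumptions, not the paper's processing chain.

```python
import numpy as np

def max_peak_minus_15db(signal, fs, drop_db=15.0):
    """Highest frequency at which the power spectrum stays within
    `drop_db` dB of its maximum (a sketch of the auditory measure)."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal)))) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spec_db = 10 * np.log10(spec / spec.max() + 1e-300)   # dB relative to peak
    above = np.where(spec_db >= -drop_db)[0]
    return freqs[above[-1]]

# Synthetic Doppler-like signal: strong 300 Hz tone plus a weak 900 Hz tone.
fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 300 * t) + 0.01 * np.sin(2 * np.pi * 900 * t)
print(max_peak_minus_15db(sig, fs))   # near 300 Hz: the 900 Hz tone is ~40 dB down
```

Raising the weak component's amplitude above the -15 dB threshold would pull the measure up toward 900 Hz, which is the kind of timbre change the parameter is meant to capture.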

  3. Sensitivity analysis of longitudinal cracking on asphalt pavement using MEPDG in permafrost region

    Directory of Open Access Journals (Sweden)

    Chen Zhang

    2015-02-01

    Full Text Available Longitudinal cracking is one of the most important distresses of asphalt pavement in permafrost regions. Sensitivity analysis of the design parameters for asphalt pavement can be used to study the influence of each parameter on longitudinal cracking, which can help optimize the design of the pavement structure. In this study, 20 test sections of the Qinghai–Tibet Highway were selected for a sensitivity analysis of longitudinal cracking with respect to material parameters, based on the Mechanistic-Empirical Pavement Design Guide (MEPDG) and a single-factor sensitivity analysis method. Computer-aided engineering (CAE) simulation techniques, such as the Latin hypercube sampling (LHS) technique and multiple regression analysis, were used as auxiliary means. Finally, the sensitivity spectrum of material parameters for longitudinal cracking was established. The results show that multiple regression analysis can be used to determine the most influential factors efficiently and to support qualitative analysis when applying the MEPDG software to sensitivity analysis of longitudinal cracking in permafrost regions. The effect weights of the three parameters on longitudinal cracking, in descending order, are air void content, effective binder content and PG grade. The influence of air void content on the top layer is bigger than that on the middle and bottom layers. The influence of effective asphalt content on the top layer is bigger than that on the middle and bottom layers, and its influence on the bottom layer is slightly bigger than on the middle layer. The accumulated longitudinal cracking of the middle and bottom layers over the design life begins to increase when the design temperature of the PG grade increases.

  4. Analysis of Frequency Characteristics and Sensitivity of Compliant Mechanisms

    Institute of Scientific and Technical Information of China (English)

    LIU Shanzeng; DAI Jiansheng; LI Aimin; SUN Zhaopeng; FENG Shizhe; CAO Guohua

    2016-01-01

    Based on a modified pseudo-rigid-body model, the frequency characteristics and sensitivity of large-deformation compliant mechanisms are studied. Firstly, the pseudo-rigid-body model under static and kinetic conditions is modified to make it more suitable for dynamic analysis of compliant mechanisms. Subsequently, based on the modified pseudo-rigid-body model, the dynamic equations of an ordinary compliant four-bar mechanism are established using analytical mechanics. Finally, in combination with the finite element analysis software ANSYS, the frequency characteristics and sensitivity of compliant mechanisms are analyzed, taking a compliant parallel-guiding mechanism and a compliant bistable mechanism as examples. The simulation results show that the dynamic characteristics of compliant mechanisms are relatively sensitive to the structure size, section parameters, and material characteristic parameters of the mechanisms. The results are of theoretical significance and application value for the structural optimization of compliant mechanisms, the improvement of their dynamic properties and the expansion of their application range.

  5. An analytic method for sensitivity analysis of complex systems

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping Alexandre; Li, Wei; Cai, Xu

    2017-03-01

    Sensitivity analysis is concerned with understanding how the model output depends on uncertainties (variances) in inputs and with identifying which inputs contribute most to the prediction imprecision. Uncertainty determination in the output is the most crucial step in sensitivity analysis. In the present paper, an analytic expression, which can exactly evaluate the uncertainty in the output as a function of the output's derivatives and the inputs' central moments, is first deduced for general multivariate models with a given relationship between output and inputs, in terms of a Taylor series expansion. A γ-order relative uncertainty for the output, denoted by Rvγ, is introduced to quantify the contributions of input uncertainty of different orders. On this basis, it is shown that the widely used approximation considering only the first-order contribution from the variance of each input variable can satisfactorily express the output uncertainty only when the input variance is very small or the input-output function is almost linear. Two applications of the analytic formula are presented, for the power grid and for economic systems, where the sensitivities of both an actual power output model and the Economic Order Quantity model are analyzed. The importance of each input variable in response to the model output is quantified by the analytic formula.
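The first-order approximation discussed above propagates input variances through the partial derivatives evaluated at the mean: Var[y] ≈ Σ (∂f/∂x_i)² Var[x_i]. A small sketch compares it with Monte Carlo sampling on an invented nonlinear model (not the paper's power-grid or EOQ models):

```python
import numpy as np

# Toy nonlinear model output with two uncertain inputs (illustrative only).
def f(x1, x2):
    return x1 ** 2 + np.sin(x2) + 0.5 * x1 * x2

mu = np.array([1.0, 0.5])
sd = np.array([0.05, 0.05])           # small input uncertainties

# Analytic first-order propagation using partial derivatives at the mean
df_dx1 = 2 * mu[0] + 0.5 * mu[1]
df_dx2 = np.cos(mu[1]) + 0.5 * mu[0]
var_first_order = (df_dx1 * sd[0]) ** 2 + (df_dx2 * sd[1]) ** 2

# Monte Carlo reference
rng = np.random.default_rng(7)
x1 = rng.normal(mu[0], sd[0], 200_000)
x2 = rng.normal(mu[1], sd[1], 200_000)
var_mc = f(x1, x2).var()

print(f"first-order: {var_first_order:.6f}, Monte Carlo: {var_mc:.6f}")
```

With small input variances the two agree closely; widening `sd` makes the higher-order terms that the paper's γ-order expansion captures increasingly visible as a gap between the two estimates.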

  6. Analysis of frequency characteristics and sensitivity of compliant mechanisms

    Science.gov (United States)

    Liu, Shanzeng; Dai, Jiansheng; Li, Aimin; Sun, Zhaopeng; Feng, Shizhe; Cao, Guohua

    2016-07-01

    Based on a modified pseudo-rigid-body model, the frequency characteristics and sensitivity of the large-deformation compliant mechanism are studied. Firstly, the pseudo-rigid-body model under the static and kinetic conditions is modified to enable the modified pseudo-rigid-body model to be more suitable for the dynamic analysis of the compliant mechanism. Subsequently, based on the modified pseudo-rigid-body model, the dynamic equations of the ordinary compliant four-bar mechanism are established using the analytical mechanics. Finally, in combination with the finite element analysis software ANSYS, the frequency characteristics and sensitivity of the compliant mechanism are analyzed by taking the compliant parallel-guiding mechanism and the compliant bistable mechanism as examples. From the simulation results, the dynamic characteristics of compliant mechanism are relatively sensitive to the structure size, section parameter, and characteristic parameter of material on mechanisms. The results could provide great theoretical significance and application values for the structural optimization of compliant mechanisms, the improvement of their dynamic properties and the expansion of their application range.

  7. Adding value in oil and gas by applying decision analysis methodologies: case history

    Energy Technology Data Exchange (ETDEWEB)

    Marot, Nicolas [Petro Andina Resources Inc., Alberta (Canada); Francese, Gaston [Tandem Decision Solutions, Buenos Aires (Argentina)

    2008-07-01

    Petro Andina Resources Ltd., together with Tandem Decision Solutions, developed a strategic long-range plan applying decision analysis methodology. The objective was to build a robust and fully integrated strategic plan that accomplishes company growth goals and sets the strategic directions for the long range. The stochastic methodology and the Integrated Decision Management (IDM™) staged approach allowed the company to visualize the associated value and risk of the different strategies while achieving organizational alignment, clarity of action and confidence in the path forward. A joint decision team of PAR representatives and Tandem consultants was established to carry out this four-month project. Discovery and framing sessions allowed the team to disrupt the status quo, discuss near- and far-reaching ideas and gather the building blocks from which creative strategic alternatives were developed. A comprehensive stochastic valuation model was developed to assess the potential value of each strategy, applying simulation tools, sensitivity analysis tools and contingency planning techniques. The final insights and results were used to populate the final strategic plan presented to the company board, providing confidence to the team and assuring that the work embodies the best available ideas, data and expertise, and that the proposed strategy was ready to be elaborated into an optimized course of action. (author)

  9. Mesoporous nitrogen-doped TiO2 sphere applied for quasi-solid-state dye-sensitized solar cell.

    Science.gov (United States)

    Xiang, Peng; Li, Xiong; Wang, Heng; Liu, Guanghui; Shu, Ting; Zhou, Ziming; Ku, Zhiliang; Rong, Yaoguang; Xu, Mi; Liu, Linfeng; Hu, Min; Yang, Ying; Chen, Wei; Liu, Tongfa; Zhang, Meili; Han, Hongwei

    2011-11-24

    A mesoscopic nitrogen-doped TiO2 sphere has been developed for a quasi-solid-state dye-sensitized solar cell (DSSC). Compared with the undoped TiO2 sphere, the quasi-solid-state DSSC based on the nitrogen-doped TiO2 sphere shows better photovoltaic performance. The photoelectrochemistry of electrodes based on nitrogen-doped and undoped TiO2 spheres was characterized with Mott-Schottky analysis, intensity-modulated photocurrent spectroscopy, and electrochemical impedance spectroscopy, which indicated that both the quasi-Fermi level and the charge transport of the photoelectrode were improved by doping with nitrogen. As a result, a photoelectric conversion efficiency of 6.01% was obtained for the quasi-solid-state DSSC.

  10. Sensitivity and Uncertainty Analysis of Regional Marine Ecosystem Services Value

    Institute of Scientific and Technical Information of China (English)

    SHI Honghua; ZHENG Wei; WANG Zongling; DING Dewen

    2009-01-01

    Marine ecosystem services are the benefits which people obtain from the marine ecosystem, including provisioning services, regulating services, cultural services and supporting services. The human species, while buffered against environmental changes by culture and technology, is fundamentally dependent on the flow of ecosystem services. Marine ecosystem services become increasingly valuable as terrestrial resources become scarce. The value of marine ecosystem services is the monetary flow of ecosystem services on specific temporal and spatial scales, which often changes due to variations in goods prices, yields and the status of marine exploitation. Sensitivity analysis studies the relationship between the value of marine ecosystem services and the main factors which affect it. An uncertainty analysis based on varying prices, yields and status of marine exploitation was carried out. Through the uncertainty analysis, a more credible value range, instead of a fixed value, of marine ecosystem services was obtained in this study. Moreover, sensitivity analysis of the marine ecosystem services value revealed the relative importance of different factors.

  11. Multi-criteria decision making: an example of sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Dragan S. Pamučar

    2017-05-01

    Full Text Available This study provides a model for evaluating the result consistency of multi-criteria decision-making (MDM) methods and selecting the optimal one. The model is based on the analysis of the results of MDM methods, that is, the analysis of changes in the rankings produced by MDM methods as a result of alterations in input parameters. In the recommended model, we examine the sensitivity of MDM methods to changes in criteria weights and the consistency of the methods' results under changes in the measurement scale and in the way the criteria are formulated. In the final phase of the model, we select the most suitable method for the observed problem and the optimal alternative. The model is tested on an example in which the optimal MDM method had to be selected in order to determine the location of a logistics center. During the selection process, the TOPSIS, COPRAS, VIKOR and ELECTRE methods were considered. The VIKOR method demonstrated the greatest ranking stability and was selected as the most suitable method for ranking the locations of the logistics center. The results of the demonstrated analysis indicate the sensitivity of standard MDM methods to the criteria considered in this work. Therefore, it is necessary to take into account the stability of the considered methods when selecting the optimal method.
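The weight-perturbation sensitivity check described above can be sketched with TOPSIS, one of the methods considered; the decision matrix, criteria, and weights below are hypothetical, not the study's location data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS closeness coefficients (higher = better alternative)."""
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
    v = m * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical location-selection data: 4 sites x 3 criteria
# (cost [minimize], capacity [maximize], accessibility [maximize]).
X = np.array([[250.0, 16.0, 7.0],
              [200.0, 12.0, 5.0],
              [300.0, 20.0, 9.0],
              [275.0, 14.0, 8.0]])
benefit = np.array([False, True, True])
w0 = np.array([0.4, 0.35, 0.25])

base_rank = np.argsort(-topsis(X, w0, benefit))
# Sensitivity: perturb each criterion weight by +/-10% and check rank stability
stable = True
for j in range(3):
    for eps in (-0.1, 0.1):
        w = w0.copy()
        w[j] *= 1 + eps
        w /= w.sum()                                   # re-normalize weights
        if not np.array_equal(np.argsort(-topsis(X, w, benefit)), base_rank):
            stable = False
print("ranking stable under +/-10% weight changes:", stable)
```

A ranking that survives such perturbations is the kind of stability the study uses to prefer one MDM method over another.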

  12. Optimal control and sensitivity analysis of an influenza model with treatment and vaccination.

    Science.gov (United States)

    Tchuenche, J M; Khamis, S A; Agusto, F B; Mpeshe, S C

    2011-03-01

    We formulate and analyze the dynamics of an influenza pandemic model with vaccination and treatment using two preventive scenarios: increase and decrease in vaccine uptake. Due to the seasonality of the influenza pandemic, the dynamics is studied in a finite time interval. We focus primarily on controlling the disease with minimal cost and side effects using control theory, applied via Pontryagin's maximum principle, and it is observed that full treatment effort should be given while increasing vaccination at the onset of the outbreak. Next, sensitivity analysis and simulations (using the fourth-order Runge-Kutta scheme) are carried out in order to determine the relative importance of different factors responsible for disease transmission and prevalence. The most sensitive parameter of the various reproduction numbers, apart from the death rate, is the inflow rate, while the proportion of new recruits and the vaccine efficacy are the most sensitive parameters for the endemic equilibrium point.
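Parameter sensitivity of a reproduction number is commonly quantified with the normalized forward sensitivity index, Υ_p = (∂R0/∂p)(p/R0). The sketch below applies it to a generic SIR-type R0 = βΛ/(μ(μ+γ)); this expression and the parameter values are illustrative, not the paper's model.

```python
import numpy as np

def r0(beta, Lam, mu, gamma):
    """Generic SIR-type basic reproduction number (illustrative form)."""
    return beta * Lam / (mu * (mu + gamma))

def sensitivity_index(name, params, h=1e-6):
    """Upsilon_p = (dR0/dp) * (p / R0), via central differences."""
    p = dict(params)
    base = r0(**p)
    up, dn = dict(p), dict(p)
    up[name] += h
    dn[name] -= h
    dr0 = (r0(**up) - r0(**dn)) / (2 * h)
    return dr0 * p[name] / base

# Illustrative values: transmission, inflow (recruitment), death, recovery rates.
params = {"beta": 0.4, "Lam": 10.0, "mu": 0.02, "gamma": 0.1}
for name in params:
    print(f"Upsilon_{name} = {sensitivity_index(name, params):+.3f}")
```

An index of +1 means a 10% increase in the parameter raises R0 by 10%; note that in this generic form the inflow rate Λ carries the same (maximal) index as β, consistent with the kind of ranking the abstract reports.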

  13. Application of nonlinear optimization method to sensitivity analysis of numerical model

    Institute of Scientific and Technical Information of China (English)

    XU Hui; MU Mu; LUO Dehai

    2004-01-01

    A nonlinear optimization method is applied to the sensitivity analysis of a numerical model. Theoretical analysis and numerical experiments indicate that this method can give not only a quantitative assessment of whether the numerical model is able to simulate the observations, but also the initial field that yields the optimal simulation. In particular, when the simulation results are apparently satisfactory, and sometimes both the model error and the initial error are considerably large, the nonlinear optimization method can, under some conditions, identify the error that plays a dominant role.

  14. Factorial kriging analysis applied to geological data from petroleum exploration

    Energy Technology Data Exchange (ETDEWEB)

    Jaquet, O.

    1989-10-01

    A regionalized variable, thickness of the reservoir layer, from a gas field is decomposed by factorial kriging analysis. Maps of the obtained components may be associated with depositional environments that are favorable for petroleum exploration.

  15. Applying Qualitative Hazard Analysis to Support Quantitative Safety Analysis for Proposed Reduced Wake Separation Conops

    Science.gov (United States)

    Shortle, John F.; Allocco, Michael

    2005-01-01

    This paper describes a scenario-driven hazard analysis process to identify, eliminate, and control safety-related risks. Within this process, we develop selective criteria to determine the applicability of applying engineering modeling to hypothesized hazard scenarios. This provides a basis for evaluating and prioritizing the scenarios as candidates for further quantitative analysis. We have applied this methodology to proposed concepts of operations for reduced wake separation for closely spaced parallel runways. For arrivals, the process identified 43 core hazard scenarios. Of these, we classified 12 as appropriate for further quantitative modeling, 24 as scenarios that should be mitigated through controls, recommendations, and/or procedures (that is, scenarios not appropriate for quantitative modeling), and 7 as having the lowest priority for further analysis.

  16. Applying Galois compliance for data analysis in information systems

    Directory of Open Access Journals (Sweden)

    Kozlov Sergey

    2016-03-01

    Full Text Available The article deals with data analysis in information systems. The author discusses the possibility of using Galois compliance to identify characteristics of the information system structure, and describes the specifics of applying Galois compliance to the analysis of information system content using invariants of graph theory. Aspects of introducing the mathematical apparatus of Galois compliance for the study of interrelations between elements of an adaptive training information system for individual testing are analyzed.

  17. Toward farm-based policy analysis: concepts applied in Haiti

    OpenAIRE

    Martinez, Juan Carlos; Sain, Gustavo; Yates, Michael

    1991-01-01

    Many policies - on the delivery of inputs or on marketing systems, credit, or extension - influence the potential utilization of new technologies. Through 'farm-based policy analysis' it is possible to use data generated in on-farm research (OFR) to identify policy constraints to the use of new technologies, and to effectively communicate that information to policy makers. This paper describes a tentative framework for farm-based policy analysis and suggests a sequence of five steps for the a...

  18. Applied network security monitoring collection, detection, and analysis

    CERN Document Server

    Sanders, Chris

    2013-01-01

    Applied Network Security Monitoring is the essential guide to becoming an NSM analyst from the ground up. This book takes a fundamental approach to NSM, complete with dozens of real-world examples that teach you the key concepts of NSM. Network security monitoring is based on the principle that prevention eventually fails. In the current threat landscape, no matter how much you try, motivated attackers will eventually find their way into your network. At that point, it is your ability to detect and respond to that intrusion that can be the difference between a small incident and a major di

  19. Signed directed social network analysis applied to group conflict

    DEFF Research Database (Denmark)

    Zheng, Quan; Skillicorn, David; Walther, Olivier

    2015-01-01

    Real-world social networks contain relationships of multiple different types, but this richness is often ignored in graph-theoretic modelling. We show how two recently developed spectral embedding techniques, for directed graphs (relationships are asymmetric) and for signed graphs (relationships are both positive and negative), can be combined. This combination is particularly appropriate for intelligence, terrorism, and law enforcement applications. We illustrate by applying the novel embedding technique to datasets describing conflict in North-West Africa, and show how unusual interactions can...

  20. Applied risk analysis to the future Brazilian electricity generation matrix

    Energy Technology Data Exchange (ETDEWEB)

    Maues, Jair; Fernandez, Eloi; Correa, Antonio

    2010-09-15

    This study compares energy conversion systems for the generation of electrical power, with an emphasis on the Brazilian energy matrix. The financial model applied in this comparison is based on the Portfolio Theory, developed by Harry Markowitz. The risk-return ratio related to the electrical generation mix predicted in the National Energy Plan - 2030, published in 2006 by the Brazilian Energy Research Office, is evaluated. The increase of non-traditional renewable energy in this expected electrical generating mix, specifically, residues of sugar cane plantations and wind energy, reduce not only the risk but also the average cost of the kilowatt-hour generated.

  1. Sensitivity analysis of GSI based mechanical characterization of rock mass

    CERN Document Server

    Ván, P

    2012-01-01

    Recently, rock mechanical and rock engineering designs and calculations are frequently based on the Geological Strength Index (GSI) method, because it is the only system that provides a complete set of mechanical properties for design purposes. Both the failure criteria and the deformation moduli of the rock mass can be calculated with GSI-based equations, which also involve the disturbance factor. The aim of this paper is a sensitivity analysis of the GSI- and disturbance-factor-dependent equations that characterize the mechanical properties of rock masses. A survey of the GSI system is not our purpose. The results show that the rock mass strength calculated by the Hoek-Brown failure criterion, and both the Hoek-Diederichs and modified Hoek-Diederichs deformation moduli, are highly sensitive to changes in both the GSI and the D factor; hence their exact determination is important for rock engineering design.
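The GSI-dependent equations named in the abstract have widely published standard forms (the Hoek-Brown 2002 constants mb, s, a and the Hoek-Diederichs deformation modulus). A minimal finite-difference sensitivity check, with illustrative values for the intact-rock parameters mi and Ei (not taken from the paper), might look like:

```python
import math

def hoek_brown(gsi, d, mi=10.0):
    """Standard Hoek-Brown (2002) rock-mass constants as functions of
    GSI and the disturbance factor D; mi is an illustrative value."""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

def hoek_diederichs(gsi, d, ei=50000.0):
    """Hoek-Diederichs rock-mass deformation modulus (MPa), from the
    intact modulus Ei (illustrative value)."""
    return ei * (0.02 + (1.0 - d / 2.0)
                 / (1.0 + math.exp((60.0 + 15.0 * d - gsi) / 11.0)))

def dfdgsi(f, gsi, d, h=0.5):
    """Central finite-difference sensitivity of f with respect to GSI."""
    return (f(gsi + h, d) - f(gsi - h, d)) / (2.0 * h)

# Sensitivity of the deformation modulus to GSI at mid-range conditions:
dE = dfdgsi(hoek_diederichs, 50.0, 0.0)
```

Sweeping `gsi` and `d` over their ranges and tabulating such derivatives is one way to reproduce the kind of sensitivity statement the paper makes.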

  2. Sensitivity Analysis of Hardwired Parameters in GALE Codes

    Energy Technology Data Exchange (ETDEWEB)

    Geelhood, Kenneth J.; Mitchell, Mark R.; Droppo, James G.

    2008-12-01

    The U.S. Nuclear Regulatory Commission asked Pacific Northwest National Laboratory to provide a data-gathering plan for updating the hardwired data tables and parameters of the Gaseous and Liquid Effluents (GALE) codes to reflect current nuclear reactor performance. This would enable the GALE codes to make more accurate predictions about the normal radioactive release source term applicable to currently operating reactors and to the cohort of reactors planned for construction in the next few years. A sensitivity analysis was conducted to define the importance of hardwired parameters in terms of each parameter’s effect on the emission rate of the nuclides that are most important in computing potential exposures. The results of this study were used to compile a list of parameters that should be updated based on the sensitivity of these parameters to outputs of interest.

  3. Stability and Sensitivity Analysis of Fuzzy Control Systems. Mechatronics Applications

    Directory of Open Access Journals (Sweden)

    Radu-Emil Precup

    2006-01-01

    Full Text Available The development of fuzzy control systems is usually performed by heuristicmeans, incorporating human skills, the drawback being in the lack of general-purposedevelopment methods. A major problem, which follows from this development, is theanalysis of the structural properties of the control system, such as stability, controllabilityand robustness. Here comes the first goal of the paper, to present a stability analysismethod dedicated to fuzzy control systems with mechatronics applications based on the useof Popov’s hyperstability theory. The second goal of this paper is to perform the sensitivityanalysis of fuzzy control systems with respect to the parametric variations of the controlledplant for a class of servo-systems used in mechatronics applications based on theconstruction of sensitivity models. The stability and sensitivity analysis methods provideuseful information to the development of fuzzy control systems. The case studies concerningfuzzy controlled servo-systems, accompanied by digital simulation results and real-timeexperimental results, validate the presented methods.

  4. Multi-wavelength sensitive holographic polymer dispersed liquid crystal grating applied within image splitter for autostereoscopic display

    Science.gov (United States)

    Zheng, Jihong; Wang, Kangni; Gao, Hui; Lu, Feiyue; Sun, Lijia; Zhuang, Songlin

    2016-09-01

    Multi-wavelength sensitive holographic polymer dispersed liquid crystal (H-PDLC) grating and its application within an image splitter for autostereoscopic display are reported in this paper. Two initiator systems are employed, consisting of the photoinitiator Methylene Blue with the coinitiator p-toluenesulfonic acid, and the photoinitiator Rose Bengal with the coinitiator N-phenylglycine. We demonstrate that Bragg gratings can be formed in this syrup polymerized under three lasers simultaneously: 632.8 nm from a He-Ne laser, 532 nm from a Verdi solid-state laser, and 441.6 nm from a He-Cd laser. The diffraction efficiencies of the three kinds of gratings with different exposure wavelengths are 57%, 75% and 33%, respectively. The threshold driving voltages of those gratings are 2.8, 3.05, and 2.85 V/μm, respectively. We also present results on the feasibility of the proposed H-PDLC grating applied to an image splitter without color dispersion for autostereoscopic display, according to the experimental splitting effect.

  5. Uncertainty and sensitivity analysis: Mathematical model of coupled heat and mass transfer for a contact baking process

    DEFF Research Database (Denmark)

    Feyissa, Aberham Hailu; Gernaey, Krist; Adler-Nissen, Jens

    2012-01-01

    Similar to other processes, the modelling of heat and mass transfer during food processing involves uncertainty in the values of input parameters (heat and mass transfer coefficients, evaporation rate parameters, thermo-physical properties, initial and boundary conditions) which leads to uncertainty in the model predictions. The aim of the current paper is to address this uncertainty challenge in the modelling of food production processes using a combination of uncertainty and sensitivity analysis, where the uncertainty analysis and global sensitivity analysis were applied to a heat and mass transfer model of a contact baking process. The Monte Carlo procedure was applied for propagating uncertainty in the input parameters to uncertainty in the model predictions. Monte Carlo simulations and the least squares method were used in the sensitivity analysis: for each model output, a linear...
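The Monte Carlo plus least-squares workflow can be sketched as follows. The `bake_temp` stand-in model, its coefficients, and the input ranges are invented for illustration and are not the paper's baking model; the standardized regression coefficients are computed from univariate slopes, which is valid here only because the inputs are sampled independently:

```python
import random
import statistics

random.seed(1)

def bake_temp(h, k):
    """Stand-in model: predicted crust temperature from a heat-transfer
    coefficient h and a thermal conductivity k (illustrative only)."""
    return 120.0 + 0.8 * h + 15.0 * k

# Monte Carlo propagation: sample inputs from their uncertainty ranges.
n = 2000
hs = [random.uniform(20.0, 40.0) for _ in range(n)]
ks = [random.uniform(0.3, 0.5) for _ in range(n)]
ys = [bake_temp(h, k) for h, k in zip(hs, ks)]

def src(xs, ys):
    """Standardized regression coefficient for one independently
    sampled input: least-squares slope * std(x) / std(y)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * statistics.pstdev(xs) / statistics.pstdev(ys)

src_h, src_k = src(hs, ys), src(ks, ys)
```

For a noise-free linear model with independent inputs the squared SRCs sum to approximately one, and their magnitudes rank the parameters by influence on the output.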

  6. Biosphere dose conversion Factor Importance and Sensitivity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-10-15

    This report presents importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, biosphere dose conversion factors (BDCFs) for the groundwater, and the volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.

  7. The analysis of individual Visegrad group members’ agrarian export sensitivity in relation to selected macroeconomic aggregations

    Directory of Open Access Journals (Sweden)

    Miroslav Svatoš

    2011-01-01

    Full Text Available This paper analyzes the development of agricultural trade of the countries of the Visegrad Group with emphasis on development of the value of agricultural exports of the individual countries. The subject matter of the analysis is the sensitivity of the commodity structure of agricultural exports of individual countries and the identification of aggregations that are the least and the most sensitive to changes to the external and internal economic environment. From the conducted research, agricultural trade in the V4 countries was found to have developed very dynamically from 1993 to 2008, while the commodity structure of exports has constantly narrowed as the degree of specialization of the individual countries has increased (this applies especially to the Czech Republic, Slovakia and Hungary). From the results of analysis of sensitivity to changes of selected variables relating to the development of the value of agricultural exports of the individual V4 countries, it appears that the aggregations that react most sensitively to changes are those that are the subject of re-exports, followed by the aggregations that are characterized by a high degree of added value. In general it can be said that products of agricultural primary production exhibit less sensitivity in comparison with grocery industry products. This is confirmed by the general trend arising from the very nature of consumer behaviour.

  8. Development of Ultra-sensitive Laser Spectroscopic Analysis Technology

    Energy Technology Data Exchange (ETDEWEB)

    Cha, H. K.; Kim, D. H.; Song, K. S. (and others)

    2007-04-15

    Laser spectroscopic analysis technology has three distinct merits in detecting various nuclides found in nuclear fields. High selectivity, originating from the small bandwidth of tunable lasers, makes it possible to distinguish various kinds of isotopes and isomers. The high intensity of a focused laser beam makes it possible to analyze ultratrace amounts. Remote delivery of the laser beam improves the safety of workers who are exposed to dangerous environments. It can also be applied to remote sensing of environmental pollution.

  9. Structured Analysis and Supervision Applied on Heavy Fuel Oil Tanks

    Directory of Open Access Journals (Sweden)

    LAKHOUA Mohamed Najeh

    2016-05-01

    Full Text Available This paper introduces the need for the structured analysis and real time (SA-RT) method of control-command applications in a thermal power plant (TPP) using a supervisory control and data acquisition system (SCADA). Then, the architecture of a SCADA system in a TPP is presented. A significant example of a control-command application is presented: the heavy fuel oil tanks of a TPP. Then an application of a structured analysis method, generally used in industry, on the basis of the SA-RT formalism is presented. In fact, different modules are represented and described: Context Diagram, Data Flows Diagram, Control Flows Diagram, State Transition Diagram, Timing Specifications and Requirements Dictionary. Finally, this functional and operational analysis allows us to assist the different steps of the specification, the programming and the configuration of a new tabular in a SCADA system.

  10. Joint regression analysis and AMMI model applied to oat improvement

    Science.gov (United States)

    Oliveira, A.; Oliveira, T. A.; Mejza, S.

    2012-09-01

    In our work we present an application of some biometrical methods useful in genotype stability evaluation, namely the AMMI model, Joint Regression Analysis (JRA) and multiple comparison tests. A genotype stability analysis of oat (Avena sativa L.) grain yield was carried out using data of the Portuguese Plant Breeding Board: a sample of 22 different genotypes grown during the years 2002, 2003 and 2004 in six locations. In Ferreira et al. (2006) the authors state the relevance of the regression models and of the Additive Main Effects and Multiplicative Interactions (AMMI) model to study and to estimate phenotypic stability effects. As computational techniques, we use the Zigzag algorithm to estimate the regression coefficients and the agricolae package available in R software for the AMMI model analysis.
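Joint Regression Analysis in the Finlay-Wilkinson sense, regressing each genotype's yields on an environment index, can be sketched with an invented yield table (not the Portuguese trial data; the paper's Zigzag algorithm and AMMI fit are not reproduced here):

```python
import statistics

# Illustrative yield table: rows = genotypes, columns = environments.
yields = [
    [4.1, 5.0, 6.2, 3.8],
    [3.5, 4.8, 6.9, 3.1],   # responsive genotype
    [4.4, 4.9, 5.3, 4.2],   # stable genotype
]

# Environment index: mean yield of each environment over all genotypes.
n_env = len(yields[0])
env_index = [statistics.fmean(row[j] for row in yields) for j in range(n_env)]

def fw_slope(genotype_yields, env_index):
    """Finlay-Wilkinson regression slope of one genotype on the
    environment index; b = 1 indicates average stability, b > 1 a
    genotype that responds strongly to favourable environments."""
    mx = statistics.fmean(env_index)
    my = statistics.fmean(genotype_yields)
    num = sum((x - mx) * (y - my) for x, y in zip(env_index, genotype_yields))
    den = sum((x - mx) ** 2 for x in env_index)
    return num / den

slopes = [fw_slope(row, env_index) for row in yields]
```

Because the index is the column mean of the same table, the slopes average to exactly 1, which is the usual sanity check on a JRA fit.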

  11. Applying Content Analysis to Web-based Content

    OpenAIRE

    Kim, Inhwa; Kuljis, Jasna

    2010-01-01

    Using Content Analysis on Web-based content, in particular the content available on Web 2.0 sites, is investigated. The relative strengths and limitations of the method are described. To illustrate how content analysis may be used, we provide a brief overview of a case study that investigates cultural impacts on the use of design features with regard to self-disclosure on the blogs of South Korean and United Kingdom users. In this study we took a standard approach to conducting the content an...

  12. Systems design analysis applied to launch vehicle configuration

    Science.gov (United States)

    Ryan, R.; Verderaime, V.

    1993-01-01

    As emphasis shifts from optimum-performance aerospace systems to least life-cycle costs, systems designs must seek, adapt, and innovate cost improvement techniques in design through operations. The systems design process of concept, definition, and design was assessed for the types and flow of total quality management techniques that may be applicable in a launch vehicle systems design analysis. Techniques discussed are task ordering, quality leverage, concurrent engineering, Pareto's principle, robustness, quality function deployment, criteria, and others. These cost oriented techniques are as applicable to aerospace systems design analysis as to any large commercial system.

  13. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis.

    Science.gov (United States)

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-03-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
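The first phase of the methodology, deriving criteria weights with AHP from a pairwise comparison matrix, can be sketched as follows. The 3-criterion matrix is illustrative, not the study's actual judgements, and Saaty's random index value 0.58 for n = 3 is used for the consistency check:

```python
# AHP priority weights from a pairwise comparison matrix.
pairwise = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
]

def ahp_weights(m, iters=100):
    """Principal-eigenvector weights by power iteration on the
    positive pairwise matrix, normalized to sum to 1."""
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

weights = ahp_weights(pairwise)

# Consistency ratio: estimate lambda_max from A w = lambda w,
# CI = (lambda_max - n) / (n - 1), CR = CI / RI (RI = 0.58 for n = 3).
n = len(pairwise)
aw = [sum(pairwise[i][j] * weights[j] for j in range(n)) for i in range(n)]
lam = sum(aw[i] / weights[i] for i in range(n)) / n
cr = ((lam - n) / (n - 1)) / 0.58
```

A CR below 0.1 is the conventional threshold for accepting the judgements; the Monte Carlo step of the paper would then perturb these weights to probe output uncertainty.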

  14. Structural dynamic responses analysis applying differential quadrature method

    Institute of Scientific and Technical Information of China (English)

    PU Jun-ping; ZHENG Jian-jun

    2006-01-01

    Unconditionally stable higher-order accurate time step integration algorithms based on the differential quadrature method (DQM) for second-order initial value problems were applied, and the quadrature rules of the DQM, the computation of the weighting coefficients and the choice of sampling grid points were discussed. Some numerical examples were computed, dealing with the heat transfer problem, the second-order differential equations of imposed vibration of linear single-degree-of-freedom and double-degree-of-freedom systems, the nonlinear differential equation of motion, and a beam forced by a changing load. The results indicated that the algorithm can produce highly accurate solutions with minimal time consumption, and that the total system energy remains conserved in the numerical computation.
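The DQM weighting coefficients mentioned above have a standard closed form based on Lagrange interpolation polynomials. A minimal sketch on an illustrative uniform grid, including the usual polynomial-exactness check (the derivative is exact for polynomials of degree below the number of grid points):

```python
def dq_weights(x):
    """First-order differential quadrature weighting matrix on grid x:
    a_ij = P(x_i) / ((x_i - x_j) P(x_j)) for i != j, where
    P(x_i) = prod_{k != i} (x_i - x_k), and the diagonal makes each
    row annihilate constants."""
    n = len(x)
    P = [1.0] * n
    for i in range(n):
        for k in range(n):
            if k != i:
                P[i] *= x[i] - x[k]
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i][j] = P[i] / ((x[i] - x[j]) * P[j])
        A[i][i] = -sum(A[i][j] for j in range(n) if j != i)
    return A

# Check: on 5 points the DQ derivative of f(x) = x^2 is exact (2x).
grid = [0.0, 0.25, 0.5, 0.75, 1.0]
f = [xi ** 2 for xi in grid]
df = [sum(a * fv for a, fv in zip(row, f)) for row in dq_weights(grid)]
```

Uniform points are used here only for brevity; in practice Chebyshev-Gauss-Lobatto sampling points are usually preferred for conditioning.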

  15. Thermal Analysis Applied to Verapamil Hydrochloride Characterization in Pharmaceutical Formulations

    Directory of Open Access Journals (Sweden)

    Maria Irene Yoshida

    2010-04-01

    Full Text Available Thermogravimetry (TG) and differential scanning calorimetry (DSC) are useful techniques that have been successfully applied in the pharmaceutical industry to reveal important information regarding the physicochemical properties of drug and excipient molecules such as polymorphism, stability, purity, and formulation compatibility, among others. Verapamil hydrochloride shows thermal stability up to 180 °C and melts at 146 °C, followed by total degradation. The drug is compatible with all the excipients evaluated. The drug showed degradation when subjected to oxidizing conditions, suggesting that the degradation product is 3,4-dimethoxybenzoic acid derived from alkyl side chain oxidation. Verapamil hydrochloride does not present the phenomenon of polymorphism under the conditions evaluated. Assessing the drug degradation kinetics, the drug had a shelf life (t90) of 56.7 years and a pharmaceutical formulation showed a t90 of 6.8 years, demonstrating their high stability.
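Assuming first-order degradation kinetics (the abstract does not state the kinetic model actually fitted), the reported shelf-life figures translate to rate constants via t90 = ln(100/90)/k, as in this small sketch:

```python
import math

def t90_first_order(k_per_year):
    """Shelf life t90 under first-order degradation: time for the assay
    to fall to 90% of label claim, t90 = ln(100/90) / k."""
    return math.log(100.0 / 90.0) / k_per_year

# Back-calculate the rate constant implied by the reported drug shelf
# life of 56.7 years, then check the round trip.
k_drug = math.log(100.0 / 90.0) / 56.7
```

The same relation applied to the formulation's 6.8-year t90 yields a rate constant roughly eight times larger, consistent with the abstract's contrast between drug substance and formulation.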

  16. Condition Monitoring of a Process Filter Applying Wireless Vibration Analysis

    Directory of Open Access Journals (Sweden)

    Pekka KOSKELA

    2011-05-01

    Full Text Available This paper presents a novel wireless vibration-based method for monitoring the degree of feed filter clogging. In the process industry, these filters are applied to prevent impurities entering the process. During operation, the filters gradually become clogged, decreasing the feed flow and, in the worst case, preventing it. The cleaning of the filter should therefore be carried out predictively in order to avoid equipment damage and unnecessary process downtime. The degree of clogging is estimated by first calculating time-domain indices from low-frequency accelerometer samples and then taking the median of the processed values. Nine different statistical quantities are compared based on the estimation accuracy and criteria for operating in resource-constrained environments, with particular focus on energy efficiency. The initial results show that the method is able to detect the degree of clogging, and the approach may be applicable to filter clogging monitoring.
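The "time-domain indices followed by a median" scheme can be sketched as follows. The sample values are invented, and only four common indicators are shown rather than the paper's nine statistical quantities:

```python
import math
import statistics

def time_domain_indices(samples):
    """A few of the usual time-domain condition indicators computed
    from one window of accelerometer samples."""
    n = len(samples)
    mean = sum(samples) / n
    rms = math.sqrt(sum(x * x for x in samples) / n)
    peak = max(abs(x) for x in samples)
    var = sum((x - mean) ** 2 for x in samples) / n
    kurt = (sum((x - mean) ** 4 for x in samples) / n) / var ** 2
    return {"rms": rms, "peak": peak, "crest": peak / rms, "kurtosis": kurt}

# Median over several windows, as in the clogging estimator: robust
# against the occasional outlier window.
windows = [
    [0.10, -0.20, 0.15, -0.10, 0.05, -0.12],
    [0.11, -0.19, 0.14, -0.11, 0.06, -0.13],
    [2.00, -2.10, 1.90, -2.00, 2.05, -1.95],   # outlier window
]
rms_estimate = statistics.median(time_domain_indices(w)["rms"] for w in windows)
```

The median keeps the estimate near the typical window level despite the outlier, which is attractive when a resource-constrained wireless node must report a single robust value.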

  17. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
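The correlation-based part of such a sensitivity analysis can be sketched without Dakota or BISON. The data below are synthetic, and only Pearson and Spearman coefficients are shown (Sobol' indices require a dedicated sampling design). The example uses a strongly nonlinear but monotone input-output relation, for which Spearman stays at 1 while Pearson drops well below 1:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson linear correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def spearman(xs, ys):
    """Spearman rank correlation = Pearson correlation of the ranks
    (ties are not handled; the data here are continuous)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(xs), ranks(ys))

random.seed(7)
xs = [random.uniform(0.0, 1.0) for _ in range(300)]   # 300 samples, as in the study
ys = [x ** 8 for x in xs]                             # monotone, nonlinear response
rp, rs = pearson(xs, ys), spearman(xs, ys)
```

The gap between the two coefficients is itself diagnostic: a high Spearman with a lower Pearson flags a monotone but nonlinear input-output relationship.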

  18. Margin and sensitivity methods for security analysis of electric power systems

    Science.gov (United States)

    Greene, Scott L.

    Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. A method to compute transfer margins by directly locating intermediate events reduces the total number
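The conventional distribution factors mentioned above can be illustrated with the textbook DC power-flow model on a toy 3-bus network with equal line reactances (illustrative only; nothing like the 1500-bus systems the thesis handles):

```python
# DC power flow on a 3-bus network with lines 1-2, 2-3, 1-3, all with
# reactance x = 1 pu, so each line susceptance is 1. Bus 1 is the
# slack. Reduced susceptance matrix for buses 2 and 3:
B = [[ 2.0, -1.0],
     [-1.0,  2.0]]

def solve2(m, rhs):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(rhs[0] * m[1][1] - m[0][1] * rhs[1]) / det,
            (m[0][0] * rhs[1] - rhs[0] * m[1][0]) / det]

# Unit transfer: inject 1 pu at bus 2, withdraw 1 pu at slack bus 1.
theta2, theta3 = solve2(B, [1.0, 0.0])
theta1 = 0.0

# Per-line flows for this unit transfer are the line (power transfer)
# distribution factors: flow_ij = (theta_i - theta_j) / x_ij.
flow_12 = theta1 - theta2
flow_23 = theta2 - theta3
flow_13 = theta1 - theta3
```

With equal reactances the transfer splits 2/3 along the direct path and 1/3 along the two-line path; a security margin on any line is then the line limit minus (distribution factor × transfer), whose derivative with respect to the transfer is exactly the distribution factor.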

  19. Applying an Activity System to Online Collaborative Group Work Analysis

    Science.gov (United States)

    Choi, Hyungshin; Kang, Myunghee

    2010-01-01

    This study determines whether an activity system provides a systematic framework to analyse collaborative group work. Using an activity system as a unit of analysis, the research examined learner behaviours, conflicting factors and facilitating factors while students engaged in collaborative work via asynchronous computer-mediated communication.…

  20. Applying Adult Learning Theory through a Character Analysis

    Science.gov (United States)

    Baskas, Richard S.

    2011-01-01

    The purpose of this study is to analyze the behavior of a character, Celie, in the movie "The Color Purple," through the lens of two adult learning theorists to determine the relationships the character has with each theory. The development and portrayal of characters in movies can be explained and understood by the analysis of adult learning…

  1. Applying Skinner's Analysis of Verbal Behavior to Persons with Dementia

    Science.gov (United States)

    Dixon, Mark; Baker, Jonathan C.; Sadowski, Katherine Ann

    2011-01-01

    Skinner's 1957 analysis of verbal behavior has demonstrated a fair amount of utility in teaching language to children with autism and various other disorders. However, learned language can be forgotten, as is the case for many elderly people suffering from dementia or other degenerative diseases. It appears possible that Skinner's operants may…

  2. RISK ANALYSIS APPLIED IN OIL EXPLORATION AND PRODUCTION

    African Journals Online (AJOL)

    ES Obe

    This research investigated the application of risk analysis to oil exploration and production. Essentially ... uncertainty in oil field projects; it reduces the impact of the losses should an unfavourable .... own merit but since the company has limited funds it can be ...

  3. Novel microstructures and technologies applied in chemical analysis techniques

    NARCIS (Netherlands)

    Spiering, Vincent L.; Spiering, V.L.; van der Moolen, Johannes N.; Burger, Gert-Jan; Burger, G.J.; van den Berg, Albert

    1997-01-01

    Novel glass and silicon microstructures and their application in chemical analysis are presented. The micro technologies comprise (deep) dry etching, thin layer growth and anodic bonding. With this combination it is possible to create high resolution electrically isolating silicon dioxide structures

  4. Action, Content and Identity in Applied Genre Analysis for ESP

    Science.gov (United States)

    Flowerdew, John

    2011-01-01

    Genres are staged, structured, communicative events, motivated by various communicative purposes, and performed by members of specific discourse communities (Swales 1990; Bhatia 1993, 2004; Berkenkotter & Huckin 1995). Since its inception, with the two seminal works on the topic by Swales (1990) and Bhatia (1993), genre analysis has taken pride of…

  5. Parameter uncertainty and sensitivity analysis in sediment flux calculation

    Directory of Open Access Journals (Sweden)

    B. Cheviron

    2011-01-01

    Full Text Available This paper examines uncertainties in the calculation of annual sediment budgets at the outlets of rivers. Emphasis is put on the sensitivity of power-law rating curves to degradations of the available discharge-concentration data. The main purpose is to determine how well predictions arising from usual or modified power laws resist the infrequency of concentration data and the relative uncertainties affecting the source data. This study identifies cases in which the error on the estimated sediment fluxes remains of the same order of magnitude as, or even smaller than, that in the source data, provided the number of concentration data is high enough. The mathematical framework presented allows considering all limitations at once in further detailed investigations. It is applied here to bound the error on sediment budgets for the major French rivers to the sea.
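Fitting a power-law rating curve C = aQ^b by least squares in log-log space, and summing the resulting flux, can be sketched as follows. The discharge series and rating-curve parameters are synthetic and noise-free (the paper's French river records are not used), so the fit recovers the parameters exactly:

```python
import math

# Synthetic discharge series (m^3/s) and a known power-law rating
# curve C = a * Q^b with C in mg/L (illustrative values only).
a_true, b_true = 0.05, 1.4
Q = [120.0, 340.0, 80.0, 560.0, 210.0, 450.0, 150.0, 300.0]
C = [a_true * q ** b_true for q in Q]

# Fit the rating curve by least squares in log space:
# log C = log a + b * log Q.
lx = [math.log(q) for q in Q]
ly = [math.log(c) for c in C]
mx = sum(lx) / len(lx)
my = sum(ly) / len(ly)
b_hat = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
a_hat = math.exp(my - b_hat * mx)

# Instantaneous sediment flux is C * Q (mg/L = g/m^3, so C*Q is g/s);
# an annual budget would weight each sample by its time interval.
flux_g_per_s = sum(a_hat * q ** b_hat * q for q in Q)
```

On real records the interesting part begins here: degrading C (fewer samples, added noise) and refitting shows how the budget error responds, which is the sensitivity question the paper formalizes.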

  6. Applying Cuckoo Search for analysis of LFSR based cryptosystem

    Directory of Open Access Journals (Sweden)

    Maiya Din

    2016-09-01

    Full Text Available Cryptographic techniques are employed for minimizing security hazards to sensitive information. To make the systems more robust, the cyphers or crypts being used need to be analysed, for which cryptanalysts require ways to automate the process, so that cryptographic systems can be tested more efficiently. Evolutionary algorithms provide one such resort, as these are capable of searching for a globally optimal solution very quickly. The Cuckoo Search (CS) Algorithm has been used effectively in cryptanalysis of conventional systems like Vigenere and Transposition cyphers. The Linear Feedback Shift Register (LFSR) is a crypto primitive used extensively in the design of cryptosystems. In this paper, we analyse an LFSR-based cryptosystem using Cuckoo Search to find the correct initial states of the LFSR used. Primitive polynomials of degree 11, 13, 17 and 19 are considered to analyse text crypts of length 200, 300 and 400 characters. Optimal solutions were obtained for the following CS parameters: Levy distribution parameter (β = 1.5) and alien-egg discovery probability (pa = 0.25).
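A sketch of the search target: a Fibonacci LFSR built on the primitive trinomial x^11 + x^2 + 1 (a known degree-11 primitive polynomial, not necessarily the paper's choice), and the bit-match fitness a heuristic like Cuckoo Search would maximize. For brevity, an exhaustive scan of the 2^11 - 1 states stands in for Cuckoo Search, which only becomes necessary at higher degrees:

```python
def lfsr_stream(state_bits, taps, n):
    """Fibonacci LFSR keystream; taps are the exponents of the feedback
    polynomial (here x^11 + x^2 + 1 -> taps at positions 11 and 2)."""
    state = list(state_bits)
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return out

DEGREE, TAPS = 11, (11, 2)
true_state = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
keystream = lfsr_stream(true_state, TAPS, 100)

def fitness(candidate):
    """Fraction of observed keystream bits reproduced: the objective a
    search heuristic such as Cuckoo Search would maximize."""
    trial = lfsr_stream(candidate, TAPS, len(keystream))
    return sum(a == b for a, b in zip(trial, keystream)) / len(keystream)

# Exhaustive search over all nonzero initial states (feasible at
# degree 11; a metaheuristic is needed for degrees 17, 19, ...).
best = max(
    ([int(b) for b in format(s, f"0{DEGREE}b")] for s in range(1, 2 ** DEGREE)),
    key=fitness,
)
```

Because an m-sequence of degree 11 contains no zero run longer than 10 bits, distinct initial states disagree within any 11-bit window, so only the true state attains fitness 1.0.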

  7. Path-sensitive analysis for reducing rollback overheads

    Science.gov (United States)

    O'Brien, John K.P.; Wang, Kai-Ting Amy; Yamashita, Mark; Zhuang, Xiaotong

    2014-07-22

    A mechanism is provided for path-sensitive analysis for reducing rollback overheads. The mechanism receives, in a compiler, program code to be compiled to form compiled code. The mechanism divides the code into basic blocks. The mechanism then determines a restore register set for each of the one or more basic blocks to form one or more restore register sets. The mechanism then stores the one or more restore register sets such that, responsive to a rollback during execution of the compiled code, a rollback routine identifies a restore register set from the one or more restore register sets and restores the registers identified in the identified restore register set.
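A much-simplified model of the restore-register-set computation (the toy IR below is invented for illustration; the patent's actual analysis is path-sensitive and considerably more involved):

```python
# Toy IR: each basic block is a list of (dest, op, srcs) instructions.
# A block's restore register set here is simply the registers it may
# overwrite, i.e. the registers a rollback routine must restore if
# execution is rolled back past this block.
blocks = {
    "B0": [("r1", "add", ("r2", "r3")), ("r2", "mul", ("r1", "r1"))],
    "B1": [("r4", "load", ("r1",))],
    "B2": [("r1", "sub", ("r1", "r4")), ("r5", "add", ("r1", "r2"))],
}

def restore_set(block):
    """Registers written in the block: these must be checkpointed so
    that a rollback can restore their prior values."""
    return {dest for dest, _op, _srcs in block}

restore_sets = {name: restore_set(insns) for name, insns in blocks.items()}
```

A path-sensitive version would refine these sets using which paths can actually reach each block, shrinking the checkpoint state and hence the rollback overhead, which is the gain the patent targets.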

  8. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analysis employs a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses which may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs.

  9. A Spatial Lattice Model Applied for Meteorological Visualization and Analysis

    Directory of Open Access Journals (Sweden)

    Mingyue Lu

    2017-03-01

    Full Text Available Meteorological information has obvious spatial-temporal characteristics. Although it is meaningful to employ a geographic information system (GIS) to visualize and analyze meteorological information for better identification and forecasting of meteorological weather, so as to reduce meteorological disaster losses, modeling meteorological information based on a GIS is still difficult, because meteorological elements generally have no stable shape or clear boundary. To date, there are still few GIS models that can satisfy the requirements of both meteorological visualization and analysis. In this article, a spatial lattice model based on sampling particles is proposed to support both the representation and the analysis of meteorological information. In this model, a spatial sampling particle is regarded as the basic element; it contains the meteorological information and the location where the particle is placed, together with a time mark. The location information is generally represented using a point. As these points can be extended to a surface in two dimensions and a voxel in three dimensions, if these surfaces and voxels can occupy a certain space, then this space can be represented using these spatial sampling particles with their point locations and meteorological information. The full meteorological space can then be represented by arranging numerous particles with their point locations in a certain structure and resolution, i.e., the spatial lattice model, and extended at a higher resolution when necessary. For practical use, the meteorological space is logically classified into three types of spaces, namely the projection surface space, the curved surface space, and the stereoscopic space, and application-oriented spatial lattice models with different organization forms of spatial sampling particles are designed to support the representation, inquiry, and analysis of meteorological information within the three types of spaces. Cases

  10. N170 sensitivity to facial expression: A meta-analysis.

    Science.gov (United States)

    Hinojosa, J A; Mercado, F; Carretié, L

    2015-08-01

    The N170 component is the most important electrophysiological index of face processing. Early studies concluded that it was insensitive to facial expression, thus supporting dual theories postulating separate mechanisms for identity and expression encoding. However, recent evidence contradicts this assumption. We conducted a meta-analysis to resolve these inconsistencies and to derive theoretical implications. A systematic review of 128 studies analyzing the N170 in response to neutral and emotional expressions yielded 57 meta-analyzable experiments (involving 1645 healthy adults). First, the N170 was found to be sensitive to facial expressions, supporting proposals arguing for integrated rather than segregated mechanisms in the processing of identity and expression. Second, this sensitivity is heterogeneous, with angry, fearful and happy faces eliciting the largest N170 amplitudes. Third, we explored some modulatory factors, including the focus of attention (N170 amplitude was also sensitive to unattended expressions) and the reference electrode (a common reference reinforced the effects). In sum, the N170 is a valuable tool for studying the neural processing of facial expressions and developing current theories. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Comparison of sensitivity analysis methods for pollutant degradation modelling: a case study from drinking water treatment.

    Science.gov (United States)

    Neumann, Marc B

    2012-09-01

    Five sensitivity analysis methods based on derivatives, screening, regression, variance decomposition and entropy are introduced, applied and compared for a model predicting micropollutant degradation in drinking water treatment. The sensitivity analysis objectives considered are factors prioritisation (detecting important factors), factors fixing (detecting non-influential factors) and factors mapping (detecting which factors are responsible for causing pollutant limit exceedances). It is shown how the applicability of methods changes in view of increasing interactions between model factors and increasing non-linearity between the model output and the model factors. A high correlation is observed between the indices obtained for the objectives factors prioritisation and factors mapping due to the positive skewness of the probability distributions of the predicted residual pollutant concentrations. The entropy-based method which uses the Kullback-Leibler divergence is found to be particularly suited when assessing pollutant limit exceedances.
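Of the five method families compared, the regression-based one is the easiest to sketch. Under the assumption of independent inputs and a near-linear model, the standardized regression coefficient of a factor reduces to the Pearson correlation between that factor and the output. The model below is a toy stand-in, not the drinking-water degradation model:

```python
import random
import statistics

def src_indices(model, n_factors, n_samples=5000, seed=1):
    """Regression-based sensitivity indices via Monte Carlo sampling.

    For independent uniform inputs and a near-linear model, the
    standardized regression coefficient (SRC) of factor i equals the
    Pearson correlation between x_i and the model output y.
    """
    rng = random.Random(seed)
    X = [[rng.uniform(0.0, 1.0) for _ in range(n_factors)]
         for _ in range(n_samples)]
    y = [model(x) for x in X]
    y_mean = statistics.fmean(y)
    y_sd = statistics.pstdev(y)
    indices = []
    for i in range(n_factors):
        xi = [row[i] for row in X]
        x_mean = statistics.fmean(xi)
        x_sd = statistics.pstdev(xi)
        cov = statistics.fmean((a - x_mean) * (b - y_mean)
                               for a, b in zip(xi, y))
        indices.append(cov / (x_sd * y_sd))
    return indices
```

For strongly non-linear or interacting models, variance-decomposition or entropy-based methods of the kind compared in the paper are needed instead.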

  12. LAMQS analysis applied to ancient Egyptian bronze coins

    Energy Technology Data Exchange (ETDEWEB)

    Torrisi, L., E-mail: lorenzo.torrisi@unime.i [Dipartimento di Fisica dell' Universita di Messina, Salita Sperone, 31, 98166 Messina (Italy); Caridi, F.; Giuffrida, L.; Torrisi, A. [Dipartimento di Fisica dell' Universita di Messina, Salita Sperone, 31, 98166 Messina (Italy); Mondio, G.; Serafino, T. [Dipartimento di Fisica della Materia ed Ingegneria Elettronica dell' Universita di Messina, Salita Sperone, 31, 98166 Messina (Italy); Caltabiano, M.; Castrizio, E.D. [Dipartimento di Lettere e Filosofia dell' Universita di Messina, Polo Universitario dell' Annunziata, 98168 Messina (Italy); Paniz, E.; Salici, A. [Carabinieri, Reparto Investigazioni Scientifiche, S.S. 114, Km. 6, 400 Tremestieri, Messina (Italy)

    2010-05-15

    Some Egyptian bronze coins, dated to the VI-VII century A.D., are analyzed with different physical techniques in order to compare their composition and morphology and to identify their origin and type of manufacture. The investigations were performed using micro-invasive analyses, such as Laser Ablation and Mass Quadrupole Spectrometry (LAMQS), X-ray Fluorescence (XRF), Laser Induced Breakdown Spectroscopy (LIBS), Electronic (SEM) and Optical Microscopy, Surface Profile Analysis (SPA) and density measurements. Results indicate that the coins have a similar bulk composition, but significant differences are evident owing to different constituents of the patina, bulk alloy composition, isotopic ratios, density and surface morphology. The results are in agreement with the archaeological expectations, indicating that the coins were produced at two different Egyptian sites: Alexandria and Antinoupolis. A group of fake coins produced in Alexandria in the same historical period is also identified.

  13. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  14. Principles of Micellar Electrokinetic Capillary Chromatography Applied in Pharmaceutical Analysis

    OpenAIRE

    Árpád Gyéresi; Eleonora Mircia; Brigitta Simon; Aura Rusu; Gabriel Hancu

    2013-01-01

    Since its introduction capillary electrophoresis has shown great potential in areas where electrophoretic techniques have rarely been used before, including here the analysis of pharmaceutical substances. The large majority of pharmaceutical substances are neutral from electrophoretic point of view, consequently separations by the classic capillary zone electrophoresis; where separation is based on the differences between the own electrophoretic mobilities of the analytes; are hard to achieve...

  15. Applying Cognitive Work Analysis to Time Critical Targeting Functionality

    Science.gov (United States)

    2004-10-01

    Fragmentary excerpts from the report: … Target List/Dynamic Target Queue (DTL/DTQ) in the same place. Figure 4-27 shows the task steps involved in achieving Goal 7. … GUI WG to brainstorm the order of columns in the DTL/DTQ Table, a critical component of the TCTF CUI, with successful results, which were … Glossary entries: … Cognitive Work Analysis; DTD - Display Task Description; DTL/DTQ - Dynamic Target List/Dynamic Target Queue; FDO - Fighter Duty Officer; FEBA - Forward Edge …

  16. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analysing the thousands of filter papers a year that may originate from a large-scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large-area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 {mu}m particle diameter cut-off and runs for 24 hours every Sunday and Wednesday, using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques have proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  17. Improving Credit Scorecard Modeling Through Applying Text Analysis

    Directory of Open Access Journals (Sweden)

    Omar Ghailan

    2016-04-01

    Full Text Available In credit card scoring and loans management, the prediction of an applicant's future behavior is an important decision support tool and a key factor in reducing the risk of loan default. Many data mining and classification approaches have been developed for credit scoring. To the best of our knowledge, building a credit scorecard by analyzing the textual data in the application form has not been explored so far. This paper proposes a comprehensive credit scorecard modeling technique that improves credit scorecards through textual data analysis. This study uses a sample of loan application forms from a financial institution providing loan services in Yemen, which represents a real-world credit scoring and loan management situation. The sample contains a set of Arabic textual data attributes describing the applicants. A credit scoring model based on text mining pre-processing and logistic regression techniques is proposed and evaluated through a comparison with a group of credit scorecard modeling techniques that use only the numeric attributes in the application form. The results show that adding textual attribute analysis achieves higher classification effectiveness and outperforms the other traditional numerical data analysis techniques.
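The combination of text mining and logistic regression can be sketched at toy scale. The vocabulary, documents and labels below are invented for illustration, and the paper's Arabic-text pre-processing is not reproduced; only the general mechanics of scoring textual attributes are shown:

```python
import math
import re
from collections import Counter

def featurize(text, vocab):
    """Bag-of-words counts over a fixed (hypothetical) vocabulary."""
    counts = Counter(re.findall(r"\w+", text.lower()))
    return [counts[w] for w in vocab]

def train_logistic(X, y, lr=0.5, epochs=200):
    """Fit a logistic-regression scorecard by stochastic gradient descent.
    The last weight is the intercept."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # zip stops at len(xi), so the intercept w[-1] is added once
            z = w[-1] + sum(wj * a for wj, a in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                     # gradient of the log-loss
            for j, a in enumerate(xi):
                w[j] -= lr * g * a
            w[-1] -= lr * g
    return w

def predict(w, xi):
    """Probability of the positive class (e.g. 'good' applicant)."""
    z = w[-1] + sum(wj * a for wj, a in zip(w, xi))
    return 1.0 / (1.0 + math.exp(-z))
```

In practice these textual features would be concatenated with the numeric application-form attributes before fitting.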

  18. The colour analysis method applied to homogeneous rocks

    Directory of Open Access Journals (Sweden)

    Halász Amadé

    2015-12-01

    Full Text Available Computer-aided colour analysis can facilitate cyclostratigraphic studies. Here we report on a case study involving the development of a digital colour analysis method for examination of the Boda Claystone Formation which is the most suitable in Hungary for the disposal of high-level radioactive waste. Rock type colours are reddish brown or brownish red, or any shade between brown and red. The method presented here could be used to differentiate similar colours and to identify gradual transitions between these; the latter are of great importance in a cyclostratigraphic analysis of the succession. Geophysical well-logging has demonstrated the existence of characteristic cyclic units, as detected by colour and natural gamma. Based on our research, colour, natural gamma and lithology correlate well. For core Ib-4, these features reveal the presence of orderly cycles with thicknesses of roughly 0.64 to 13 metres. Once the core has been scanned, this is a time- and cost-effective method.

  19. SAFE(R): A Matlab/Octave Toolbox (and R Package) for Global Sensitivity Analysis

    Science.gov (United States)

    Pianosi, Francesca; Sarrazin, Fanny; Gollini, Isabella; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis (GSA) is increasingly used in the development and assessment of hydrological models, as well as for dominant control analysis and for scenario discovery to support water resource management under deep uncertainty. Here we present a toolbox for the application of GSA, called SAFE (Sensitivity Analysis For Everybody), that implements several established GSA methods, including the method of Morris, Regional Sensitivity Analysis, variance-based sensitivity analysis (Sobol') and FAST. It also includes new approaches and visualization tools to complement these established methods. The toolbox is released in two versions, one running under Matlab/Octave (called SAFE) and one running in R (called SAFER). Thanks to its modular structure, SAFE(R) can easily be integrated with other toolboxes and packages, and with models running in a different computing environment. Another interesting feature of SAFE(R) is that all the implemented methods include specific functions for assessing the robustness and convergence of the sensitivity estimates. Furthermore, SAFE(R) includes numerous visualisation tools for the effective investigation and communication of GSA results. The toolbox is designed to make GSA accessible to non-specialist users, and to provide fully commented code for more experienced users to complement their own tools. The documentation includes a set of workflow scripts with practical guidelines on how to apply GSA and how to use the toolbox. SAFE(R) is open source and freely available from the following website: http://bristol.ac.uk/cabot/resources/safe-toolbox/ Ultimately, SAFE(R) aims at improving the diffusion and quality of GSA practice in the hydrological modelling community.
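The method of Morris, one of the methods the toolbox implements, can be illustrated with a stripped-down elementary-effects estimator (a sketch of the general method, not the toolbox's code; an even number of grid levels is assumed):

```python
import random
import statistics

def morris_mu_star(model, k, r=20, levels=4, seed=0):
    """Method of Morris: mean absolute elementary effect (mu*) per factor.

    Estimated from r one-at-a-time trajectories on a p-level grid in
    [0, 1]^k, using the standard step delta = p / (2(p - 1)).
    """
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))
    effects = [[] for _ in range(k)]
    for _ in range(r):
        # random grid base point chosen so that x_i + delta stays in [0, 1]
        x = [rng.randrange(levels // 2) / (levels - 1) for _ in range(k)]
        fx = model(x)
        for i in rng.sample(range(k), k):   # perturb factors in random order
            x[i] += delta
            fxi = model(x)
            effects[i].append(abs(fxi - fx) / delta)
            fx = fxi
    return [statistics.fmean(e) for e in effects]
```

For a linear model the elementary effect of each factor equals its coefficient exactly, which makes the estimator easy to check.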

  20. Sensitivity analysis of the GNSS derived Victoria plate motion

    Science.gov (United States)

    Apolinário, João; Fernandes, Rui; Bos, Machiel

    2014-05-01

    Fernandes et al. (2013) estimated the angular velocity of the Victoria tectonic block from geodetic data (GNSS-derived velocities) only. GNSS observations are sparse in this region, and it is therefore of the utmost importance to use the available data (5 sites) in the most optimal way. Unfortunately, the existing time-series were affected by missing data and offsets. In addition, some time-series were close to the minimal threshold considered necessary to compute one reliable velocity solution: 2.5-3.0 years. In this research, we focus on the sensitivity of the derived angular velocity to changes in the data (a longer data span for some stations) by extending the data span used: Fernandes et al. (2013) used data until September 2011. We also investigate the effect of adding other stations to the solution, which is now possible since more stations have become available in the region. In addition, we study whether the conventional power-law plus white noise model is indeed the best stochastic model. In this respect, we apply different noise models using HECTOR (Bos et al., 2013), which can use different noise models and estimate offsets and seasonal signals simultaneously. The estimation of seasonal signals is another important aspect, since the time-series are rather short or have data gaps at some stations, which implies that the seasonal signals can still have some effect on the estimated trends, as shown by Blewitt and Lavallee (2002) and Bos et al. (2010). We also quantify the magnitude of such differences in the estimation of the secular velocity and their effect on the derived angular velocity. Concerning the offsets, we investigate how they can, whether detected or undetected, influence the estimated plate motion. The times of offsets have been determined by visual inspection of the time-series. The influence of undetected offsets has been assessed by adding small synthetic random walk signals that are too small to be detected visually but might have an effect on the

  1. A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy

    Directory of Open Access Journals (Sweden)

    Oktay Büyükaşık

    2010-12-01

    Full Text Available Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomatology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy due to gastric cancer in the 2nd Clinic of General Surgery, SSK Ankara Training Hospital. Six different reconstruction methods were used and analyzed in terms of age, sex and postoperative complications. One biopsy specimen from the esophagus and two from the jejunum were taken through upper gastrointestinal endoscopy from all cases, and late-period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea and anorexia were assessed. Results: Of the 31 patients, 18 were male and 13 female; the youngest was 33 years old and the oldest 69. Reconstruction without a pouch was performed in 22 cases and with a pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea and anemia were found most commonly in cases with reconstruction without a pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was seen most commonly with omega esophagojejunostomy (EJ) and least with Roux-en-Y, Tooley and Tanner 19 EJ. Conclusion: Reconstruction with a pouch performed after total gastrectomy is still a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31)

  2. Smart Kd-values, their uncertainties and sensitivities - Applying a new approach for realistic distribution coefficients in geochemical modeling of complex systems.

    Science.gov (United States)

    Stockmann, M; Schikora, J; Becker, D-A; Flügge, J; Noseck, U; Brendler, V

    2017-08-23

    One natural retardation process to be considered in risk assessment for contaminants in the environment is sorption on mineral surfaces. A realistic geochemical modeling is of high relevance in many application areas such as groundwater protection, environmental remediation, or disposal of hazardous waste. Most often concepts with constant distribution coefficients (Kd-values) are applied in geochemical modeling with the advantage to be simple and computationally fast, but not reflecting changes in geochemical conditions. In this paper, we describe an innovative and efficient method, where the smart Kd-concept, a mechanistic approach mainly based on surface complexation modeling, is used (and modified for complex geochemical models) to calculate and apply realistic distribution coefficients. Using the geochemical speciation code PHREEQC, multidimensional smart Kd-matrices are computed as a function of varying (or uncertain) environmental conditions. On the one hand, sensitivity and uncertainty statements for the distribution coefficients can be derived. On the other hand, smart Kd-matrices can be used in reactive transport (or migration) codes (not shown here). This strategy has various benefits: (1) rapid computation of Kd-values for large numbers of environmental parameter combinations; (2) variable geochemistry is taken into account more realistically; (3) efficiency in computing time is ensured, and (4) uncertainty and sensitivity analysis are accessible. Results are presented exemplarily for the sorption of uranium(VI) onto a natural sandy aquifer material and are compared to results based on the conventional Kd-concept. In general, the sorption behavior of U(VI) in dependence of changing geochemical conditions is described quite well. Copyright © 2017 Elsevier Ltd. All rights reserved.
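At run time a smart-Kd matrix acts as a lookup table over geochemical conditions. A sketch with two hypothetical conditioning variables (pH and ionic strength; the paper's matrices are multidimensional and PHREEQC-derived) using bilinear interpolation:

```python
from bisect import bisect_left

def interp_kd(kd_matrix, ph_grid, ionic_grid, ph, ionic):
    """Bilinearly interpolate a precomputed 2-D smart-Kd matrix.

    kd_matrix  : kd_matrix[i][j] is the Kd value at (ph_grid[i], ionic_grid[j])
    ph, ionic  : the geochemical conditions at which Kd is required
    """
    def bracket(grid, v):
        # indices of the grid cell containing v (clamped to the grid edges)
        j = min(max(bisect_left(grid, v), 1), len(grid) - 1)
        return j - 1, j

    i0, i1 = bracket(ph_grid, ph)
    j0, j1 = bracket(ionic_grid, ionic)
    tx = (ph - ph_grid[i0]) / (ph_grid[i1] - ph_grid[i0])
    ty = (ionic - ionic_grid[j0]) / (ionic_grid[j1] - ionic_grid[j0])
    top = kd_matrix[i0][j0] * (1 - ty) + kd_matrix[i0][j1] * ty
    bot = kd_matrix[i1][j0] * (1 - ty) + kd_matrix[i1][j1] * ty
    return top * (1 - tx) + bot * tx
```

This is what makes the approach fast in transport codes: the expensive surface-complexation chemistry is precomputed once, and each transport step only performs a table lookup.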

  3. Image analysis technique applied to lock-exchange gravity currents

    OpenAIRE

    Nogueira, Helena; Adduce, Claudia; Alves, Elsa; Franca, Rodrigues Pereira Da; Jorge, Mario

    2013-01-01

    An image analysis technique is used to estimate the two-dimensional instantaneous density field of unsteady gravity currents produced by full-depth lock-release of saline water. An experiment reproducing a gravity current was performed in a 3.0 m long, 0.20 m wide and 0.30 m deep Perspex flume with horizontal smooth bed and recorded with a 25 Hz CCD video camera under controlled light conditions. Using dye concentration as a tracer, a calibration procedure was established for each pixel in th...

  4. Applying temporal network analysis to the venture capital market

    Science.gov (United States)

    Zhang, Xin; Feng, Ling; Zhu, Rongqian; Stanley, H. Eugene

    2015-10-01

    Using complex network theory to study the investment relationships of venture capital firms has produced a number of significant results. However, previous studies have often neglected the temporal properties of those relationships, which in real-world scenarios play a pivotal role. Here we examine the time-evolving dynamics of venture capital investment in China by constructing temporal networks to represent (i) investment relationships between venture capital firms and portfolio companies and (ii) the syndication ties between venture capital investors. The evolution of the networks exhibits rich variations in centrality, connectivity and local topology. We demonstrate that a temporal network approach provides a dynamic and comprehensive analysis of real-world networks.
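One common way to handle such temporal networks is as a sequence of snapshots. A minimal sketch (illustrative, not the study's pipeline) tracking degree centrality of syndication ties per time window:

```python
from collections import defaultdict

def degree_centrality_over_time(edges):
    """Build per-window degree counts from a temporal edge list.

    edges : list of (time_window, firm_a, firm_b) syndication ties
    Returns {time_window: {node: degree}}, so the centrality of each
    venture capital firm can be tracked as the network evolves.
    """
    snapshots = defaultdict(lambda: defaultdict(int))
    for t, a, b in edges:
        snapshots[t][a] += 1
        snapshots[t][b] += 1
    return {t: dict(d) for t, d in snapshots.items()}
```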

  5. Methods of analysis applied on the e-shop Arsta

    OpenAIRE

    Flégl, Jan

    2013-01-01

    Bachelor thesis is focused on summarizing methods of e-shop analysis. The first chapter summarizes and describes the basics of e-commerce and e-shops in general. The second chapter deals with search engines, their functioning and in what ways it is possible to influence the order of search results. Special attention is paid to the optimization and search engine marketing. The third chapter summarizes basic tools of the Google Analytics. The fourth chapter uses findings of all the previous cha...

  6. Parametric sensitivity analysis for temperature control in outdoor photobioreactors.

    Science.gov (United States)

    Pereira, Darlan A; Rodrigues, Vinicius O; Gómez, Sonia V; Sales, Emerson A; Jorquera, Orlando

    2013-09-01

    In this study, a critical analysis of the input parameters of a model describing the broth temperature in flat-plate photobioreactors throughout the day is carried out in order to assess the effect of these parameters on the model. Using a design-of-experiments approach, variation of selected parameters was introduced, and the influence of each parameter on the broth temperature was evaluated by a parametric sensitivity analysis. The results show that the major influences on the broth temperature are those from the reactor wall and the shading factor, both related to direct and reflected solar irradiation. Another parameter that plays an important role in the temperature is the distance between plates. This study provides information to improve the design and establish the most appropriate operating conditions for the cultivation of microalgae in outdoor systems.
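The mechanics of such a parametric sensitivity analysis can be sketched generically. The broth-temperature surrogate below is a hypothetical steady-state balance invented for illustration, not the paper's model; the point is the one-at-a-time normalized-sensitivity calculation:

```python
def oat_sensitivity(model, base, rel_step=0.1):
    """One-at-a-time normalized sensitivity of a scalar model.

    Each parameter is perturbed by rel_step (default 10%) about its base
    value; the index returned is (relative output change)/(relative input
    change), so |index| ~ 1 means a proportional influence.
    """
    y0 = model(base)
    out = {}
    for name, v in base.items():
        p = dict(base)
        p[name] = v * (1 + rel_step)
        out[name] = (model(p) - y0) / y0 / rel_step
    return out

# Hypothetical surrogate: ambient temperature plus an irradiance gain
# reduced by shading and removed by a heat-transfer coefficient h.
broth_temp = lambda p: p["T_amb"] + p["G"] * (1 - p["shade"]) / p["h"]
```

Ranking the resulting indices identifies the parameters (here irradiance and shading) that dominate the temperature response.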

  7. Sensitivity analysis of stochastically forced quasiperiodic self-oscillations

    Directory of Open Access Journals (Sweden)

    Irina Bashkirtseva

    2016-08-01

    Full Text Available We study stochastically forced quasi-periodic self-oscillations of nonlinear dynamic systems, which are modelled by an invariant torus in the phase space. For weak noise, an asymptotic approximation of the stationary distribution of random trajectories is studied using the quasipotential. For the constructive analysis of the probabilistic distribution near a torus, we use a quadratic approximation of the quasipotential. A parametric description of this approximation is based on the stochastic sensitivity functions (SSF) technique. Using this technique, we create a new mathematical method for the probabilistic analysis of stochastic flows near the torus. The construction of the SSF is reduced to a boundary value problem for a linear matrix differential equation. For the case of a two-torus in three-dimensional space, a constructive solution of this problem is given. Our theoretical results are illustrated with an example.

  8. Operational modal analysis applied to the concert harp

    Science.gov (United States)

    Chomette, B.; Le Carrou, J.-L.

    2015-05-01

    Operational modal analysis (OMA) methods are useful to extract modal parameters of operating systems. These methods seem to be particularly interesting to investigate the modal basis of string instruments during operation to avoid certain disadvantages due to conventional methods. However, the excitation in the case of string instruments is not optimal for OMA due to the presence of damped harmonic components and low noise in the disturbance signal. Therefore, the present study investigates the least-square complex exponential (LSCE) and the modified least-square complex exponential methods in the case of a string instrument to identify modal parameters of the instrument when it is played. The efficiency of the approach is experimentally demonstrated on a concert harp excited by some of its strings and the two methods are compared to a conventional modal analysis. The results show that OMA allows us to identify modes particularly present in the instrument's response with a good estimation especially if they are close to the excitation frequency with the modified LSCE method.

  9. Principles of micellar electrokinetic capillary chromatography applied in pharmaceutical analysis.

    Science.gov (United States)

    Hancu, Gabriel; Simon, Brigitta; Rusu, Aura; Mircia, Eleonora; Gyéresi, Arpád

    2013-01-01

    Since its introduction capillary electrophoresis has shown great potential in areas where electrophoretic techniques have rarely been used before, including here the analysis of pharmaceutical substances. The large majority of pharmaceutical substances are neutral from electrophoretic point of view, consequently separations by the classic capillary zone electrophoresis; where separation is based on the differences between the own electrophoretic mobilities of the analytes; are hard to achieve. Micellar electrokinetic capillary chromatography, a hybrid method that combines chromatographic and electrophoretic separation principles, extends the applicability of capillary electrophoretic methods to neutral analytes. In micellar electrokinetic capillary chromatography, surfactants are added to the buffer solution in concentration above their critical micellar concentrations, consequently micelles are formed; micelles that undergo electrophoretic migration like any other charged particle. The separation is based on the differential partitioning of an analyte between the two-phase system: the mobile aqueous phase and micellar pseudostationary phase. The present paper aims to summarize the basic aspects regarding separation principles and practical applications of micellar electrokinetic capillary chromatography, with particular attention to those relevant in pharmaceutical analysis.
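The micellar partitioning described above gives MEKC a modified retention factor: because the micellar pseudostationary phase itself migrates, the finite migration window between the EOF marker and the micelle marker enters the denominator (Terabe's relation):

```python
def mekc_retention_factor(t_r, t_0, t_mc):
    """Retention factor k in MEKC: k = (t_r - t_0) / (t_0 * (1 - t_r/t_mc)).

    t_r  : migration time of the analyte
    t_0  : migration time of the EOF marker (unretained solute)
    t_mc : migration time of the micelle marker

    As t_mc -> infinity the expression reduces to the conventional
    chromatographic retention factor (t_r - t_0) / t_0.
    """
    return (t_r - t_0) / (t_0 * (1.0 - t_r / t_mc))
```

All analyte migration times fall inside the window t_0 < t_r < t_mc, so k ranges from 0 (no micellar partitioning) toward infinity (fully micelle-bound).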

  10. Dynamical Systems Analysis Applied to Working Memory Data

    Directory of Open Access Journals (Sweden)

    Fidan Gasimova

    2014-07-01

    Full Text Available In the present paper we investigate weekly fluctuations in working memory capacity (WMC) assessed over a period of two years. We use dynamical systems analysis, specifically a second-order linear differential equation, to model weekly variability in WMC in a sample of 112 9th graders. To deal with missing data in our longitudinal data set we use a B-spline imputation method. The results show a significant negative frequency parameter, indicating a cyclical pattern in weekly memory updating (MU) performance across time. We use a multilevel modeling approach to capture individual differences in model parameters and find that a higher initial performance level and a slower improvement on the MU task are associated with a slower frequency of oscillation. Additionally, we conduct a simulation study examining the analysis procedure's performance using different numbers of B-spline knots and values of time-delay embedding dimensions. Results show that the number of knots in the B-spline imputation influences accuracy more than the number of embedding dimensions.
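The second-order linear differential equation used in this literature can be written as a damped linear oscillator, x'' = η·x + ζ·x', where a negative frequency parameter η implies cycles with period 2π/√(−η). The integrator below is a sketch of the model's behavior, not the study's estimation procedure:

```python
import math

def simulate_dlo(eta, zeta, x0, dx0, dt=0.01, steps=2000):
    """Integrate the damped linear oscillator x'' = eta*x + zeta*x'
    with semi-implicit Euler steps; returns the trajectory of x."""
    x, v = x0, dx0
    xs = [x]
    for _ in range(steps):
        a = eta * x + zeta * v
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

def implied_period(eta):
    """For eta < 0 (and weak damping) the model oscillates with
    period 2*pi/sqrt(-eta)."""
    return 2.0 * math.pi / math.sqrt(-eta)
```

In the empirical analysis η and ζ are estimated from the (imputed) weekly WMC series rather than fixed, and multilevel modeling lets them vary across students.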

  11. Principles of Micellar Electrokinetic Capillary Chromatography Applied in Pharmaceutical Analysis

    Directory of Open Access Journals (Sweden)

    Árpád Gyéresi

    2013-02-01

    Full Text Available Since its introduction capillary electrophoresis has shown great potential in areas where electrophoretic techniques have rarely been used before, including here the analysis of pharmaceutical substances. The large majority of pharmaceutical substances are neutral from electrophoretic point of view, consequently separations by the classic capillary zone electrophoresis; where separation is based on the differences between the own electrophoretic mobilities of the analytes; are hard to achieve. Micellar electrokinetic capillary chromatography, a hybrid method that combines chromatographic and electrophoretic separation principles, extends the applicability of capillary electrophoretic methods to neutral analytes. In micellar electrokinetic capillary chromatography, surfactants are added to the buffer solution in concentration above their critical micellar concentrations, consequently micelles are formed; micelles that undergo electrophoretic migration like any other charged particle. The separation is based on the differential partitioning of an analyte between the two-phase system: the mobile aqueous phase and micellar pseudostationary phase. The present paper aims to summarize the basic aspects regarding separation principles and practical applications of micellar electrokinetic capillary chromatography, with particular attention to those relevant in pharmaceutical analysis.

  12. Improved environmental multimedia modeling and its sensitivity analysis.

    Science.gov (United States)

    Yuan, Jing; Elektorowicz, Maria; Chen, Zhi

    2011-01-01

    Modeling of multimedia environmental issues is extremely complex due to the intricacy of the systems and the many factors involved. In this study, an improved environmental multimedia model is developed, and a number of related test problems are examined and compared against standard numerical and analytical methodologies. The results indicate that the flux output of the new model is lower in the unsaturated and groundwater zones than that of the traditional environmental multimedia model. Furthermore, about 90% of the total benzene flux was distributed to the air zone from the landfill sources, and only 10% of the total flux was emitted into the unsaturated and groundwater zones under non-uniform conditions. This paper also includes a sensitivity analysis of the model with respect to parameters such as the Peclet number (Pe). The results show that Pe can be treated as a deterministic input variable for the transport output. Oscillatory behavior is eliminated as Pe decreases; in addition, the numerical methods become more accurate than the analytical methods as Pe increases. In conclusion, the improved environmental multimedia model system and its sensitivity analysis can be used to address the complex fate and transport of pollutants in multimedia environments and thus help to manage environmental impacts.
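The link between the Peclet number and oscillatory numerical behavior can be illustrated on a toy 1-D steady advection-diffusion problem (an assumed stand-in, not the paper's multimedia model): central differencing produces spurious oscillations once the cell Peclet number u·h/D exceeds 2, and refining the grid (lowering the cell Pe) eliminates them.

```python
def solve_advection_diffusion(n, u=1.0, diff=0.01):
    # Steady -D c'' + u c' = 0 on [0, 1] with c(0)=0, c(1)=1,
    # central differences on n interior nodes, h = 1/(n+1).
    h = 1.0 / (n + 1)
    sub = [-diff / h**2 - u / (2 * h)] * n   # coefficient of c_{i-1}
    dia = [2 * diff / h**2] * n
    sup = [-diff / h**2 + u / (2 * h)] * n   # coefficient of c_{i+1}
    rhs = [0.0] * n
    rhs[-1] -= sup[-1] * 1.0                 # boundary value c(1) = 1
    # Thomas algorithm for the tridiagonal system
    for i in range(1, n):
        w = sub[i] / dia[i - 1]
        dia[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    c = [0.0] * n
    c[-1] = rhs[-1] / dia[-1]
    for i in range(n - 2, -1, -1):
        c[i] = (rhs[i] - sup[i] * c[i + 1]) / dia[i]
    return [0.0] + c + [1.0]

def is_monotone(c):
    return all(b >= a - 1e-12 for a, b in zip(c, c[1:]))

coarse = solve_advection_diffusion(9)    # cell Pe = u*h/D = 10 > 2: oscillates
fine = solve_advection_diffusion(999)    # cell Pe = 0.1: monotone boundary layer
```

The coarse solution oscillates around the exact monotone profile, mirroring the abstract's observation that oscillatory behavior disappears as Pe decreases.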

  13. A Workflow for Global Sensitivity Analysis of PBPK Models

    Directory of Open Access Journals (Sweden)

    Kevin eMcNally

    2011-06-01

    Full Text Available Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structure of a PBPK model is an ideal framework into which disparate in vitro and in vivo data can be integrated and used to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well-known nonlinear biological processes, such as enzyme saturation, and interactions between parameters, such as organ mass and body mass. Therefore, an appropriate sensitivity analysis technique is required that can quantify the influences associated with individual parameters, interactions between parameters and any nonlinear processes. In this report we define a workflow for sensitivity analysis of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors and regulators.
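The variance-based, interaction-aware approach underlying such a workflow can be sketched with a plain Saltelli-type Monte Carlo estimator of first-order Sobol indices. The toy model below is an assumed stand-in for a PBPK output, not anything from the paper; for Y = 2·X1 + X2 with uniform inputs the analytic indices are S1 = 0.8 and S2 = 0.2.

```python
import random

random.seed(0)

# Hypothetical two-parameter toy model standing in for a PBPK output.
def model(x1, x2):
    return 2.0 * x1 + 1.0 * x2

N = 50000
A = [(random.random(), random.random()) for _ in range(N)]
B = [(random.random(), random.random()) for _ in range(N)]
fA = [model(*a) for a in A]
fB = [model(*b) for b in B]
mean = sum(fA) / N
var = sum((y - mean) ** 2 for y in fA) / N

def first_order(i):
    # Saltelli estimator: replace column i of A with B's column i
    acc = 0.0
    for a, b, ya, yb in zip(A, B, fA, fB):
        mixed = list(a)
        mixed[i] = b[i]
        acc += yb * (model(*mixed) - ya)
    return (acc / N) / var

S1, S2 = first_order(0), first_order(1)
```

For real PBPK models the same design also yields total-order indices, which capture the parameter interactions the abstract emphasizes.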

  14. Orbit uncertainty propagation and sensitivity analysis with separated representations

    Science.gov (United States)

    Balducci, Marc; Jones, Brandon; Doostan, Alireza

    2017-09-01

    Most approximations for stochastic differential equations with high-dimensional, non-Gaussian inputs suffer from a rapid (e.g., exponential) increase in computational cost, an issue known as the curse of dimensionality. In astrodynamics, this results in reduced accuracy when propagating an orbit-state probability density function. This paper considers the application of separated representations for orbit uncertainty propagation, where future states are expanded into a sum of products of univariate functions of the initial states and other uncertain parameters. Accurate generation of a separated representation requires a number of state samples that is linear in the dimension of the input uncertainties, and the computational cost scales linearly with the sample count, thereby improving tractability compared to methods that suffer from the curse of dimensionality. In addition to detailed discussions of their construction and use in sensitivity analysis, this paper presents results for three test cases of an Earth-orbiting satellite. The first two cases demonstrate that approximation via separated representations produces a tractable solution for propagating the Cartesian orbit-state uncertainty with up to 20 uncertain inputs. The third case, which instead uses equinoctial elements, reexamines a scenario presented in the literature and employs the proposed method for sensitivity analysis to more thoroughly characterize the relative effects of uncertain inputs on the propagated state.

  15. Sensitivity analysis of numerical model of prestressed concrete containment

    Energy Technology Data Exchange (ETDEWEB)

    Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz

    2015-12-15

    Graphical abstract: - Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long-term analysis. • Prestressing schedule is a key factor in the early stages. - Abstract: Safety is always the main consideration in the design of a nuclear power plant containment. However, the efficiency of the design process should also be taken into consideration. Despite the advances in computational capabilities in recent years, simplified analyses may prove useful for preliminary scoping or trade studies. In this paper, a study of the sensitivity of a finite element model of a prestressed concrete containment to changes in geometry, loads and other factors is presented. The importance of the steel liner, reinforcement, prestressing process, temperature changes, material nonlinearity and finite element mesh density is assessed for the main stages of the containment's life cycle. Although the modeling adjustments did not produce any significant changes in computation time, it was found that in some cases a simplified modeling process can lead to a significant reduction of work time without degradation of the results.

  16. Adjoint Sensitivity Analysis of Radiative Transfer Equation: Temperature and Gas Mixing Ratio Weighting Functions for Remote Sensing of Scattering Atmospheres in Thermal IR

    Science.gov (United States)

    Ustinov, E.

    1999-01-01

    Sensitivity analysis based on the adjoint equation of radiative transfer is applied to the case of atmospheric remote sensing in the thermal spectral region with non-negligible atmospheric scattering.

  17. Independent comparison study of six different electronic tongues applied for pharmaceutical analysis.

    Science.gov (United States)

    Pein, Miriam; Kirsanov, Dmitry; Ciosek, Patrycja; del Valle, Manel; Yaroshenko, Irina; Wesoły, Małgorzata; Zabadaj, Marcin; Gonzalez-Calabuig, Andreu; Wróblewski, Wojciech; Legin, Andrey

    2015-10-10

    Electronic tongue technology, based on arrays of cross-sensitive chemical sensors and chemometric data processing, has attracted considerable research attention over recent years. The applications reported so far for pharmaceutical tasks have employed different e-tongue systems to address different objectives, which makes it hard to judge the benefits and drawbacks of particular e-tongue implementations for R&D in pharmaceutics. The objective of this study was to compare the performance of six different e-tongues applied to the same set of pharmaceutical samples. For this purpose, two commercially available systems (from Insent and AlphaMOS) and four laboratory prototype systems (two potentiometric systems from Warsaw operating in flow and static modes, one potentiometric system from St. Petersburg, and one voltammetric system from Barcelona) were employed. The sample set addressed in the study comprised nine different formulations based on caffeine citrate, lactose monohydrate, maltodextrine, saccharin sodium and citric acid in various combinations. To provide for a fair and unbiased comparison, samples were evaluated under blind conditions and data processing from all the systems was performed in a uniform way. Different mathematical methods were applied to judge the similarity of the e-tongue responses to the samples: principal component analysis (PCA), RV′ matrix correlation coefficients and Tucker's congruence coefficients.
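One of the similarity measures mentioned, the RV matrix correlation coefficient, compares two multivariate data sets measured on the same samples (the study used a modified RV′ variant). A minimal sketch with invented numbers, not the e-tongue data:

```python
# RV coefficient between two data matrices sharing the same rows (samples):
# RV = <W_X, W_Y> / (||W_X|| * ||W_Y||) with W = X_c X_c^T, X_c column-centered.
def centered(X):
    cols = range(len(X[0]))
    means = [sum(row[j] for row in X) / len(X) for j in cols]
    return [[row[j] - means[j] for j in cols] for row in X]

def cross_product(X):
    # configuration matrix W = X X^T (n x n)
    return [[sum(a * b for a, b in zip(r1, r2)) for r2 in X] for r1 in X]

def rv_coefficient(X, Y):
    WX, WY = cross_product(centered(X)), cross_product(centered(Y))
    inner = sum(x * y for rx, ry in zip(WX, WY) for x, y in zip(rx, ry))
    nx = sum(x * x for row in WX for x in row)
    ny = sum(y * y for row in WY for y in row)
    return inner / (nx * ny) ** 0.5

X = [[1.0, 2.0], [2.0, 0.5], [3.0, 4.0], [4.0, 2.5]]
Y = [[2.0 * v for v in row] for row in X]   # same configuration, rescaled
Z = [[0.5, 3.0], [2.5, 1.0], [1.0, 0.0], [3.0, 2.0]]
```

RV is 1 for data sets with identical sample configurations (e.g. one is a rescaling of the other) and lies in [0, 1] in general, which is what makes it usable for comparing responses from dissimilar sensor arrays.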

  18. Downside Risk analysis applied to the Hedge Funds universe

    Science.gov (United States)

    Perelló, Josep

    2007-09-01

    Hedge funds are considered one of the fastest-growing portfolio management sectors of the past decade. Optimal hedge fund management requires appropriate risk metrics. The classic CAPM theory and its Sharpe ratio fail to capture some crucial aspects due to the strongly non-Gaussian character of hedge fund statistics. A possible way around this problem, while keeping the simplicity of CAPM, is the so-called Downside Risk analysis. One important benefit lies in distinguishing between good and bad returns, that is, returns greater or lower than the investor's goal. We revisit the most popular Downside Risk indicators and provide new analytical results on them. We compute these measures on the Credit Suisse/Tremont Investable Hedge Fund Index data, with the Gaussian case as a benchmark. In this way, an unusual transversal reading of the existing Downside Risk measures is provided.
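Two of the most common Downside Risk indicators in this line of work, downside deviation below an investor's target (MAR) and the associated Sortino-style ratio, can be sketched as follows; the monthly return series is made up, not index data.

```python
import math

# Downside deviation: root-mean-square of shortfalls below the target,
# so only "bad" returns (below the investor's goal) contribute to risk.
def downside_deviation(returns, target=0.0):
    shortfalls = [min(r - target, 0.0) ** 2 for r in returns]
    return math.sqrt(sum(shortfalls) / len(returns))

def sortino_ratio(returns, target=0.0):
    # mean excess return over the target, scaled by downside deviation
    mean_excess = sum(r - target for r in returns) / len(returns)
    return mean_excess / downside_deviation(returns, target)

rets = [0.04, -0.02, 0.01, -0.05, 0.03, 0.02]   # invented monthly returns
dd = downside_deviation(rets)
ratio = sortino_ratio(rets)
```

Unlike the Sharpe ratio's standard deviation, the denominator here penalizes only returns below the goal, which is the distinction the abstract draws between good and bad returns.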

  19. Neutron activation analysis applied to nutritional and foodstuff studies

    Energy Technology Data Exchange (ETDEWEB)

    Maihara, Vera A.; Santos, Paola S.; Moura, Patricia L.C.; Castro, Lilian P. de, E-mail: vmaihara@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Avegliano, Roseane P., E-mail: pagliaro@usp.b [Universidade de Sao Paulo (USP), SP (Brazil). Coordenadoria de Assistencia Social. Div. de Alimentacao

    2009-07-01

    Neutron Activation Analysis (NAA) has been successfully used on a regular basis in several areas of nutrition and foodstuffs research. NAA has become an important and useful research tool due to the methodology's advantages: high accuracy, small sample quantities and no chemical treatment. The technique allows the determination of important elements directly related to human health, and provides data on essential and toxic element concentrations in foodstuffs and specific diets. In this paper some studies in the area of nutrition carried out at the Neutron Activation Laboratory of IPEN/CNEN-SP are presented: a Brazilian total diet study of nutritional element dietary intakes of the Sao Paulo state population; a study of trace elements in maternal milk; and the determination of essential trace elements in some edible mushrooms. (author)

  20. Wavelets Applied to CMB Maps: a Multiresolution Analysis for Denoising

    CERN Document Server

    Sanz, J L; Cayon, L; Martínez-González, E; Barriero, R B; Toffolatti, L

    1999-01-01

    Analysis and denoising of Cosmic Microwave Background (CMB) maps are performed using wavelet multiresolution techniques. The method is tested on $12^{\circ}.8\times 12^{\circ}.8$ maps with resolution resembling the experimental one expected for future high-resolution space observations. Semianalytic formulae for the variance of the wavelet coefficients are given for the Haar and Mexican Hat wavelet bases. Results are presented for the standard Cold Dark Matter (CDM) model. Denoising of simulated maps is carried out by removal of wavelet coefficients dominated by instrumental noise. CMB maps with a signal-to-noise ratio $S/N \sim 1$ are denoised with an error improvement factor between 3 and 5. Moreover, we have also tested how well the CMB temperature power spectrum is recovered after denoising. We are able to reconstruct the $C_{\ell}$'s up to $\ell \sim 1500$ with errors always below $20\%$ in cases with $S/N \ge 1$.
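The core step, removing wavelet coefficients dominated by instrumental noise, can be illustrated with a single-level Haar transform and hard thresholding on a 1-D toy signal (the actual analysis is 2-D and multi-level; this is only a sketch with invented noise parameters):

```python
import math
import random

def haar_forward(signal):
    # single-level orthonormal Haar transform: pairwise sums (approximation)
    # and differences (detail), each scaled by 1/sqrt(2)
    s = math.sqrt(2.0)
    approx = [(signal[2*i] + signal[2*i+1]) / s for i in range(len(signal)//2)]
    detail = [(signal[2*i] - signal[2*i+1]) / s for i in range(len(signal)//2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def denoise(signal, threshold):
    a, d = haar_forward(signal)
    d = [0.0 if abs(c) < threshold else c for c in d]  # hard thresholding
    return haar_inverse(a, d)

random.seed(7)
n = 256
clean = [math.sin(2 * math.pi * i / n) for i in range(n)]
sigma = 0.5
noisy = [c + random.gauss(0.0, sigma) for c in clean]
thr = sigma * math.sqrt(2 * math.log(n))   # universal threshold
denoised = denoise(noisy, thr)

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
```

For a smooth signal the detail coefficients are almost entirely noise, so zeroing them lowers the reconstruction error, the 1-D analogue of the error improvement factor quoted above.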

  1. Applying importance-performance analysis to evaluate banking service quality

    Directory of Open Access Journals (Sweden)

    André Luís Policani Freitas

    2012-11-01

    Full Text Available In an increasingly competitive market, identifying the most important aspects of a service and measuring service quality as perceived by customers are important actions for organizations seeking competitive advantage. This scenario is typical of the Brazilian banking sector. In this context, this article presents an exploratory case study in which Importance-Performance Analysis (IPA) was used to identify the strong and weak points of the services provided by a bank. To check the reliability of the questionnaire, Cronbach's alpha and correlation analyses were used. The results are presented, and some actions are defined in order to improve the quality of services.
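The core of Importance-Performance Analysis is classifying attributes into quadrants around the mean importance and mean performance ratings; a minimal sketch with invented attribute names and ratings, not the bank's survey data:

```python
# IPA quadrant classification: attributes above mean importance but below
# mean performance are the "weak points" needing attention.
def ipa_quadrants(items):
    mean_imp = sum(i for i, _ in items.values()) / len(items)
    mean_perf = sum(p for _, p in items.values()) / len(items)
    labels = {}
    for name, (imp, perf) in items.items():
        if imp >= mean_imp and perf < mean_perf:
            labels[name] = "Concentrate here"        # weak point
        elif imp >= mean_imp:
            labels[name] = "Keep up the good work"   # strong point
        elif perf >= mean_perf:
            labels[name] = "Possible overkill"
        else:
            labels[name] = "Low priority"
    return labels

ratings = {                      # (importance, performance) on a 1-5 scale
    "courtesy":     (4.5, 4.2),
    "waiting time": (4.8, 3.1),
    "decor":        (3.0, 4.0),
    "brochures":    (2.8, 2.5),
}
labels = ipa_quadrants(ratings)
```

Grand-mean crosshairs are the simplest convention; scale midpoints or data-driven cut-offs are common variants.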

  2. Evaluation on mass sensitivity of SAW sensors for different piezoelectric materials using finite-element analysis.

    Science.gov (United States)

    Abdollahi, Amir; Jiang, Zhongwei; Arabshahi, Sayyed Alireza

    2007-12-01

    The mass sensitivity of piezoelectric surface acoustic wave (SAW) sensors is an important factor in selecting the best gravimetric sensor for a given application. To determine this value without facing practical problems and long theoretical calculation times, we show that the mass sensitivity of SAW sensors can be calculated by a simple three-dimensional (3-D) finite-element analysis (FEA) using a commercial finite-element platform. The FEA data are used to calculate the wave propagation speed, surface particle displacements, and wave energy distribution on different cuts of various piezoelectric materials, and the results provide a simple method for evaluating their mass sensitivities. To obtain more accurate results from the FEA data, surface and bulk wave reflection problems are considered in the analyses. In this research, different cuts of lithium niobate, quartz, lithium tantalate, and langasite are investigated for their acoustic wave properties. Our results for these materials agree well with those of other researchers. The mass sensitivity of a novel cut of langasite was also calculated through these analyses and found to be higher than that of the conventional Rayleigh-mode quartz sensor.

  3. Sensitivity analysis of radiative transfer for atmospheric remote sensing in thermal IR: atmospheric weighting functions and surface partials

    Science.gov (United States)

    Ustinov, E. A.

    2003-01-01

    In this presentation, we apply the adjoint sensitivity analysis of radiative transfer in thermal IR to the general case of the analytic evaluation of the weighting functions of atmospheric parameters together with the partial derivatives for the surface parameters. Applications to remote sensing of atmospheres of Mars and Venus are discussed.

  4. A short-term in vitro test for tumour sensitivity to adriamycin based on flow cytometric DNA analysis

    DEFF Research Database (Denmark)

    Engelholm, S A; Spang-Thomsen, M; Vindeløv, L L

    1983-01-01

    A new method to test the sensitivity of tumour cells to chemotherapy is presented. Tumour cells were incubated in vitro on agar, and drug-induced cell cycle perturbation was monitored by flow cytometric DNA analysis. In the present study the method was applied to monitor the effect of adriamycin...

  5. Perturbation Method of Analysis Applied to Substitution Measurements of Buckling

    Energy Technology Data Exchange (ETDEWEB)

    Persson, Rolf

    1966-11-15

    Calculations with two-group perturbation theory on substitution experiments with homogenized regions show that a condensation of the results into a one-group formula is possible, provided that a transition region is introduced in a proper way. In heterogeneous cores the transition region comes in as a consequence of a new cell concept. By making use of progressive substitutions the properties of the transition region can be regarded as fitting parameters in the evaluation procedure. The thickness of the region is approximately equal to the sum of 1/(1/τ + 1/L²)^(1/2) for the test and reference regions. Consequently a region where L² >> τ, e.g. D₂O, contributes √τ to the thickness. In cores where τ >> L², e.g. H₂O assemblies, the thickness of the transition region is determined by L. Experiments on rod lattices in D₂O and on test regions of D₂O alone (where B² = -1/L²) are analysed. The lattice measurements, where the pitches differed by a factor of √2, gave excellent results, whereas the determination of the diffusion length in D₂O by this method was not quite successful. Even regions containing only one test element can be used in a meaningful way in the analysis.
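Each region's contribution to the transition-region thickness, 1/(1/τ + 1/L²)^(1/2), and its limiting behavior (a √τ contribution when L² ≫ τ) are easy to check numerically; the values below are illustrative, not measured region constants.

```python
import math

# Each region contributes 1 / sqrt(1/tau + 1/L^2) to the transition-region
# thickness; tau (Fermi age) and L^2 (diffusion area) in cm^2, invented values.
def transition_contribution(tau, L2):
    return 1.0 / math.sqrt(1.0 / tau + 1.0 / L2)

tau, L2 = 120.0, 1.0e4          # a D2O-like region with L^2 >> tau
contrib = transition_contribution(tau, L2)
# in this limit the contribution approaches sqrt(tau), as stated above
```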

  6. Applying revised gap analysis model in measuring hotel service quality.

    Science.gov (United States)

    Lee, Yu-Cheng; Wang, Yu-Che; Chien, Chih-Hung; Wu, Chia-Huei; Lu, Shu-Chiung; Tsai, Sang-Bing; Dong, Weiwei

    2016-01-01

    The number of tourists coming to Taiwan has grown by 10-20% annually since 2010, driven largely by an increasing number of foreign visitors, particularly after deregulation admitted tourist groups, and later individual tourists, from mainland China. The purpose of this study is to propose a revised gap model to evaluate and improve service quality in the Taiwanese hotel industry. With such a model, service quality can be clearly measured through gap analysis, which is more effective in offering direction for developing and improving service quality. The HOLSERV instrument was used to identify and analyze service gaps from the perceptions of internal and external customers. The sample for this study included three main categories of respondents: tourists, employees, and managers. The results show that five gaps influenced tourists' evaluations of service quality. In particular, the study revealed that Gap 1 (management perceptions vs. customer expectations) and Gap 9 (service provider perceptions of management perceptions vs. service delivery) were more critical than the others in affecting perceived service quality, making service delivery the main area of improvement. This study contributes an evaluation of the service quality of the Taiwanese hotel industry from the perspectives of customers, service providers, and managers, which is considerably valuable for hotel managers. The aim was to explore all of these perspectives together in order to better understand the possible gaps in the hotel industry in Taiwan.

  7. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    Science.gov (United States)

    Bittker, D. A.

    1994-01-01

    LSENS has been developed for solving complex, homogeneous, gas-phase, chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis
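The governing equations LSENS integrates are coupled first-order ODEs for species concentrations. As a minimal sketch (a single first-order reaction A → B with an assumed rate coefficient, not a real fuel mechanism), a fixed-step classical Runge-Kutta integrator reproduces the analytic exponential decay:

```python
import math

def rk4(f, y0, t0, t1, steps):
    # classical fourth-order Runge-Kutta with a fixed step
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

k = 2.0                       # assumed rate coefficient, 1/s
rate = lambda t, a: -k * a    # d[A]/dt = -k[A]
a_final = rk4(rate, 1.0, 0.0, 1.0, 100)
exact = math.exp(-k * 1.0)    # analytic solution [A](t) = exp(-k t)
```

Real mechanisms couple dozens of such equations and are stiff, which is why production codes like LSENS use implicit stiff solvers rather than explicit RK4; the sketch only shows the structure of the problem.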

  8. GPU-based Integration with Application in Sensitivity Analysis

    Science.gov (United States)

    Atanassov, Emanouil; Ivanovska, Sofiya; Karaivanova, Aneta; Slavov, Dimitar

    2010-05-01

    The presented work is an important part of the grid application MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), whose aim is to develop an efficient grid implementation of a Monte Carlo based approach for sensitivity studies in the domains of environmental modelling and environmental security. The goal is to study the damaging effects that can be caused by high pollution levels (especially effects on human health), when the main modeling tool is the Danish Eulerian Model (DEM). Generally speaking, sensitivity analysis (SA) is the study of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. One important class of methods for sensitivity analysis is the Monte Carlo based methods, first proposed by Sobol and then developed by Saltelli and his group. In MCSAES the general Saltelli procedure has been adapted for SA of the Danish Eulerian Model. In our case we consider as factors the constants determining the speeds of the chemical reactions in the DEM, and as output a certain aggregated measure of the pollution. Sensitivity simulations lead to huge computational tasks (systems with up to 4 × 10⁹ equations at every time step, and the number of time steps can be more than a million), which motivates the grid implementation. The MCSAES grid implementation scheme includes two main tasks: (i) grid implementation of the DEM, and (ii) grid implementation of the Monte Carlo integration. In this work we present our new developments in the integration part of the application. We have developed an algorithm for GPU-based generation of scrambled quasirandom sequences which can be combined with the CPU-based computations related to the SA. Owen first proposed scrambling of the Sobol sequence through permutation in a manner that improves the convergence rates. Scrambling is necessary not only for error analysis but also for parallel implementations.
Good scrambling is
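Quasirandom (low-discrepancy) sequences are what make the Monte Carlo integration step converge quickly. A 1-D radical-inverse (van der Corput) sequence, the base-2 building block of Sobol-type generators, illustrates the idea; this is a CPU-only sketch, since GPU generation and Owen scrambling need far more than a few lines.

```python
import random

# Radical-inverse (van der Corput) sequence: reflect the base-b digits of n
# about the radix point to fill [0, 1) with well-spread points.
def van_der_corput(n, base=2):
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def estimate(points, f):
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x              # integral of x^2 over [0, 1] is 1/3
N = 4096
qmc = estimate([van_der_corput(i + 1) for i in range(N)], f)
random.seed(1)
mc = estimate([random.random() for _ in range(N)], f)
```

The quasirandom estimate converges roughly like O(log N / N) versus O(N^-1/2) for plain pseudorandom sampling, which is the convergence improvement scrambled Sobol sequences preserve while restoring error estimation.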

  9. Probabilistic Sensitivity Analysis for Launch Vehicles with Varying Payloads and Adapters for Structural Dynamics and Loads

    Science.gov (United States)

    McGhee, David S.; Peck, Jeff A.; McDonald, Emmett J.

    2012-01-01

    This paper examines Probabilistic Sensitivity Analysis (PSA) methods and tools in an effort to understand their utility in vehicle loads and dynamic analysis. Specifically, this study addresses how these methods may be used to establish limits on payload mass and cg location and requirements on adaptor stiffnesses while maintaining vehicle loads and frequencies within established bounds. To this end, PSA methods and tools are applied to a realistic, but manageable, integrated launch vehicle analysis where payload and payload adaptor parameters are modeled as random variables. This analysis is used to study both Regional Response PSA (RRPSA) and Global Response PSA (GRPSA) methods, with a primary focus on sampling based techniques. For contrast, some MPP based approaches are also examined.

  10. Global in Time Analysis and Sensitivity Analysis for the Reduced NS-α Model of Incompressible Flow

    Science.gov (United States)

    Rebholz, Leo; Zerfas, Camille; Zhao, Kun

    2017-09-01

    We provide a detailed global-in-time analysis, together with sensitivity analysis and testing, for the recently proposed (by the authors) reduced NS-α model. We extend the known analysis of the model to the global-in-time case by proving that it is globally well-posed, and also prove some new results for its long-time treatment of energy. We also derive the PDE system that describes the sensitivity of the model with respect to the filtering radius parameter, and prove that it is well-posed. An efficient numerical scheme for the sensitivity system is then proposed and analyzed, and proven to be stable and optimally accurate. Finally, two physically meaningful test problems are simulated: channel flow past a cylinder (including lift and drag calculations) and turbulent channel flow with Re_τ = 590. The numerical results reveal that sensitivity is created near boundaries, and thus this is where the choice of the filtering radius is most critical.

  11. Robust and sensitive analysis of mouse knockout phenotypes.

    Directory of Open Access Journals (Sweden)

    Natasha A Karp

    Full Text Available A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high-throughput phenotyping programs, where operational issues additionally lead to controls not being measured on the same day as knockouts. We highlight how applying traditional methods, such as a Student's t-test or a two-way ANOVA, in these situations gives flawed results and should be avoided. We explore the use of mixed models using worked examples from the Sanger Mouse Genome Project, focusing on Dual-Energy X-Ray Absorptiometry data for the analysis of mouse knockout data, and compare them to a reference range approach. We show that mixed model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is a common phenotype; accounting for it enhances the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models in in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically, as a method suitable for small batches, which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained.
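Why a naive t-test-style comparison fails with batched data can be shown with a tiny simulation (all numbers invented): when knockouts and controls are unevenly spread across measurement days, the raw group difference absorbs the day effect, while a per-batch, mixed-model-style estimate recovers the genotype effect.

```python
import random

random.seed(42)

day_effect = {"day1": 0.0, "day2": 3.0}   # large assumed batch (day) shifts
genotype_effect = 1.0                      # true knockout effect

def measure(day, is_ko):
    return day_effect[day] + (genotype_effect if is_ko else 0.0) \
        + random.gauss(0.0, 0.3)

# unbalanced design: day1 is mostly wild-type, day2 mostly knockout
design = ([("day1", False)] * 8 + [("day1", True)] * 2
          + [("day2", False)] * 2 + [("day2", True)] * 8)
obs = [(day, ko, measure(day, ko)) for day, ko in design]

def mean(xs):
    return sum(xs) / len(xs)

# naive pooled difference confounds the day effect with genotype
naive = (mean([y for _, ko, y in obs if ko])
         - mean([y for _, ko, y in obs if not ko]))

# batch-aware estimate: average the within-day KO-minus-WT differences
per_day = []
for day in day_effect:
    ko = [y for d, k, y in obs if d == day and k]
    wt = [y for d, k, y in obs if d == day and not k]
    per_day.append(mean(ko) - mean(wt))
adjusted = mean(per_day)
```

A real mixed model additionally treats batch as a random effect and weights batches properly, but the confounding it removes is exactly the one this toy shows.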

  12. Improving the flash flood frequency analysis applying dendrogeomorphological evidences

    Science.gov (United States)

    Ruiz-Villanueva, V.; Ballesteros, J. A.; Bodoque, J. M.; Stoffel, M.; Bollschweiler, M.; Díez-Herrero, A.

    2009-09-01

    Flash floods are one of the natural hazards that cause major damage worldwide; especially in Mediterranean areas they provoke high economic losses every year. In mountain areas with high stream gradients, flood events are characterized by extremely high flow and debris transport rates, and flash flood analysis presents specific scientific challenges. On the one hand, there is a lack of information on precipitation and discharge due to the scarcity of spatially well-distributed gauge stations with long records. On the other hand, gauge stations may not record correctly during extreme events, when they are damaged or the discharge exceeds the recordable level. In such cases, no systematic data are available to improve understanding of the spatial and temporal occurrence of the process. Since historical documentation is normally scarce or even completely missing in mountain areas, tree-ring analysis can provide an alternative approach. Flash floods may influence trees in different ways: (1) tilting of the stem through the unilateral pressure of the flowing mass or individual boulders; (2) root exposure through erosion of the banks; (3) injuries and scars caused by boulders and wood transported in the flow; (4) decapitation of the stem and resulting candelabra growth through the severe impact of boulders; (5) stem burial through deposition of material. Trees react to these disturbances with specific growth changes, such as abrupt changes in the yearly increment, and anatomical changes, such as reaction wood or callus tissue. In this study, we sampled 90 cross sections and 265 increment cores of trees heavily affected by past flash floods in order to date past events and to reconstruct recurrence intervals in two torrent channels located in the Spanish Central System. The first study site is located along the Pelayo River, a torrent in natural conditions. Based on the external disturbances of trees and their geomorphological position, 114 Pinus pinaster (Ait

  13. Exploratory Factor Analysis as a Construct Validation Tool: (Mis)applications in Applied Linguistics Research

    Science.gov (United States)

    Karami, Hossein

    2015-01-01

    Factor analysis has been frequently exploited in applied research to provide evidence about the underlying factors in various measurement instruments. A close inspection of a large number of studies published in leading applied linguistic journals shows that there is a misconception among applied linguists as to the relative merits of exploratory…

  14. Applications of Conditional Nonlinear Optimal Perturbation in Predictability Study and Sensitivity Analysis of Weather and Climate

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Considering the limitations of the linear theory of the singular vector (SV), the authors and their collaborators proposed the conditional nonlinear optimal perturbation (CNOP) and applied it to predictability studies and sensitivity analyses of the weather and climate system. To celebrate the 20th anniversary of the Chinese National Committee for the World Climate Research Programme (WCRP), this paper reviews the main results of these studies. First, CNOP represents the initial perturbation that has the largest nonlinear evolution at prediction time, which differs from the linear singular vector (LSV) for large-magnitude initial perturbations or/and long optimization time intervals. Second, CNOP, rather than LSV, represents the initial anomaly that most probably evolves into an ENSO event. It is also the CNOP that induces the most prominent seasonal variation of error growth for ENSO predictability; furthermore, CNOP was applied to investigate the decadal variability of ENSO asymmetry, demonstrating that changing nonlinearity causes the change of ENSO asymmetry. Third, in studies of the sensitivity and stability of the ocean's thermohaline circulation (THC), the nonlinear asymmetric response of the THC to finite-amplitude initial perturbations was revealed by CNOP, and through this approach the passive mechanism of the decadal variation of the THC was demonstrated. The authors also studied the instability and sensitivity of a grassland ecosystem using CNOP and showed the mechanism of the transitions between grassland and desert states. Finally, a detailed discussion of the results obtained by CNOP suggests its applicability in predictability studies and sensitivity analysis.

  15. Application of finite-element sensitivities to power cable thermal field analysis

    Energy Technology Data Exchange (ETDEWEB)

    Al-Saud, M.S.; El-Kady, M.A.; Findlay, R.D. [McMaster Univ., Hamilton, ON (Canada). Dept. of Electrical and Computer Engineering

    2006-07-01

    A new approach for calculating the thermal field and ampacity of electrical cables was presented. The proposed perturbed finite-element analysis technique provides sensitivity information of the cable ampacity with respect to fluctuations in the cable thermal circuit parameters. As such, it can assess the effects on the permissible cable loading caused by these fluctuations without repeating the entire thermal analysis when parameters of the thermal circuit of power cables change according to geographical and seasonal variations. The technique can be applied to the design phase and the operational aspects of power cables buried in complex media of soil, heat sources and sinks, or other variable boundary conditions. The sensitivity information is useful in determining the important and non-important parameter variations in terms of their relative effect on the cable temperature and ampacity. This paper described the analytical and computational aspects of the sensitivity methodology and demonstrated the usefulness of the developed methodology in 6 directly buried cable systems under different loading, soil and atmospheric conditions. The sensitivity results showed that variations of the thermal conductivity of the soil affect the cable temperatures more than variations of other parameters. 8 refs., 5 tabs., 5 figs.

  16. Initial Considerations When Applying an Instructional Sensitivity Framework: Partitioning the Variation between and within Classrooms for Two Mathematics Assessments

    Science.gov (United States)

    Ing, Marsha

    2016-01-01

    Drawing inferences about the extent to which student performance reflects instructional opportunities relies on the premise that the measure of student performance is reflective of instructional opportunities. An instructional sensitivity framework suggests that some assessments are more sensitive to detecting differences in instructional…

  17. Motion and vibration control of a slewing flexible structure by SMA actuators and parameter sensitivity analysis

    Science.gov (United States)

    Janzen, F. C.; Tusset, A. M.; Piccirillo, V.; Balthazar, J. M.; Brasil, R. M. L. R. F.

    2015-11-01

    This work presents two approaches to the problem of vibration and positioning control of a flexible structural beam driven by a DC motor. The position is controlled by the current applied to the DC motor armature. A Shape Memory Alloy (SMA) actuator controls vibrations of the flexible structural beam. The State Dependent Riccati Equation (SDRE) technique is used to provide a control action which uses sub-optimal control and system local stability search. The robustness of these two controllers is tested by sensitivity analysis to parametric uncertainties. Numerical simulation results are presented to demonstrate the effectiveness of the proposed control strategy.

  18. Sensitivity Analysis and Uncertainty Quantification for the LAMMPS Molecular Dynamics Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bhat, Kabekode Ghanasham [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-18

    We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.

  19. Dynamic global sensitivity analysis in bioreactor networks for bioethanol production.

    Science.gov (United States)

    Ochoa, M P; Estrada, V; Di Maggio, J; Hoch, P M

    2016-01-01

    Dynamic global sensitivity analysis (GSA) was performed for three dynamic bioreactor models of increasing complexity: a fermenter for bioethanol production; a bioreactor network in which two types of bioreactors were considered, aerobic for biomass production and anaerobic for bioethanol production; and a co-fermentation bioreactor. The aim was to identify the parameters that contribute most to uncertainty in model outputs. Sobol's method was used to calculate time profiles for the sensitivity indices. Numerical results have shown the time-variant influence of uncertain parameters on model variables, and the most influential model parameters have been determined. For the model of the bioethanol fermenter, μmax (maximum growth rate) and Ks (half-saturation constant) are the parameters with the largest contribution to the uncertainty of the model variables; in the bioreactor network, the most influential parameter is μmax,1 (maximum growth rate in bioreactor 1); whereas λ (glucose-to-total-sugars concentration ratio in the feed) is the most influential parameter over all model variables in the co-fermentation bioreactor.
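    The first-order Sobol indices mentioned above can be estimated with a Saltelli-type pick-freeze scheme. A minimal sketch in plain NumPy follows; the two-parameter linear model is an invented stand-in (the record does not give the bioreactor equations), so the names mu_max and Ks are illustrative only.

    ```python
    import numpy as np

    # Pick-freeze (Saltelli) estimator for first-order Sobol indices.
    # Toy linear model: x[:, 0] plays the role of mu_max, x[:, 1] of Ks.
    rng = np.random.default_rng(0)

    def model(x):
        return 2.0 * x[:, 0] + 1.0 * x[:, 1]

    n, d = 100_000, 2
    A = rng.uniform(0.0, 1.0, (n, d))   # two independent sample matrices
    B = rng.uniform(0.0, 1.0, (n, d))
    yA, yB = model(A), model(B)
    var_y = yA.var()

    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]             # swap in column i from B, freeze the rest
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y  # Saltelli (2010) estimator

    print(S)   # analytically S = [0.8, 0.2] for this linear toy model
    ```

    For dynamic GSA as in the record, this estimator would be evaluated at each output time point, yielding time profiles S_i(t) rather than single values.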

  20. A Sensitivity Analysis Approach to Identify Key Environmental Performance Factors

    Directory of Open Access Journals (Sweden)

    Xi Yu

    2014-01-01

    Full Text Available Life cycle assessment (LCA) has been widely used in the design phase to reduce a product’s environmental impacts through the whole product life cycle (PLC) during the last two decades. Traditional LCA is restricted to assessing the environmental impacts of a product, and its results cannot reflect the effects of changes within the life cycle. In order to improve the quality of ecodesign, there is a growing need for an approach that can reflect the relationship between the design parameters and a product’s environmental impacts. A sensitivity analysis approach based on LCA and ecodesign is proposed in this paper. The key environmental performance factors which have significant influence on the product’s environmental impacts can be identified by analyzing the relationship between environmental impacts and the design parameters. Users without much environmental knowledge can use this approach to determine which design parameter should be considered first when (re)designing a product. A printed circuit board (PCB) case study is conducted; eight design parameters are chosen to be analyzed by our approach. The result shows that the carbon dioxide emission during PCB manufacture is highly sensitive to the area of the PCB panel.

  1. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.

  2. Sensitivity analysis on parameter changes in underground mine ventilation systems

    Institute of Scientific and Technical Information of China (English)

    LI Gary; KOCSIS Charles; HARDCASTLE Steve

    2011-01-01

    A more efficient mine ventilation system, the ventilation-on-demand (VOD) system, has been proposed and tested in Canadian mines recently. In order to supply the required air volumes to the production areas of a mine, operators need to know the cause and effect of any changes requested from the VOD system. The sensitivity analysis is developed through generating a cause and effect matrix of sensitivity factors on given parameter changes in a ventilation system. This new utility, which was incorporated in the 3D-CANVENT mine ventilation simulator, is able to predict the airflow distributions in a ventilation network when underground conditions and ventilation controls are changed. For a primary ventilation system, the software can determine the optimal operating speed of the main fans to satisfy the airflow requirements in underground workings without necessarily using booster fans and regulators locally. An optimized fan operating speed time-table would assure variable demand-based fresh air delivery to the production areas effectively, while generating significant savings in energy consumption and operating cost.

  3. Sensitivity analysis for high accuracy proximity effect correction

    Science.gov (United States)

    Thrun, Xaver; Browning, Clyde; Choi, Kang-Hoon; Figueiro, Thiago; Hohle, Christoph; Saib, Mohamed; Schiavone, Patrick; Bartha, Johann W.

    2015-10-01

    A sensitivity analysis (SA) algorithm was developed and tested to comprehend the influences of different test pattern sets on the calibration of a point spread function (PSF) model with complementary approaches. Variance-based SA is the method of choice. It allows attributing the variance of the output of a model to the sum of the variances of each input of the model and their correlated factors [1]. The objective of this development is to increase the accuracy of the resolved PSF model in the complementary technique through the optimization of test pattern sets. Inscale® from Aselta Nanographics is used to prepare the various pattern sets and to check the consequences of the development. Fraunhofer IPMS-CNT exposed the prepared data and observed the results to visualize the link of sensitivities between the PSF parameters and the test patterns. First, the SA can assess the influence of test pattern sets on the determination of PSF parameters, such as which PSF parameter is affected by the use of a certain pattern. Secondly, throughout the evaluation, the SA enhances the precision of the PSF through the optimization of test patterns. Finally, the developed algorithm is able to appraise which ranges of proximity effect correction are crucial on which portions of a real application pattern in electron beam exposure.

  4. Sensitivity Analysis in a Complex Marine Ecological Model

    Directory of Open Access Journals (Sweden)

    Marcos D. Mateus

    2015-05-01

    Full Text Available Sensitivity analysis (SA) has long been recognized as part of best practices to assess whether any particular model can be suitable to inform decisions, despite its uncertainties. SA is a commonly used approach for identifying important parameters that dominate model behavior. As such, SA addresses two elementary questions in the modeling exercise, namely, how sensitive the model is to changes in individual parameter values, and which parameters or associated processes have more influence on the results. In this paper we report on a local SA performed on a complex marine biogeochemical model that simulates oxygen, organic matter and nutrient cycles (N, P and Si) in the water column, as well as the dynamics of biological groups such as producers, consumers and decomposers. SA was performed using a “one at a time” parameter perturbation method, and a color-coded matrix was developed for result visualization. The outcome of this study was the identification of key parameters influencing model performance, a particularly helpful insight for the subsequent calibration exercise. Also, the color-coded matrix methodology proved to be effective for a clear identification of the parameters with most impact on selected variables of the model.
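    The “one at a time” (OAT) perturbation behind such a matrix can be sketched in a few lines: perturb each parameter by a small relative amount, record the relative change of each output, and collect the ratios into a normalized sensitivity matrix. The 2-output, 3-parameter model below is invented for illustration (the record does not give the biogeochemical model's equations).

    ```python
    import numpy as np

    # OAT local sensitivity: S[i, j] = (% change in output i) / (% change in parameter j).
    # Toy model with invented parameters a, b, c and two outputs.
    def model(p):
        a, b, c = p
        return np.array([a * b,           # output 1 (e.g. a biomass variable)
                         a + 10.0 * c])   # output 2 (e.g. a nutrient variable)

    p0 = np.array([1.0, 2.0, 0.5])        # nominal parameter vector
    y0 = model(p0)
    delta = 0.01                          # 1% one-at-a-time perturbation

    S = np.zeros((len(y0), len(p0)))      # normalized sensitivity matrix
    for j in range(len(p0)):
        p = p0.copy()
        p[j] *= 1.0 + delta               # perturb parameter j only
        S[:, j] = (model(p) - y0) / y0 / delta

    print(np.round(S, 2))
    ```

    Each row of S can then be color-coded by magnitude, which is essentially the visualization matrix the abstract describes.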

  5. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-10-02

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature attributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.

  6. Sensitivity Analysis of Differential-Algebraic Equations and Partial Differential Equations

    Energy Technology Data Exchange (ETDEWEB)

    Petzold, L; Cao, Y; Li, S; Serban, R

    2005-08-09

    Sensitivity analysis generates essential information for model development, design optimization, parameter estimation, optimal control, model reduction and experimental design. In this paper we describe the forward and adjoint methods for sensitivity analysis, and outline some of our recent work on theory, algorithms and software for sensitivity analysis of differential-algebraic equation (DAE) and time-dependent partial differential equation (PDE) systems.
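    The forward method mentioned above augments the state with the sensitivities s = ∂y/∂p and integrates the augmented system alongside the original one. A self-contained toy sketch (not from the paper): for dy/dt = −k·y, the sensitivity s = ∂y/∂k obeys ds/dt = −k·s − y, and both are advanced with a fixed-step RK4 integrator.

    ```python
    import numpy as np

    # Forward sensitivity analysis of dy/dt = -k*y with respect to k.
    # Augmented state z = [y, s], where s = dy/dk satisfies ds/dt = -k*s - y.
    def rhs(t, z, k):
        y, s = z
        return np.array([-k * y, -k * s - y])

    def rk4_step(z, t, dt, k):
        k1 = rhs(t, z, k)
        k2 = rhs(t + dt / 2, z + dt / 2 * k1, k)
        k3 = rhs(t + dt / 2, z + dt / 2 * k2, k)
        k4 = rhs(t + dt, z + dt * k3, k)
        return z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    k, y0, T, nsteps = 0.5, 1.0, 2.0, 200
    z = np.array([y0, 0.0])               # the sensitivity starts at zero
    dt = T / nsteps
    for i in range(nsteps):
        z = rk4_step(z, i * dt, dt, k)

    # Analytic check: y(T) = y0*exp(-k*T), dy/dk = -T*y0*exp(-k*T)
    print(z, -T * y0 * np.exp(-k * T))
    ```

    The adjoint method instead integrates a single backward problem and is preferred when there are many parameters but few output functionals.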

  7. Beyond Time out and Table Time: Today's Applied Behavior Analysis for Students with Autism

    Science.gov (United States)

    Boutot, E. Amanda; Hume, Kara

    2012-01-01

    Recent mandates related to the implementation of evidence-based practices for individuals with autism spectrum disorder (ASD) require that autism professionals both understand and are able to implement practices based on the science of applied behavior analysis (ABA). The use of the term "applied behavior analysis" and its related concepts…

  8. Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations

    DEFF Research Database (Denmark)

    Kamran, Faisal; Andersen, Peter E.

    2015-01-01

    Oblique incidence reflectometry has developed into an effective, noncontact, and noninvasive measurement technology for the quantification of both the reduced scattering and absorption coefficients of a sample. The optical properties are deduced by analyzing only the shape of the reflectance profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical properties in which system demands vary to be able to detect subtle changes in the structure of the medium, translated as measured optical properties. Effects of variation in anisotropy are discussed and results presented. Finally, experimental data of milk products with different fat content are considered...

  9. A Multivariate Analysis of Extratropical Cyclone Environmental Sensitivity

    Science.gov (United States)

    Tierney, G.; Posselt, D. J.; Booth, J. F.

    2015-12-01

    The implications of a changing climate system include more than a simple temperature increase. A changing climate also modifies atmospheric conditions responsible for shaping the genesis and evolution of atmospheric circulations. In the mid-latitudes, the effects of climate change on extratropical cyclones (ETCs) can be expressed through changes in bulk temperature, horizontal and vertical temperature gradients (leading to changes in mean state winds) as well as atmospheric moisture content. Understanding how these changes impact ETC evolution and dynamics will help to inform climate mitigation and adaptation strategies, and allow for better informed weather emergency planning. However, our understanding is complicated by the complex interplay between a variety of environmental influences, and their potentially opposing effects on extratropical cyclone strength. Attempting to untangle competing influences from a theoretical or observational standpoint is complicated by nonlinear responses to environmental perturbations and a lack of data. As such, numerical models can serve as a useful tool for examining this complex issue. We present results from an analysis framework that combines the computational power of idealized modeling with the statistical robustness of multivariate sensitivity analysis. We first establish control variables, such as baroclinicity, bulk temperature, and moisture content, and specify a range of values that simulate possible changes in a future climate. The Weather Research and Forecasting (WRF) model serves as the link between changes in climate state and ETC relevant outcomes. A diverse set of output metrics (e.g., sea level pressure, average precipitation rates, eddy kinetic energy, and latent heat release) facilitates examination of storm dynamics, thermodynamic properties, and hydrologic cycles. 
Exploration of the multivariate sensitivity of ETCs to changes in control parameters space is performed via an ensemble of WRF runs coupled with

  10. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprised of a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice of one of these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
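    The residual-resampling idea described above can be sketched compactly: represent each model step's uncertainty by an empirical pool of residuals, then bootstrap draws from those pools through the model chain. The two "models" and residual pools below are invented toys, not the paper's PV models.

    ```python
    import numpy as np

    # Propagate empirical residual distributions through a two-step model chain.
    rng = np.random.default_rng(1)

    poa_residuals = rng.normal(0.0, 20.0, 500)    # W/m^2, toy empirical pool for step 1
    power_residuals = rng.normal(0.0, 2.0, 500)   # W, toy empirical pool for step 2

    def poa_model(ghi):      # step 1: horizontal -> plane-of-array irradiance (toy)
        return 1.15 * ghi

    def power_model(poa):    # step 2: irradiance -> DC power (toy)
        return 0.18 * poa

    ghi = 800.0              # a measured input, W/m^2
    n = 20_000
    poa = poa_model(ghi) + rng.choice(poa_residuals, n)        # resample step-1 error
    power = power_model(poa) + rng.choice(power_residuals, n)  # resample step-2 error

    # Empirical distribution of system output; its spread combines both model errors.
    print(power.mean(), power.std())
    ```

    Sensitivity of the output to each step can then be read off by zeroing one residual pool at a time and comparing the resulting spreads, which mirrors the attribution analysis in the record.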

  11. Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    The WIMS-AECL code has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to the choice of a number of parameters, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied this sensitivity with respect to these parameters and recommend proper values, which are necessary for carrying out the lattice analysis of DUPIC fuel.

  12. Comprehensive, Population-Based Sensitivity Analysis of a Two-Mass Vocal Fold Model.

    Directory of Open Access Journals (Sweden)

    Daniel Robertson

    Full Text Available Previous vocal fold modeling studies have generally focused on generating detailed data regarding a narrow subset of possible model configurations. These studies can be interpreted to be the investigation of a single subject under one or more vocal conditions. In this study, a broad population-based sensitivity analysis is employed to examine the behavior of a virtual population of subjects and to identify trends between virtual individuals as opposed to investigating a single subject or model instance. Four different sensitivity analysis techniques were used in accomplishing this task. Influential relationships between model input parameters and model outputs were identified, and an exploration of the model's parameter space was conducted. Results indicate that the behavior of the selected two-mass model is largely dominated by complex interactions, and that few input-output pairs have a consistent effect on the model. Results from the analysis can be used to increase the efficiency of optimization routines of reduced-order models used to investigate voice abnormalities. Results also demonstrate the types of challenges and difficulties to be expected when applying sensitivity analyses to more complex vocal fold models. Such challenges are discussed and recommendations are made for future studies.

  13. Geometrical parameter analysis of the high sensitivity fiber optic angular displacement sensor

    CERN Document Server

    Sakamoto, João M S; Kitano, Cláudio; Tittmann, Bernhard R

    2015-01-01

    In this work, we present an analysis of the influence of the geometrical parameters on the sensitivity and linear range of the fiber optic angular displacement sensor, through computational simulations and experiments. The geometrical parameters analyzed were the lens focal length, the gap between fibers, the fibers cladding radii, the emitting fiber critical angle (or, equivalently, the emitting fiber numerical aperture), and the standoff distance (distance between the lens and the reflective surface). Besides, we analyzed the sensor sensitivity regarding any spurious linear displacement. The simulation and experimental results showed that the parameters which play the most important roles are the emitting fiber core radius, the lens focal length, and the light coupling efficiency, while the remaining parameters have little influence on sensor characteristics. This paper was published in Applied Optics and is made available as an electronic reprint with the permission of OSA. The paper can be found at the fo...

  14. Sensitivity analysis of size effect on the performance of hydrostatic bearing

    Directory of Open Access Journals (Sweden)

    Dongju Chen

    2016-01-01

    Full Text Available To study the size effect at the solid-liquid interface of the two-dimensional oil-film gap flow in a hydrostatic bearing, a fluid dynamics method is applied to investigate the influence of the size effect on bearing capacity, dynamic stiffness and other performance measures. To take the size effect into consideration, the Reynolds equation is modified by adopting a velocity-slip boundary condition. Sensitivity factors are used to make a quantitative and qualitative analysis. Numerical simulation results show that the size effect affects bearing performance to a certain degree, and curves of its effect on bearing performance are given. The four maximum oil film pressures decrease as the slip length increases. The maximum sensitivity of the bearing capacity is 81.94%.

  15. Time-dependent global sensitivity analysis with active subspaces for a lithium ion battery model

    CERN Document Server

    Constantine, Paul G

    2016-01-01

    Renewable energy researchers use computer simulation to aid the design of lithium ion storage devices. The underlying models contain several physical input parameters that affect model predictions. Effective design and analysis must understand the sensitivity of model predictions to changes in model parameters, but global sensitivity analyses become increasingly challenging as the number of input parameters increases. Active subspaces are part of an emerging set of tools to reveal and exploit low-dimensional structures in the map from high-dimensional inputs to model outputs. We extend a linear model-based heuristic for active subspace discovery to time-dependent processes and apply the resulting technique to a lithium ion battery model. The results reveal low-dimensional structure that a designer may exploit to efficiently study the relationship between parameters and predictions.
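    The core computation of active subspace discovery can be sketched in plain NumPy: estimate the gradient covariance matrix C = E[∇f ∇fᵀ] by Monte Carlo and eigendecompose it; dominant eigenvectors span the active subspace. The ridge function f(x) = (w·x)² below is an invented stand-in for the battery model, chosen because its active subspace is known to be span(w).

    ```python
    import numpy as np

    # Active subspace discovery via the gradient covariance matrix.
    rng = np.random.default_rng(2)

    w = np.array([1.0, 2.0, 0.5])
    w /= np.linalg.norm(w)                # known active direction of the toy model

    def grad_f(x):
        # Gradient of the ridge function f(x) = (w.x)^2: grad = 2*(w.x)*w
        return 2.0 * (x @ w)[:, None] * w

    X = rng.uniform(-1.0, 1.0, (5000, 3))   # Monte Carlo samples over the input box
    G = grad_f(X)
    C = G.T @ G / len(X)                    # estimate of E[grad f grad f^T]

    eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
    active_dir = eigvecs[:, -1]             # dominant eigenvector

    print(abs(active_dir @ w))              # ~1.0: the active direction recovers w
    ```

    For a real model the gradients would come from adjoint solves or finite differences, and the eigenvalue gap indicates how many active directions to retain.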

  16. Sensitivity Analysis of the Bone Fracture Risk Model

    Science.gov (United States)

    Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane

    2017-01-01

    Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach where distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example, geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying those parameter uncertainties that most affect the model forecasts, in order to determine what areas of the model needed enhancements for reducing uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations including

  17. Stability and sensitivity analysis of experimental data for passive control of a turbulent wake

    Science.gov (United States)

    Siconolfi, Lorenzo; Camarri, Simone; Trip, Renzo; Fransson, Jens H. M.

    2016-11-01

    When linear stability analysis is applied to the mean flow field past a bluff body, a quasi-marginally stable mode is identified, with a frequency very close to the actual vortex shedding frequency. A formally consistent approach to justify this kind of analysis is based on a triple decomposition of the flow variables. With this formalism, adjoint-based sensitivity analysis can be extended to investigate passive control of high-Reynolds-number wakes. The objective of the present work is to predict the effect of a small control cylinder on the vortex shedding frequency in a turbulent wake with an analysis that relies solely on the PIV measurements available for the considered flow. The key ingredient of the numerical analysis is an ad-hoc tuned model for the mean flow field, built using an original procedure which includes all the experimental information available on the flow. This analysis is here applied to the wake flow past a thick porous plate at Reynolds numbers in the range between Re = 6.7 × 10^3 and Re = 5.3 × 10^4. It is shown that the derived control map agrees reasonably well with the equivalent map obtained experimentally.

  18. Sensitivity analysis, approximate analysis, and design optimization for internal and external viscous flows

    Science.gov (United States)

    Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.

    1991-01-01

    A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.

  19. Emergency Load Shedding Strategy Based on Sensitivity Analysis of Relay Operation Margin against Cascading Events

    DEFF Research Database (Denmark)

    Liu, Zhou; Chen, Zhe; Sun, Haishun

    2012-01-01

    In order to prevent long-term voltage instability and induced cascading events, a load shedding strategy based on the sensitivity of the relay operation margin to load powers is discussed and proposed in this paper. The operation margin of the critical impedance backup relay is defined to identify the runtime emergent states of the related system components. Based on a sensitivity analysis between the relay operation margin and the power system state variables, an optimal load shedding strategy is applied to adjust the emergent states in time, before unwanted relay operation. Load dynamics are also taken into account to compensate the load shedding amount calculation, and multi-agent technology is applied for the whole strategy implementation. A test system is built in a real-time digital simulator (RTDS) and has demonstrated the effectiveness of the proposed strategy.

  20. Evaluation of fire weather forecasts using PM2.5 sensitivity analysis

    Science.gov (United States)

    Balachandran, Sivaraman; Baumann, Karsten; Pachon, Jorge E.; Mulholland, James A.; Russell, Armistead G.

    2017-01-01

    Fire weather forecasts are used by land and wildlife managers to determine when meteorological and fuel conditions are suitable to conduct prescribed burning. In this work, we investigate the sensitivity of ambient PM2.5 to various fire and meteorological variables in a spatial setting that is typical for the southeastern US, where prescribed fires are the single largest source of fine particulate matter. We use the method of principal components regression to estimate sensitivity of PM2.5, measured at a monitoring site in Jacksonville, NC (JVL), to fire data and observed and forecast meteorological variables. Fire data were gathered from prescribed fire activity used for ecological management at Marine Corps Base Camp Lejeune, extending 10-50 km south from the PM2.5 monitor. Principal components analysis (PCA) was run on 10 data sets that included acres of prescribed burning activity (PB) along with meteorological forecast data alone or in combination with observations. For each data set, observed PM2.5 (unitless) was regressed against PCA scores from the first seven principal components (explaining at least 80% of total variance). PM2.5 showed significant sensitivity to PB: 3.6 ± 2.2 μg m-3 per 1000 acres burned at the investigated distance scale of ∼10-50 km. Applying this sensitivity to the available activity data revealed a prescribed burning source contribution to measured PM2.5 of up to 25% on a given day. PM2.5 showed a positive sensitivity to relative humidity and temperature, and was also sensitive to wind direction, indicating the capture of more regional aerosol processing and transport effects. As expected, PM2.5 had a negative sensitivity to dispersive variables but only showed a statistically significant negative sensitivity to ventilation rate, highlighting the importance of this parameter to fire managers. A positive sensitivity to forecast precipitation was found, consistent with the practice of conducting prescribed burning on days when rain
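The principal components regression step described above — regress the response on the leading PCA scores, then map the coefficients back to the original predictors to read off sensitivities — can be sketched in a few lines. The synthetic data and retained component count below are hypothetical stand-ins for the fire-activity/meteorology data sets, not values from the study.

```python
import numpy as np

def pcr_sensitivity(X, y, n_components):
    """Principal components regression: regress y on the leading PCA
    scores of X, then back-transform the coefficients to sensitivities
    with respect to the (standardized) original predictors."""
    # Standardize predictors so PCA acts on the correlation matrix.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    # PCA via SVD of the standardized design matrix.
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    scores = Xs @ Vt.T[:, :n_components]            # retained PC scores
    # Ordinary least squares of the centered response on the scores.
    beta_pc, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    # Rotate back: sensitivities per standardized predictor.
    return Vt.T[:, :n_components] @ beta_pc

# Synthetic illustration: the response depends strongly on column 0,
# weakly on column 2, and not at all on the others.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)
beta = pcr_sensitivity(X, y, n_components=3)
```

Retaining fewer components than predictors (here 3 of 4, analogous to the seven components explaining at least 80% of variance above) regularizes the regression at the cost of a projection bias.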

  1. Uncertainty and sensitivity analysis of the retrieved essential climate variables from remotely sensed observations

    Science.gov (United States)

    Djepa, Vera; Badii, Atta

    2016-04-01

    The sensitivity of weather and climate system to sea ice thickness (SIT), Sea Ice Draft (SID) and Snow Depth (SD) in the Arctic is recognized from various studies. Decrease in SIT will affect atmospheric circulation, temperature, precipitation and wind speed in the Arctic and beyond. Ice thermodynamics and dynamic properties depend strongly on sea Ice Density (ID) and SD. SIT, SID, ID and SD are sensitive to environmental changes in the Polar region and impact the climate system. For accurate forecast of climate change, sea ice mass balance, ocean circulation and sea-atmosphere interactions it is required to have long term records of SIT, SID, SD and ID with errors and uncertainty analyses. The SID, SIT, ID and freeboard (F) have been retrieved from Radar Altimeter (RA) (on board ENVISAT) and IceBridge Laser Altimeter (LA) and validated, using over 10 years of collocated observations of SID and SD in the Arctic, provided from the European Space Agency (ESA CCI sea ice ECV project). Improved algorithms to retrieve SIT from LA and RA have been derived, applying statistical analysis. The snow depth is obtained from AMSR-E/Aqua and NASA IceBridge Snow Depth radar. The sea ice properties of pancake ice have been retrieved from ENVISAT/Synthetic Aperture Radar (ASAR). The uncertainties of the retrieved climate variables have been analysed and the impact of snow depth and sea ice density on retrieved SIT has been estimated. The sensitivity analysis illustrates the impact of uncertainties of input climate variables (ID and SD) on accuracy of the retrieved output variables (SIT and SID). The developed methodology of uncertainty and sensitivity analysis is essential for assessment of the impact of environmental variables on climate change and better understanding of the relationship between input and output variables. The uncertainty analysis quantifies the uncertainties of the model results and the sensitivity analysis evaluates the contribution of each input variable to

  2. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might itself become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independence of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as Morris method or Elementary Effects (Morris 1991, Campolongo et al., 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different
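The elementary-effects screening referenced above can be illustrated with a simplified one-at-a-time variant (the full Morris method uses an economical trajectory design; the toy model, ranges, and step size below are purely illustrative):

```python
import numpy as np

def elementary_effects(f, n_params, n_trajectories=20, delta=0.5, seed=1):
    """Simplified Morris screening: at each random base point, perturb
    one parameter at a time by `delta` and record the scaled response
    change (the 'elementary effect')."""
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        # Base point chosen so the perturbed point stays inside [0, 1].
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)
        fx = f(x)
        for j in range(n_params):
            xp = x.copy()
            xp[j] += delta
            effects[j].append((f(xp) - fx) / delta)
    effects = np.array(effects)
    # mu* (mean absolute effect) ranks importance; sigma flags
    # nonlinearity and interactions.
    return np.abs(effects).mean(axis=1), effects.std(axis=1)

# Toy model: strong linear term in x0, weak nonlinear term in x1,
# x2 entirely inactive.
def model(x):
    return 5.0 * x[0] + 0.3 * x[1] ** 2 + 0.0 * x[2]

mu_star, sigma = elementary_effects(model, n_params=3)
```

A convergence check in the spirit of the abstract would ask whether the ranking implied by `mu_star` is stable as `n_trajectories` grows.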

  3. An Ultra-Sensitive Method for the Analysis of Perfluorinated ...

    Science.gov (United States)

    In epidemiological research, it has become increasingly important to assess subjects' exposure to different classes of chemicals in multiple environmental media. It is a common practice to aliquot limited volumes of samples into smaller quantities for specific trace level chemical analysis. A novel method was developed for the determination of 14 perfluorinated alkyl acids (PFAAs) in small volumes (10 mL) of drinking water using off-line solid phase extraction (SPE) pre-treatment followed by on-line pre-concentration on WAX column before analysis on column-switching high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS). In general, large volumes (100 - 1000 mL) have been used for the analysis of PFAAs in drinking water. The current method requires approximately 10 mL of drinking water concentrated by using an SPE cartridge and eluted with methanol. A large volume injection of the extract was introduced on to a column-switching HPLC-MS/MS using a mix-mode SPE column for the trace level analysis of PFAAs in water. The recoveries for most of the analytes in the fortified laboratory blanks ranged from 73±14% to 128±5%. The lowest concentration minimum reporting levels (LCMRL) for the 14 PFAAs ranged from 0.59 to 3.4 ng/L. The optimized method was applied to a pilot-scale analysis of a subset of drinking water samples from an epidemiological study. These samples were collected directly from the taps in the households of Ohio and Nor

  4. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a high number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity analysis of the surface horizontal displacements to the slip surface properties during the pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters, which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
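The basis-set-expansion step can be sketched as follows, assuming a synthetic ensemble of displacement time series; each retained score would then be emulated by a meta-model and fed to the Sobol' analysis, which is not shown here.

```python
import numpy as np

def reduce_outputs(Y, n_modes=2):
    """Basis set expansion of functional model output: each row of Y is
    one simulated time series; return the dominant temporal modes, the
    per-run scores (the scalar targets a meta-model would emulate), and
    the fraction of variance the retained modes explain."""
    Y_mean = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
    modes = Vt[:n_modes]              # dominant modes of temporal variation
    scores = (Y - Y_mean) @ modes.T   # one scalar per run and mode
    explained = (s[:n_modes] ** 2).sum() / (s ** 2).sum()
    return modes, scores, explained

# Synthetic ensemble: 50 displacement curves over 100 time steps that
# differ mainly in amplitude, plus small observation noise.
t = np.linspace(0.0, 1.0, 100)
rng = np.random.default_rng(2)
amps = rng.uniform(1.0, 3.0, size=50)
Y = amps[:, None] * t[None, :] ** 2 + rng.normal(scale=0.01, size=(50, 100))
modes, scores, explained = reduce_outputs(Y, n_modes=2)
```

In this toy case the first score essentially recovers the amplitude parameter, which is what makes a scalar Sobol' analysis per mode meaningful.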

  5. The antimicrobial sensitivity of Streptococcus mutans and Streptococcus sangius to colloidal solutions of different nanoparticles applied as mouthwashes

    Directory of Open Access Journals (Sweden)

    Farzaneh Ahrari

    2015-01-01

    Full Text Available Background: Metal nanoparticles have been recently applied in dentistry because of their antibacterial properties. This study aimed to evaluate antibacterial effects of colloidal solutions containing zinc oxide (ZnO), copper oxide (CuO), titanium dioxide (TiO2) and silver (Ag) nanoparticles on Streptococcus mutans and Streptococcus sangius and compare the results with those of chlorhexidine and sodium fluoride mouthrinses. Materials and Methods: After adding nanoparticles to a water-based solution, six groups were prepared. Groups I to IV included colloidal solutions containing nanoZnO, nanoCuO, nanoTiO2 and nanoAg, respectively. Groups V and VI consisted of 2.0% sodium fluoride and 0.2% chlorhexidine mouthwashes, respectively, as controls. We used the serial dilution method to find minimum inhibitory concentrations (MICs) and, with subcultures, obtained minimum bactericidal concentrations (MBCs) of the solutions against S. mutans and S. sangius. The data were analyzed by analysis of variance and Duncan test, and P < 0.05 was considered significant. Results: The sodium fluoride mouthrinse did not show any antibacterial effect. The nanoTiO2-containing solution had the lowest MIC against both microorganisms and also displayed the lowest MBC against S. mutans (P < 0.05). The colloidal solutions containing nanoTiO2 and nanoZnO showed the lowest MBC against S. sangius (P < 0.05). On the other hand, chlorhexidine showed the highest MIC and MBC against both streptococci (P < 0.05). Conclusion: The nanoTiO2-containing mouthwash proved to be an effective antimicrobial agent and thus can be considered as an alternative to chlorhexidine or sodium fluoride mouthrinses in the oral cavity, provided it lacks cytotoxic and genotoxic effects on biologic tissues.

  6. Spatial risk assessment for critical network infrastructure using sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    Michael Möderl; Wolfgang Rauch

    2011-01-01

    The presented spatial risk assessment method allows for managing critical network infrastructure in urban areas under abnormal and future conditions caused, e.g., by terrorist attacks, infrastructure deterioration or climate change. For the spatial risk assessment, vulnerability maps for critical network infrastructure are merged with hazard maps for an interfering process. Vulnerability maps are generated using a spatial sensitivity analysis of network transport models to evaluate performance decrease under investigated threat scenarios. Thereby parameters are varied according to the specific impact of a particular threat scenario. Hazard maps are generated with a geographical information system using raster data of the same threat scenario derived from structured interviews and cluster analysis of events in the past. The application of the spatial risk assessment is exemplified by means of a case study for a water supply system, but the principal concept is applicable likewise to other critical network infrastructure. The aim of the approach is to help decision makers in choosing zones for preventive measures.
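The merge of hazard and vulnerability rasters into a risk map can be sketched as a cell-wise product; the 3×3 rasters below are hypothetical and stand in for the GIS layers described above.

```python
import numpy as np

def risk_map(hazard, vulnerability):
    """Merge a hazard raster (likelihood of the threat scenario per cell)
    with a vulnerability raster (performance decrease if it occurs) into
    a spatial risk map; high-risk cells are candidate zones for
    preventive measures."""
    return hazard * vulnerability

# Hypothetical rasters for a small water supply network.
hazard = np.array([[0.1, 0.2, 0.1],
                   [0.3, 0.8, 0.2],
                   [0.1, 0.4, 0.1]])
vulnerability = np.array([[0.0, 0.5, 0.1],
                          [0.2, 0.9, 0.3],
                          [0.1, 0.6, 0.2]])
risk = risk_map(hazard, vulnerability)
hotspot = np.unravel_index(np.argmax(risk), risk.shape)  # highest-risk cell
```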

  7. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    Science.gov (United States)

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  8. Comb structure analysis of the capacitive sensitive element in MEMS-accelerometer

    Science.gov (United States)

    Shalimov, Andrew; Timoshenkov, Sergey; Korobova, Natalia; Golovinskiy, Maxim; Timoshenkov, Alexey; Zuev, Egor; Berezueva, Svetlana; Kosolapov, Andrey

    2015-05-01

    In this paper, the comb structure of the capacitive sensing element of a MEMS accelerometer with longitudinal displacement of the inertial mass under acceleration is analysed, in order to obtain the parameters needed for the subsequent design of an electronic read-out and signal processing circuit. The inertial mass, suspended on the stator, can move along the longitudinal axis of the structure under the influence of acceleration. As a result, the distances between the fixed and movable combs, and hence the capacitances of the corresponding capacitors, change. By measuring the difference of these capacitances, the value of the applied acceleration can be estimated. Furthermore, control combs that apply an electrostatic force to deflect the inertial mass artificially may be used for initial screening of the sensitive elements: in this case the capacitances also change, and measuring them via the combs allows a decision about the presence or absence of defects.
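The differential read-out principle can be sketched with a parallel-plate approximation of one comb pair; the geometry values below are hypothetical, not taken from the paper.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def comb_capacitances(gap, overlap_area, displacement, n_fingers):
    """Parallel-plate approximation of a differential comb pair: motion
    of the inertial mass shrinks one gap and widens the other, so the
    two capacitances change in opposite directions."""
    c_plus = n_fingers * EPS0 * overlap_area / (gap - displacement)
    c_minus = n_fingers * EPS0 * overlap_area / (gap + displacement)
    return c_plus, c_minus

# Hypothetical geometry: 2 um nominal gap, 100 finger pairs,
# 1e-9 m^2 overlap area per finger, 50 nm displacement.
c1, c2 = comb_capacitances(gap=2e-6, overlap_area=1e-9,
                           displacement=50e-9, n_fingers=100)
delta_c = c1 - c2  # differential signal, ~proportional to acceleration
```

For small displacements x, `delta_c` is approximately 2·n·ε0·A·x/g², which is the linear relation the read-out electronics would exploit.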

  9. Introduction to applied statistical signal analysis guide to biomedical and electrical engineering applications

    CERN Document Server

    Shiavi, Richard

    2007-01-01

    Introduction to Applied Statistical Signal Analysis is designed for the experienced reader with a basic background in mathematics, science, and computing. With this prior knowledge, the reader will move quickly through the practical introduction and on to signal analysis techniques commonly used in a broad range of engineering areas such as biomedical engineering, communications, geophysics, and speech. Introduction to Applied Statistical Signal Analysis intertwines theory and implementation with practical examples and exercises. Topics presented in detail include: mathematical

  10. Can feedback analysis be used to uncover the physical origin of climate sensitivity and efficacy differences?

    Science.gov (United States)

    Rieger, Vanessa S.; Dietmüller, Simone; Ponater, Michael

    2016-12-01

    Different strengths and types of radiative forcings cause variations in the climate sensitivities and efficacies. To relate these changes to their physical origin, this study tests whether a feedback analysis is a suitable approach. To this end, we apply the partial radiative perturbation method. Combining the forward and backward calculation turns out to be indispensable to ensure the additivity of feedbacks and to yield a closed forcing-feedback balance at the top of the atmosphere. For a set of CO2-forced simulations, the climate sensitivity changes with increasing forcing. The albedo, cloud and combined water vapour and lapse rate feedback are found to be responsible for the variations in the climate sensitivity. An O3-forced simulation (induced by enhanced NOx and CO surface emissions) causes a smaller efficacy than a CO2-forced simulation with a similar magnitude of forcing. We find that the Planck, albedo and most likely the cloud feedback are responsible for this effect. Reducing the radiative forcing impedes the statistical separability of feedbacks. We additionally discuss formal inconsistencies between the common ways of comparing climate sensitivities and feedbacks. Moreover, methodical recommendations for future work are given.

  11. Evaluation of the Potential Sensitization of Chlorogenic Acid: A Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Mingbao Lin

    2013-01-01

    Full Text Available Chlorogenic acid (CGA) widely exists in many plants, which are used as medicinal substances in traditional Chinese medicine injectables (TCMIs) that have been widely applied in clinical treatments. However, it is still controversial whether CGA is responsible for TCMIs-related hypersensitivity. Several studies have been performed to evaluate its potential sensitization property, but the results were inconclusive. Therefore, the aim of this study was to evaluate its potential sensitization systematically using meta-analysis based on data extracted from the literature, searching the databases of PubMed, EMBASE, ISI Web of Knowledge, CNKI, VIP, and CHINAINFO from January 1979 to October 2012. A total of 108 articles were retrieved by the electronic search strategy, out of which 13 studies met the inclusion criteria. In the ASA test, the odds ratio of behavior changes was 4.33 (1.62, 11.60), showing significant changes after CGA treatment (P=0.004). Serum IgG, serum histamine, PLN cellularity, and IgG1 AFCs were significantly enhanced after CGA treatment (P<0.05). Totally, these results indicated that CGA could induce a positive reaction in potential sensitization, and intravenous administration of it might be a key factor for sensitization triggering, which could at least warrant more careful application of TCMIs containing CGA in clinical practices.
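The odds-ratio summary used in such meta-analyses can be sketched for a single 2×2 table; the counts below are hypothetical, not data from the study. Pooling across studies would then combine the per-study log odds ratios, typically weighted by inverse variance.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table (treated: a events / b non-events;
    control: c events / d non-events) with a Wald 95% confidence
    interval computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    log_or = np.log(or_)
    return or_, np.exp(log_or - z * se), np.exp(log_or + z * se)

# Hypothetical counts for one sensitization endpoint.
or_, lo, hi = odds_ratio_ci(18, 12, 6, 24)
```

An interval excluding 1 (as in the 4.33 (1.62, 11.60) result quoted above) is what marks the effect as statistically significant.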

  12. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA]; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA]; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA]; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA]; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA]; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA]; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA]; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA]; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA]

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance

  13. Stochastic sensitivity analysis of periodic attractors in non-autonomous nonlinear dynamical systems based on stroboscopic map

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Kong-Ming, E-mail: kmguo@xidian.edu.cn [School of Electromechanical Engineering, Xidian University, P.O. Box 187, Xi'an 710071 (China)]; Jiang, Jun, E-mail: jun.jiang@mail.xjtu.edu.cn [State Key Laboratory for Strength and Vibration, Xi'an Jiaotong University, Xi'an 710049 (China)]

    2014-07-04

    To apply the stochastic sensitivity function method, which can estimate the probabilistic distribution of stochastic attractors, to non-autonomous dynamical systems, a 1/N-period stroboscopic map for a periodic motion is constructed in order to discretize the continuous cycle into a discrete one. In this way, the sensitivity analysis of a cycle for a discrete map can be utilized, and a numerical algorithm for the stochastic sensitivity analysis of periodic solutions of non-autonomous nonlinear dynamical systems under stochastic disturbances is devised. An externally excited Duffing oscillator and a parametrically excited laser system are studied as examples to show the validity of the proposed method. - Highlights: • A method to analyze sensitivity of stochastic periodic attractors in non-autonomous dynamical systems is proposed. • Probabilistic distribution around periodic attractors in an externally excited Φ⁶ Duffing system is obtained. • Probabilistic distribution around a periodic attractor in a parametrically excited laser system is determined.
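The stroboscopic-map construction can be sketched as follows: integrate the flow and record the state once per forcing period, so the continuous periodic orbit becomes a fixed point of a discrete map. A damped, periodically forced linear oscillator stands in here for the Duffing example, and all parameter values are illustrative.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def stroboscopic_points(f, y0, period, n_periods, steps_per_period=200):
    """Sample the flow once per forcing period; for a stable periodic
    orbit the iterates converge to a fixed point of the map."""
    dt = period / steps_per_period
    y, t = np.asarray(y0, dtype=float), 0.0
    points = [y.copy()]
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            y = rk4_step(f, t, y, dt)
            t += dt
        points.append(y.copy())
    return np.array(points)

# Linear stand-in for the forced oscillator: x'' + 0.5 x' + x = cos(1.2 t).
omega = 1.2
def forced_oscillator(t, y):
    x, v = y
    return np.array([v, -0.5 * v - x + np.cos(omega * t)])

pts = stroboscopic_points(forced_oscillator, [2.0, 0.0], 2 * np.pi / omega, 10)
```

The stochastic sensitivity analysis of the paper would then be carried out on this discrete map rather than on the continuous cycle.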

  14. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error-based weighting and one objective function

    Science.gov (United States)

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall-runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error-based weighting of observation and prior information data, local sensitivity analysis, and single-objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
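Composite scaled sensitivities, one of the statistics listed above, can be computed directly from the model's sensitivity (Jacobian) matrix; the matrix, parameter values, and weights below are hypothetical, and the formula follows the common Hill-and-Tiedeman-style definition.

```python
import numpy as np

def composite_scaled_sensitivity(jacobian, params, weights):
    """Composite scaled sensitivity: for parameter j,
    css_j = sqrt( mean_i( (dy_i/db_j * b_j * sqrt(w_i))^2 ) ),
    summarizing how much information the weighted observations
    jointly carry about each parameter."""
    scaled = jacobian * params[None, :] * np.sqrt(weights)[:, None]
    return np.sqrt((scaled ** 2).mean(axis=0))

# Hypothetical 4-observation, 2-parameter sensitivity matrix dy_i/db_j.
J = np.array([[1.0, 0.1],
              [0.8, 0.0],
              [1.2, 0.2],
              [0.9, 0.1]])
b = np.array([2.0, 5.0])   # current parameter values
w = np.ones(4)             # error-based observation weights
css = composite_scaled_sensitivity(J, b, w)
```

Parameters with small composite scaled sensitivities are exactly the ones a calibration of this kind would flag as poorly constrained by the data.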

  15. Chapter 5: Modulation Excitation Spectroscopy with Phase-Sensitive Detection for Surface Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Shulda, Sarah; Richards, Ryan M.

    2016-02-19

    Advancements in in situ spectroscopic techniques have led to significant progress being made in elucidating heterogeneous reaction mechanisms. The potential of these progressive methods is often limited only by the complexity of the system and noise in the data. Short-lived intermediates can be challenging, if not impossible, to identify with conventional spectral analysis. Often equally difficult is separating signals that arise from active and inactive species. Modulation excitation spectroscopy combined with phase-sensitive detection analysis is a powerful tool for removing noise from the data while simultaneously revealing the underlying kinetics of the reaction. A stimulus is applied at a constant frequency to the reaction system, for example, a reactant cycled with an inert phase. Through mathematical manipulation of the data, any signal contributing to the overall spectrum but not oscillating with the same frequency as the stimulus will be dampened or removed. With phase-sensitive detection, signals oscillating with the stimulus frequency but with various lag times are amplified, providing valuable kinetic information. In this chapter, some examples are provided from the literature that have successfully used modulation excitation spectroscopy with phase-sensitive detection to uncover previously unobserved reaction intermediates and kinetics. Examples from a broad range of spectroscopic methods are included to provide perspective to the reader.
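The phase-sensitive detection step can be sketched as a digital lock-in: multiply the measured signal by a reference at the stimulation frequency and average over whole periods, so static backgrounds and uncorrelated noise average toward zero. The synthetic signal below is illustrative, not real spectroscopic data.

```python
import numpy as np

def phase_sensitive_detection(signal, t, freq, phase=0.0):
    """Demodulate at the stimulation frequency: components oscillating
    at `freq` with the chosen phase lag are recovered; everything else
    averages toward zero over whole periods."""
    ref = np.sin(2 * np.pi * freq * t + phase)
    return 2.0 * np.mean(signal * ref)

# Synthetic response: a species following a 1 Hz stimulus (amplitude
# 0.7), on top of a static background and broadband noise.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 10000, endpoint=False)  # 10 whole periods
signal = (0.7 * np.sin(2 * np.pi * 1.0 * t)
          + 5.0
          + rng.normal(scale=0.2, size=t.size))
amplitude = phase_sensitive_detection(signal, t, freq=1.0)
```

Scanning `phase` recovers the lag of each responding species, which is the kinetic information the chapter emphasizes.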

  16. Topical glucocorticoid augments both allergic and non-allergic cutaneous reactions in mice when applied at the afferent stage of contact sensitivity

    Directory of Open Access Journals (Sweden)

    Ken Igawa

    1997-01-01

    Full Text Available Using a murine model, topical application of glucocorticoid ([GC], 50 μg diflucortolone valerate in ethanol) on a sensitized site (flank skin), 7 times before and 2 times after sensitization on alternate days, augmented expression of contact sensitivity reactions on the challenged site (ear skin). This augmentation was due to the systemic effect of percutaneously absorbed GC, because topical GC also augmented the skin reaction in mice that had been sensitized on a site separate from that of the GC application. In contrast, topical application of GC inhibited the contact sensitivity skin reaction when applied on the challenged sites. Intraperitoneal injection of the same dose of GC also failed to augment the skin reactions. Glucocorticoid augmented the contact sensitivity skin reactions and these persisted for 96 h after the control skin reactions subsided. Early phase (1–6 h) skin reactions were also induced or augmented when dinitrofluorobenzene or trinitrochlorobenzene, but not oxazolone, were used as the sensitizer; GC also augmented the non-specific reactions to croton oil or to a suboptimal concentration of hapten in normal mice. The numbers of Langerhans cells (LC) were reduced in both the GC-application and challenged sites. Haptenated LC from GC-treated skin showed a rather weak sensitizing ability, which was not statistically significant. Transfer of lymph node cells and/or spleen cells or serum from GC-pre-treated mice failed to induce a contact sensitivity reaction in normal recipient mice. These results suggest that topical GC might augment cutaneous inflammation through a possible modulation of local cytokine production, regardless of the number of LC or the presence of sensitized lymphocytes.

  17. Numerical daemons in hydrological modeling: Effects on uncertainty assessment, sensitivity analysis and model predictions

    Science.gov (United States)

    Kavetski, D.; Clark, M. P.; Fenicia, F.

    2011-12-01

    Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc., are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated

  18. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    Science.gov (United States)

    Razavi, S.; Gupta, H. V.

    2014-12-01

    Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
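The contrast the authors draw between local, derivative-based definitions of sensitivity and global, variance-based (Sobol-type) definitions can be made concrete with a small numerical sketch. The model, sample size and binning estimator below are illustrative assumptions, not taken from the paper; they show how the two notions can disagree when a factor is locally flat but globally influential.

```python
# Toy comparison of two notions of "sensitivity" for y = 2*x1 + x2**3.
# Hypothetical model; sample sizes chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    return 2.0 * x1 + x2 ** 3

# Local, derivative-based sensitivity at the nominal point (0, 0):
eps = 1e-6
s_local_x2 = (model(0.0, eps) - model(0.0, -eps)) / (2 * eps)  # ~0: x2 looks inert

# Global, variance-based (Sobol-type) first-order estimate over x ~ U(-1, 1):
n = 100_000
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = model(x1, x2)

def first_order(xi, y, bins=50):
    # First-order index Var(E[y | xi]) / Var(y), estimated by binning xi.
    edges = np.linspace(xi.min(), xi.max(), bins + 1)
    idx = np.digitize(xi, edges[1:-1])
    cond_means = np.array([y[idx == k].mean() for k in range(bins)])
    return cond_means.var() / y.var()

s1, s2 = first_order(x1, y), first_order(x2, y)
```

Here the local derivative declares x2 irrelevant (the cubic is flat at the origin), while the variance-based index correctly assigns it a non-zero share of output variance.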

  19. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
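A hedged sketch of the DELSA idea: evaluate a derivative-based, locally normalized sensitivity measure at many points spread across parameter space, then inspect the distribution of that measure rather than a single global number. The two-parameter toy model, the crude Latin-hypercube-style sample and the normalization below are illustrative stand-ins, not the paper's hydrologic models or exact estimator.

```python
# Distributed evaluation of local sensitivity: one locally normalized
# first-order measure per sampled parameter set. Model is a made-up toy.
import numpy as np

rng = np.random.default_rng(1)

def model(theta):                      # toy nonlinear "reservoir" response
    k, n = theta
    return k * np.exp(-n) + n ** 2

def local_indices(theta, h=1e-6):
    grads = np.empty(2)
    for j in range(2):                 # central finite differences
        dp = np.zeros(2); dp[j] = h
        grads[j] = (model(theta + dp) - model(theta - dp)) / (2 * h)
    w = grads ** 2
    return w / w.sum()                 # local share of first-order variance

# Crude Latin-hypercube-style sample of the unit square for (k, n):
n_pts = 200
u = (rng.permutation(n_pts) + rng.random(n_pts)) / n_pts
v = (rng.permutation(n_pts) + rng.random(n_pts)) / n_pts
points = np.column_stack([u, v])

S = np.array([local_indices(p) for p in points])   # one row per sample point
frac_k_dominant = (S[:, 0] > 0.5).mean()           # how often k dominates
```

Looking at the whole distribution of `S` across `points` is what lets DELSA report, e.g., that a parameter dominates in some regions of parameter space but not others.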

  20. Spherical harmonic decomposition applied to spatial-temporal analysis of human high-density EEG

    CERN Document Server

    Wingeier, B M; Silberstein, R B; Wingeier, Brett M.; Nunez, Paul L.; Silberstein, Richard B.

    2001-01-01

    We demonstrate an application of spherical harmonic decomposition to analysis of the human electroencephalogram (EEG). We implement two methods and discuss issues specific to analysis of hemispherical, irregularly sampled data. Performance of the methods and spatial sampling requirements are quantified using simulated data. The analysis is applied to experimental EEG data, confirming earlier reports of an approximate frequency-wavenumber relationship in some bands.
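The decomposition described above amounts to projecting irregularly sampled scalp potentials onto a spherical harmonic basis, which for scattered electrode positions is naturally done by least squares. The sketch below fits real harmonics up to degree 1 to synthetic hemispherical data; the electrode positions, noise level and coefficients are invented for illustration and the basis is deliberately truncated far below what real EEG analysis would use.

```python
# Least-squares spherical harmonic fit to irregularly sampled "scalp"
# potentials. Real (unnormalized) harmonics up to degree 1 only.
import numpy as np

rng = np.random.default_rng(2)
n_elec = 64
theta = np.arccos(rng.uniform(0.3, 1.0, n_elec))   # polar angle, upper cap
phi = rng.uniform(0.0, 2 * np.pi, n_elec)          # azimuth

def basis(theta, phi):
    return np.column_stack([
        np.ones_like(theta),                # l=0
        np.cos(theta),                      # l=1, m=0
        np.sin(theta) * np.cos(phi),        # l=1, m=1
        np.sin(theta) * np.sin(phi),        # l=1, m=-1
    ])

true_coeffs = np.array([0.5, 2.0, -1.0, 0.3])
V = basis(theta, phi) @ true_coeffs + 0.01 * rng.standard_normal(n_elec)

coeffs, *_ = np.linalg.lstsq(basis(theta, phi), V, rcond=None)
```

With hemispherical (rather than full-sphere) coverage the basis columns are no longer orthogonal, which is exactly why a least-squares solve, rather than a direct quadrature, is needed.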

  1. Spherical harmonic decomposition applied to spatial-temporal analysis of human high-density EEG

    OpenAIRE

    Wingeier, Brett M.; Nunez, Paul L.; Silberstein, Richard B.

    2000-01-01

    We demonstrate an application of spherical harmonic decomposition to analysis of the human electroencephalogram (EEG). We implement two methods and discuss issues specific to analysis of hemispherical, irregularly sampled data. Performance of the methods and spatial sampling requirements are quantified using simulated data. The analysis is applied to experimental EEG data, confirming earlier reports of an approximate frequency-wavenumber relationship in some bands.

  2. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  3. SENSITIVITY ANALYSIS BASED ON LANCZOS ALGORITHM IN STRUCTURAL DYNAMICS

    Institute of Scientific and Technical Information of China (English)

    李书; 王波; 胡继忠

    2003-01-01

    The sensitivity calculation formulas in structural dynamics were developed by utilizing the mathematical theorem and new definitions of sensitivities, so the singularity problem of sensitivity with repeated eigenvalues is solved completely. To improve the computational efficiency, a reduced system is obtained based on Lanczos vectors. After incorporating the mathematical theory with the Lanczos algorithm, an approximate sensitivity solution can be obtained. A numerical example is presented to illustrate the performance of the method.
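For the simple case of a distinct eigenvalue of a symmetric stiffness matrix, the classical first-order sensitivity formula d(lambda)/dp = phi^T (dK/dp) phi can be verified numerically. The 3x3 matrices below are made up for illustration; the repeated-eigenvalue treatment and Lanczos reduction that are the paper's actual contribution are not reproduced here.

```python
# Eigenvalue sensitivity check: analytic first-order formula vs a
# central finite difference, for a toy parameter-dependent stiffness K(p).
import numpy as np

p = 1.0
def K(p):                      # toy symmetric stiffness matrix
    return np.array([[2.0 + p, -1.0,  0.0],
                     [-1.0,     2.0, -1.0],
                     [ 0.0,    -1.0,  2.0 * p]])

dK_dp = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 2.0]])

lam, phi = np.linalg.eigh(K(p))
i = 0                                       # track the lowest mode
s_analytic = phi[:, i] @ dK_dp @ phi[:, i]  # d(lambda_i)/dp, first order

h = 1e-6                                    # finite-difference check
lam_p = np.linalg.eigh(K(p + h))[0][i]
lam_m = np.linalg.eigh(K(p - h))[0][i]
s_fd = (lam_p - lam_m) / (2 * h)
```

This formula is exactly what breaks down when eigenvalues repeat (the eigenvector `phi[:, i]` is no longer unique), which motivates the specialized treatment in the paper.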

  4. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    Science.gov (United States)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-01-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
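The workflow described (Latin hypercube sampling of inputs, simulation at the sample points, a least-squares response-surface fit, and a coefficient of determination) can be sketched in a few lines. The "thermodynamic model" below is a stand-in quadratic, not the real SCTEG physics, and the variable names and ranges are invented.

```python
# LHS sampling + least-squares response surface + R^2, on a toy model.
import numpy as np

rng = np.random.default_rng(3)

def simulate(flux, load):                  # placeholder for the physics model
    return 0.04 * flux * load - 0.5 * load ** 2 + 1.2 * flux

# Latin hypercube sample: stratify each input, then shuffle the strata.
n = 50
strata = (np.arange(n) + rng.random(n)) / n
flux = 600 + 400 * rng.permutation(strata)     # e.g. solar flux, W/m^2
load = 1 + 9 * rng.permutation(strata)         # e.g. load resistance, ohms

y = simulate(flux, load)

# Fit a quadratic response surface by least squares.
X = np.column_stack([np.ones(n), flux, load, flux * load, load ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()                 # coefficient of determination
```

Because the toy model lies exactly in the span of the fitted basis, `r2` here is essentially 1; for the real SCTEG model the paper reports 99.2%.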

  5. Sensitivity Analysis for the CLIC Damping Ring Inductive Adder

    CERN Document Server

    Holma, Janne

    2012-01-01

    The CLIC study is exploring the scheme for an electron-positron collider with high luminosity and a nominal centre-of-mass energy of 3 TeV. The CLIC pre-damping rings and damping rings will produce, through synchrotron radiation, the ultra-low emittance beam with high bunch charge that is necessary for the luminosity performance of the collider. To limit the beam emittance blow-up due to oscillations, the pulse generators for the damping ring kickers must provide extremely flat, high-voltage pulses. The specifications for the extraction kickers of the CLIC damping rings are particularly demanding: the flattop of the output pulse must be of 160 ns duration, at 12.5 kV and 250 A, with a combined ripple and droop of not more than ±0.02 %. An inductive adder allows the use of different modulation techniques and is therefore a very promising approach to meeting the specifications. PSpice has been utilised to carry out a sensitivity analysis of the predicted output pulse to the value of both individual and groups of circuit compon...

  6. A Sensitivity Analysis of fMRI Balloon Model

    KAUST Repository

    Zayane, Chadia

    2015-04-22

    Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have added knowledge in some form, either by choosing prior distributions for the parameters, by freezing some of them, or by looking for the solution as a projection on a natural basis of some vector space. In these studies, identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need to add knowledge and to choose certain paradigms, and completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.

  7. Sensitivity analysis of surface runoff generation in urban flood forecasting.

    Science.gov (United States)

    Simões, N E; Leitão, J P; Maksimović, C; Sá Marques, A; Pina, R

    2010-01-01

    Reliable flood forecasting requires hydraulic models capable of estimating pluvial flooding fast enough to enable successful operational responses. Increased computational speed can be achieved by using a 1D/1D model, since 2D models are too computationally demanding. Further gains can be made by simplifying the 1D network models, removing or modifying some secondary elements. The Urban Water Research Group (UWRG) of Imperial College London developed a tool that automatically analyses, quantifies and generates the 1D overland flow network. The overland flow network features (ponds and flow pathways) generated by this methodology depend on the number of sewer network manholes and sewer inlets, as some of the overland flow pathways start at manhole (or sewer inlet) locations. Thus, if a simplified version of the sewer network has fewer manholes (or sewer inlets) than the original one, the overland flow network will consequently differ. This paper compares different overland flow networks generated with different levels of sewer network skeletonisation. A sensitivity analysis is carried out for one catchment area in Coimbra, Portugal, in order to evaluate overland flow network characteristics.

  8. Plans for a sensitivity analysis of bridge-scour computations

    Science.gov (United States)

    Dunn, David D.; Smith, Peter N.

    1993-01-01

    Plans for an analysis of the sensitivity of Level 2 bridge-scour computations are described. Cross-section data from 15 bridge sites in Texas are modified to reflect four levels of field effort ranging from no field surveys to complete surveys. Data from United States Geological Survey (USGS) topographic maps will be used to supplement incomplete field surveys. The cross sections are used to compute the water-surface profile through each bridge for several T-year recurrence-interval design discharges. The effect of determining the downstream energy grade-line slope from topographic maps is investigated by systematically varying the starting slope of each profile. The water-surface profile analyses are then used to compute potential scour resulting from each of the design discharges. The planned results will be presented in the form of exceedance-probability versus scour-depth plots with the maximum and minimum scour depths at each T-year discharge presented as error bars.

  9. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    Science.gov (United States)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-03-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.

  10. GPS/INS Integration: A Performance Sensitivity Analysis

    Institute of Scientific and Technical Information of China (English)

    Wang Jin-ling; H. K. Lee; C. Rizos

    2003-01-01

    Inertial Navigation System (INS) and Global Positioning System (GPS) technologies have been widely used in a variety of positioning and navigation applications. Both systems have their unique features and shortcomings. Therefore, the integration of GPS with INS is now critical to overcome each of their drawbacks and to maximize each of their benefits. The integration of GPS with INS can be implemented using a Kalman filter in such modes as loosely, tightly and ultra-tightly coupled. In all these integration modes the INS error states, together with any navigation state (position, velocity, attitude) and other unknown parameters of interest, are estimated using GPS measurements. In a high performance system it is expected that all these unknown states will be precisely estimated. Although it has been noted that both the quality of the GPS measurements and the trajectory and/or manoeuvre characteristics of the problem will have impacts on system performance, a systematic sensitivity analysis is still lacking. This paper will address this issue through real data analyses. The performance analysis results are very relevant to system design and platform trajectory and/or manoeuvre optimisation.
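The error-state estimation described above can be illustrated with a minimal one-dimensional loosely coupled sketch: the INS dead-reckons position and velocity, and a Kalman filter uses GPS-minus-INS position residuals to estimate the INS error. All matrices, noise levels and the injected velocity bias below are invented for illustration; a real integration is multi-axis and includes attitude and sensor-bias states.

```python
# 1-D loosely coupled GPS/INS error-state Kalman filter sketch.
import numpy as np

rng = np.random.default_rng(4)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # error-state transition
H = np.array([[1.0, 0.0]])                 # GPS observes position error
Q = np.diag([1e-4, 1e-4])                  # process noise (invented)
R = np.array([[4.0]])                      # GPS position noise, m^2

x = np.zeros(2)                            # estimated INS error (pos, vel)
P = np.eye(2) * 10.0
true_err = np.array([0.0, 0.5])            # actual INS velocity bias, m/s

for _ in range(60):
    true_err = F @ true_err                # INS error grows over time
    z = H @ true_err + rng.normal(0.0, 2.0, 1)   # GPS-minus-INS residual
    x, P = F @ x, F @ P @ F.T + Q                # predict
    S = H @ P @ H.T + R
    Kk = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + (Kk @ (z - H @ x)).ravel()           # update
    P = (np.eye(2) - Kk @ H) @ P

vel_bias_error = abs(x[1] - true_err[1])   # how well the bias is recovered
```

The sensitivity questions raised in the abstract correspond, in this sketch, to how `vel_bias_error` degrades as `R` grows (poorer GPS) or as the true error dynamics depart from `F` (unmodelled manoeuvres).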

  11. Sensitivity analysis on parameters and processes affecting vapor intrusion risk.

    Science.gov (United States)

    Picone, Sara; Valstar, Johan; van Gaans, Pauline; Grotenhuis, Tim; Rijnaarts, Huub

    2012-05-01

    A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the vast majority of previous studies, this model accounts for vertical variation of soil water saturation and includes aerobic biodegradation. The attenuation factor (ratio between concentration in the crawl space and source concentration) and the characteristic time to approach maximum concentrations were calculated and compared for a variety of scenarios. These concepts allow an understanding of controlling mechanisms and aid in the identification of critical parameters to be collected for field situations. The relative distance of the source to the nearest gas-filled pores of the unsaturated zone is the most critical parameter because diffusive contaminant transport is significantly slower in water-filled pores than in gas-filled pores. Therefore, attenuation factors decrease and characteristic times increase with increasing relative distance of the contaminant dissolved source to the nearest gas diffusion front. Aerobic biodegradation may decrease the attenuation factor by up to three orders of magnitude. Moreover, the occurrence of water table oscillations is of importance. Dynamic processes leading to a retreating water table increase the attenuation factor by two orders of magnitude because of the enhanced gas phase diffusion.

  12. Sensitivity analysis on parameters and processes affecting vapor intrusion risk

    KAUST Repository

    Picone, Sara

    2012-03-30

    A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the vast majority of previous studies, this model accounts for vertical variation of soil water saturation and includes aerobic biodegradation. The attenuation factor (ratio between concentration in the crawl space and source concentration) and the characteristic time to approach maximum concentrations were calculated and compared for a variety of scenarios. These concepts allow an understanding of controlling mechanisms and aid in the identification of critical parameters to be collected for field situations. The relative distance of the source to the nearest gas-filled pores of the unsaturated zone is the most critical parameter because diffusive contaminant transport is significantly slower in water-filled pores than in gas-filled pores. Therefore, attenuation factors decrease and characteristic times increase with increasing relative distance of the contaminant dissolved source to the nearest gas diffusion front. Aerobic biodegradation may decrease the attenuation factor by up to three orders of magnitude. Moreover, the occurrence of water table oscillations is of importance. Dynamic processes leading to a retreating water table increase the attenuation factor by two orders of magnitude because of the enhanced gas phase diffusion. © 2012 SETAC.
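The headline finding, that the water-filled distance between the dissolved source and the gas diffusion front dominates, follows from the orders-of-magnitude gap between aqueous and gas-phase diffusivities. The back-of-the-envelope sketch below treats steady-state diffusion through a water layer and a gas layer in series; the diffusivities are typical textbook orders of magnitude, not values from the paper, and phase partitioning is ignored.

```python
# Series diffusion resistance through water-filled and gas-filled soil.
# Diffusivities are generic orders of magnitude, for illustration only.
D_water = 1e-9    # m^2/s, aqueous diffusion
D_gas = 1e-6      # m^2/s, effective gas-phase diffusion in soil

def resistance(L_water, L_gas):
    # Series resistance to diffusive transport (s/m); flux ~ C_source / R
    return L_water / D_water + L_gas / D_gas

# Moving the gas front 0.1 m further from the source (thicker water layer):
R_near = resistance(0.01, 1.0)    # source 1 cm below the gas front
R_far = resistance(0.11, 0.9)     # source 11 cm below the gas front
ratio = R_far / R_near            # ~10x more resistance, ~10x attenuation
```

Even though the total transport distance is unchanged, a 10 cm thicker water layer raises the diffusive resistance about tenfold, consistent with the abstract's claim that attenuation factors decrease sharply with the distance to the nearest gas-filled pores.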

  13. Sensitivity analysis on an AC600 aluminum skin component

    Science.gov (United States)

    Mendiguren, J.; Agirre, J.; Mugarra, E.; Galdos, L.; Saenz de Argandoña, E.

    2016-08-01

    New materials are being introduced in the car body in order to reduce weight and fulfil international CO2 emission regulations. Among them, aluminum alloys are increasingly applied to skin panels. Even if these alloys are beneficial for the car design, the manufacturing of these components becomes more complex. In this regard, numerical simulations have become a necessary tool for die designers. There are multiple factors affecting the accuracy of these simulations, e.g. hardening, anisotropy, lubrication and elastic behavior. Numerous studies have been conducted in recent years on the stamping of high-strength steel components and on developing new anisotropic models for aluminum cup drawing. However, the impact of correct modelling on the latest aluminums for the manufacturing of skin panels has not yet been analyzed. In this work, first, the new AC600 aluminum alloy of JLR-Novelis is characterized for anisotropy, kinematic hardening, friction coefficient and elastic behavior. Next, a sensitivity analysis is conducted on the simulation of a U-channel (with drawbeads). Then, the numerical and experimental results are correlated in terms of springback and failure. Finally, some conclusions are drawn.

  14. Understanding earth system models: how Global Sensitivity Analysis can help

    Science.gov (United States)

    Pianosi, Francesca; Wagener, Thorsten

    2017-04-01

    Computer models are an essential element of earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit a complicated behaviour because of the high level of interactions between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can be a strong limitation to their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we will use a range of examples across earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so to focus efforts for uncertainty reduction; finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.
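One widely used GSA screening technique (presented here as a generic example, not necessarily the authors' choice) is the Morris method of elementary effects: each trajectory through parameter space perturbs one factor at a time, and the mean absolute elementary effect ranks factor influence at a fraction of the cost of variance-based methods. The three-parameter model below is made up.

```python
# Morris elementary-effects screening on a toy three-factor model.
import numpy as np

rng = np.random.default_rng(5)

def model(x):                              # toy "earth system" response
    return x[0] + 5.0 * x[1] ** 2 + 0.01 * x[2]

k, r, delta = 3, 20, 0.2                   # factors, trajectories, step size
EE = [[] for _ in range(k)]

for _ in range(r):
    x = rng.uniform(0, 1 - delta, k)       # random start, step stays in [0,1]
    y0 = model(x)
    for j in rng.permutation(k):           # one-at-a-time moves
        x_new = x.copy(); x_new[j] += delta
        EE[j].append((model(x_new) - y0) / delta)
        x, y0 = x_new, model(x_new)

mu_star = np.array([np.abs(np.array(e)).mean() for e in EE])  # mean |EE|
ranking = np.argsort(-mu_star)             # most to least influential factor
```

Screening of this kind supports the uses listed in the abstract: identifying which inputs drive output uncertainty and which can safely be fixed.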

  15. Model Proposition for the Fiscal Policies Analysis Applied in Economic Field

    Directory of Open Access Journals (Sweden)

    Larisa Preda

    2007-05-01

    Full Text Available This paper presents a study of fiscal policy as applied to economic development. Correlations between macroeconomic and fiscal indicators constitute the first step in our analysis. The next step is the proposal of a new model for fiscal and budgetary choices. This model is applied to data from the Romanian case.

  16. Research in progress in applied mathematics, numerical analysis, and computer science

    Science.gov (United States)

    1990-01-01

    Research conducted at the Institute in Science and Engineering in applied mathematics, numerical analysis, and computer science is summarized. The Institute conducts unclassified basic research in applied mathematics in order to extend and improve problem solving capabilities in science and engineering, particularly in aeronautics and space.

  17. Global sensitivity analysis and uncertainties in SEA models of vibroacoustic systems

    Science.gov (United States)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2017-06-01

    The effect of parametric uncertainties on the dispersion of Statistical Energy Analysis (SEA) models of structural-acoustic coupled systems is studied with the Fourier amplitude sensitivity test (FAST) method. The method is first applied to an academic example representing a transmission suite, then to a more complex industrial structure from the space industry. Two sets of parameters are considered, namely errors on the SEA model's coefficients, or directly the engineering parameters. The first case is an intrusive approach, but makes it possible to identify the dominant phenomena taking place in a given configuration. The second is non-intrusive and appeals more to engineering considerations, studying the effect of input parameters such as geometry or material characteristics on the SEA outputs. A study of the distribution of results in each frequency band with the same sampling shows some interesting features, such as bimodal distributions in some ranges.
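The FAST method estimates variance-based sensitivity indices by driving all factors simultaneously along a space-filling search curve, each at its own frequency, and reading every factor's first-order contribution off the Fourier spectrum of the output. The sketch below applies a bare-bones version to a made-up linear model; the frequencies, sample size and two-harmonic cutoff are illustrative choices, not values from the paper.

```python
# Minimal Fourier amplitude sensitivity test (FAST) on a toy linear model.
import numpy as np

def model(x):
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]

omega = np.array([11, 21, 27])        # distinct driving frequencies
N = 1001
s = np.linspace(-np.pi, np.pi, N, endpoint=False)
# Search curve filling the unit cube: factor i oscillates at frequency i.
x = 0.5 + np.arcsin(np.sin(np.outer(s, omega))) / np.pi
y = model(x)

V = np.var(y)                         # total output variance along the curve
S = []
for w in omega:
    Vi = 0.0
    for h in (1, 2):                  # partial variance at w and one harmonic
        A = 2.0 * np.mean(y * np.cos(h * w * s))
        B = 2.0 * np.mean(y * np.sin(h * w * s))
        Vi += (A ** 2 + B ** 2) / 2.0
    S.append(Vi / V)
S = np.array(S)                       # approximate first-order indices
```

For this linear model the indices should be close to the squared coefficients divided by their sum, i.e. roughly 0.19, 0.76 and 0.05, with a small deficit because only a couple of harmonics are summed.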

  18. Sensitivity analysis of FBMC-based multi-cellular networks to synchronization errors and HPA nonlinearities

    Science.gov (United States)

    Elmaroud, Brahim; Faqihi, Ahmed; Aboutajdine, Driss

    2017-01-01

    In this paper, we study the performance of asynchronous and nonlinear FBMC-based multi-cellular networks. The considered system includes a reference mobile perfectly synchronized with its reference base station (BS) and K interfering BSs. Both synchronization errors and high-power amplifier (HPA) distortions will be considered and a theoretical analysis of the interference signal will be conducted. On the basis of this analysis, we will derive an accurate expression of signal-to-noise-plus-interference ratio (SINR) and bit error rate (BER) in the presence of a frequency-selective channel. In order to reduce the computational complexity of the BER expression, we applied an interesting lemma based on the moment generating function of the interference power. Finally, the proposed model is evaluated through computer simulations which show a high sensitivity of the asynchronous FBMC-based multi-cellular network to HPA nonlinear distortions.

  19. Sensitivity Analysis of Wind Plant Performance to Key Turbine Design Parameters: A Systems Engineering Approach; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dykes, K.; Ning, A.; King, R.; Graf, P.; Scott, G.; Veers, P.

    2014-02-01

    This paper introduces the development of a new software framework for research, design, and development of wind energy systems which is meant to 1) represent a full wind plant including all physical and nonphysical assets and associated costs up to the point of grid interconnection, 2) allow use of interchangeable models of varying fidelity for different aspects of the system, and 3) support system level multidisciplinary analyses and optimizations. This paper describes the design of the overall software capability and applies it to a global sensitivity analysis of wind turbine and plant performance and cost. The analysis was performed using three different model configurations involving different levels of fidelity, which illustrate how increasing fidelity can preserve important system interactions that build up to overall system performance and cost. Analyses were performed for a reference wind plant based on the National Renewable Energy Laboratory's 5-MW reference turbine at a mid-Atlantic offshore location within the United States.

  20. Sensitivity analysis of a time-delayed thermo-acoustic system via an adjoint-based approach

    CERN Document Server

    Magri, Luca

    2013-01-01

    We apply adjoint-based sensitivity analysis to a time-delayed thermo-acoustic system: a Rijke tube containing a hot wire. We calculate how the growth rate and frequency of small oscillations about a base state are affected either by a generic passive control element in the system (the structural sensitivity analysis) or by a generic change to its base state (the base-state sensitivity analysis). We illustrate the structural sensitivity by calculating the effect of a second hot wire with a small heat release parameter. In a single calculation, this shows how the second hot wire changes the growth rate and frequency of the small oscillations, as a function of its position in the tube. We then examine the components of the structural sensitivity in order to determine the passive control mechanism that has the strongest influence on the growth rate. We find that a force applied to the acoustic momentum equation in the opposite direction to the instantaneous velocity is the most stabilizing feedback mechanism. We ...

  1. Using genomic DNA-based probe-selection to improve the sensitivity of high-density oligonucleotide arrays when applied to heterologous species

    Directory of Open Access Journals (Sweden)

    Townsend Henrik J

    2005-11-01

    Full Text Available Abstract High-density oligonucleotide (oligo) arrays are a powerful tool for transcript profiling. Arrays based on GeneChip® technology are amongst the most widely used, although GeneChip® arrays are currently available for only a small number of plant and animal species. Thus, we have developed a method to improve the sensitivity of high-density oligonucleotide arrays when applied to heterologous species and tested the method by analysing the transcriptome of Brassica oleracea L., a species for which no GeneChip® array is available, using a GeneChip® array designed for Arabidopsis thaliana (L.) Heynh. Genomic DNA from B. oleracea was labelled and hybridised to the ATH1-121501 GeneChip® array. Arabidopsis thaliana probe-pairs that hybridised to the B. oleracea genomic DNA on the basis of the perfect-match (PM) probe signal were then selected for subsequent B. oleracea transcriptome analysis using a .cel file parser script to generate probe mask files. The transcriptional response of B. oleracea to a mineral nutrient (phosphorus; P) stress was quantified using probe mask files generated for a wide range of gDNA hybridisation intensity thresholds. An example probe mask file generated with a gDNA hybridisation intensity threshold of 400 removed >68% of the available PM probes from the analysis but retained >96% of the available A. thaliana probe-sets. Ninety-nine of these genes were then identified as significantly regulated under P stress in B. oleracea, including homologues of P stress responsive genes in A. thaliana. Increasing the gDNA hybridisation intensity threshold up to 500 for probe-selection increased the sensitivity of the GeneChip® array to detect regulation of gene expression in B. oleracea under P stress by up to 13-fold. Our open-source software to create probe mask files is freely available at http://affymetrix.arabidopsis.info/xspecies/ and may be used to facilitate transcriptomic analyses of a wide range of plant and animal
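The probe-selection step has a simple core: keep only probes whose perfect-match signal against the heterologous genomic DNA exceeds a threshold, then check how many probe-sets still retain at least one probe. The sketch below re-creates that logic on fabricated probe names and signals (the real tool parses .cel files and writes Affymetrix probe mask files, which is not reproduced here).

```python
# Threshold-based probe selection on fabricated (gene, probe) -> signal data.
# Mirrors the idea that many probes are dropped but most probe-sets survive.
probes = {
    ("At1g01010", 1): 950, ("At1g01010", 2): 120, ("At1g01010", 3): 430,
    ("At2g33710", 1): 80,  ("At2g33710", 2): 60,
    ("At5g20150", 1): 1500, ("At5g20150", 2): 880,
}

threshold = 400                                   # gDNA intensity cutoff
kept = {k for k, signal in probes.items() if signal > threshold}
kept_sets = {gene for gene, _ in kept}            # sets with >=1 kept probe
all_sets = {gene for gene, _ in probes}

frac_probes_kept = len(kept) / len(probes)        # many probes dropped...
frac_sets_kept = len(kept_sets) / len(all_sets)   # ...most sets survive
```

On this toy data only 4 of 7 probes pass, yet 2 of 3 probe-sets remain usable, the same asymmetry the abstract reports (>68% of probes removed, >96% of probe-sets retained).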

  2. Measurement and Analysis on Neutron Position Sensitive Detector at CARR

    Institute of Scientific and Technical Information of China (English)

    GAO; Jian-bo; HAO; Li-jie; LIU; Xin-zhi; MA; Xiao-bai; LI; Yu-qing

    2013-01-01

    A neutron position sensitive detector is one of the key components of a neutron scattering spectrometer. As the eyes of the spectrometer, the detector is mainly used for recording the position and intensity of the neutrons. The 16 linear position sensitive detectors from the GE Reuter-Stokes Company have been measured

  3. An educationally inspired illustration of two-dimensional Quantitative Microbiological Risk Assessment (QMRA) and sensitivity analysis.

    Science.gov (United States)

    Vásquez, G A; Busschaert, P; Haberbeck, L U; Uyttendaele, M; Geeraerd, A H

    2014-11-03

    Quantitative Microbiological Risk Assessment (QMRA) is a structured methodology used to assess the risk involved in ingestion of a pathogen. It applies mathematical models combined with an accurate exploitation of data sets, represented by distributions and - in the case of two-dimensional Monte Carlo simulations - their hyperparameters. This research aims to highlight the background information, assumptions and truncations of a two-dimensional QMRA and advanced sensitivity analysis. We believe that such a detailed listing is not always clearly presented in actual risk assessment studies, while it is essential to ensure reliable and realistic simulations and interpretations. As a case study, we consider the occurrence of listeriosis in smoked fish products in Belgium during the period 2008-2009, using two-dimensional Monte Carlo simulation and two sensitivity analysis methods (Spearman correlation and Sobol sensitivity indices) to estimate the most relevant factors of the final risk estimate. A risk estimate of 0.018% per consumption of contaminated smoked fish by an immunocompromised person was obtained. The final estimate of listeriosis cases (23) is consistent with the number actually reported for the same period and population. Variability in the final risk estimate is determined by the variability regarding (i) consumer refrigerator temperatures, (ii) the reference growth rate of L. monocytogenes, (iii) the minimum growth temperature of L. monocytogenes and (iv) consumer portion size. Variability in the initial contamination level of L. monocytogenes tends to appear as a determinant of risk variability only when the minimum growth temperature is not included in the sensitivity analysis; when it is included, the impact of variability in the initial contamination level of L. monocytogenes disappears. Uncertainty determinants of the final risk indicated the need to gather more information on the reference growth rate and the minimum growth temperature of L. monocytogenes.
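The Spearman-correlation ranking used in such analyses can be sketched with a toy dose model. The variable names, distributions and the exponential temperature dependence below are illustrative assumptions, not the study's actual exposure model; the point is only how rank correlations flag the inputs that drive output variability.

```python
import random

def rankdata(xs):
    # ordinal ranks (no ties expected for continuous Monte Carlo samples)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = float(r + 1)
    return ranks

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def spearman(xs, ys):
    # Spearman rho = Pearson correlation of the ranks
    return pearson(rankdata(xs), rankdata(ys))

random.seed(0)
n = 2000
temp = [random.uniform(2, 10) for _ in range(n)]       # storage temperature (hypothetical)
portion = [random.uniform(20, 100) for _ in range(n)]  # portion size (hypothetical)
# toy dose: exponential in temperature, linear in portion size
dose = [p * 2 ** (0.5 * t) for t, p in zip(temp, portion)]
rho_temp = spearman(temp, dose)
rho_portion = spearman(portion, dose)
```

In this construction temperature dominates the output variance, so its rank correlation comes out clearly larger than the portion-size one, mirroring how the study ranks refrigerator temperature above portion size.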

  4. Performance analysis of high quality parallel preconditioners applied to 3D finite element structural analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kolotilina, L.; Nikishin, A.; Yeremin, A. [and others

    1994-12-31

    The solution of large systems of linear equations is a crucial bottleneck when performing 3D finite element analysis of structures. Also, in many cases the reliability and robustness of iterative solution strategies, and their efficiency when exploiting hardware resources, fully determine the scope of industrial applications which can be solved on a particular computer platform. This is especially true for modern vector/parallel supercomputers with large vector length and for modern massively parallel supercomputers. Preconditioned iterative methods have been successfully applied to industrial-class finite element analysis of structures. The construction and application of high quality preconditioners constitutes a high percentage of the total solution time. Parallel implementation of high quality preconditioners on such architectures is a formidable challenge. Two common types of existing preconditioners are implicit and explicit preconditioners. Implicit preconditioners (e.g. incomplete factorizations of several types) are generally high quality but require the solution of lower and upper triangular systems of equations per iteration, which are difficult to parallelize without deteriorating the convergence rate. Explicit preconditioners (e.g. polynomial preconditioners or Jacobi-like preconditioners) require sparse matrix-vector multiplications and can be parallelized, but their preconditioning qualities are less than desirable. The authors present results of numerical experiments with Factorized Sparse Approximate Inverses (FSAI) for symmetric positive definite linear systems. These are high quality preconditioners that possess a large resource of parallelism by construction without increasing the serial complexity.
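As a contrast to the implicit and FSAI preconditioners discussed above, the sketch below shows the simplest explicit option, a Jacobi (diagonal) preconditioner inside conjugate gradients. The tridiagonal test matrix is a made-up stand-in for a small stiffness matrix, not one of the authors' industrial problems.

```python
# preconditioned conjugate gradients for SPD systems, Jacobi preconditioner M = diag(A)
def pcg(A, b, tol=1e-10, maxit=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual for x = 0
    Minv = [1.0 / A[i][i] for i in range(n)]   # applying M^-1 is just a scaling
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# SPD test matrix: a 1-D stiffness-like tridiagonal system (illustrative values)
n = 8
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 2.0 + i * 0.1
    if i > 0:
        A[i][i - 1] = A[i - 1][i] = -1.0
b = [1.0] * n
x = pcg(A, b)
```

Every operation here (diagonal scaling, matrix-vector product, vector updates) parallelizes trivially, which is exactly the attraction of explicit preconditioners; the price, as the abstract notes, is weaker preconditioning quality than implicit factorizations.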

  5. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.

    In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.

  6. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Science.gov (United States)

    Castaings, W.; Dartus, D.; Le Dimet, F.-X.; Saulnier, G.-M.

    2009-04-01

    Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs. In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case. It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation. For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted. Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
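The forward-sensitivity idea in this record can be illustrated with finite differences on a deliberately simplified discharge formula. The kinematic-wave-style expression and all parameter values below are invented for illustration; note also that this brute-force approach costs one extra model run per parameter, which is precisely the cost the adjoint method avoids.

```python
# forward finite-difference sensitivities of a toy peak-discharge model
def peak_discharge(theta):
    n_rough, k_infil, i_rain = theta
    # kinematic-wave-style scaling: (effective rainfall)^(5/3) / roughness
    return max(i_rain - k_infil, 0.0) ** (5.0 / 3.0) / n_rough

theta0 = [0.05, 5.0, 20.0]   # roughness, infiltration rate, rainfall intensity (hypothetical)
q0 = peak_discharge(theta0)
eps = 1e-6
sens = []
for j in range(len(theta0)):
    th = theta0[:]
    h = eps * max(1.0, abs(theta0[j]))  # relative step for well-scaled differences
    th[j] += h
    sens.append((peak_discharge(th) - q0) / h)  # one extra model run per parameter
```

The signs come out as hydrological intuition suggests: higher roughness or infiltration lowers the peak, while higher rainfall intensity raises it.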

  7. Sensitivity Analysis on LOCCW of Westinghouse typed Reactors Considering WOG2000 RCP Seal Leakage Model

    Energy Technology Data Exchange (ETDEWEB)

    Na, Jang-Hwan; Jeon, Ho-Jun; Hwang, Seok-Won [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this paper, we focus on risk insights for Westinghouse-type reactors. We identified that Reactor Coolant Pump (RCP) seal integrity is the most important contributor to Core Damage Frequency (CDF). When we adopted the latest technical report, WCAP-15603 (Rev. 1-A), 'WOG2000 RCP Seal Leakage Model for Westinghouse PWRs', in place of the old version, RCP seal integrity became even more important for Westinghouse-type reactors. After the Fukushima accidents, Korea Hydro and Nuclear Power (KHNP) decided to develop Low Power and Shutdown (LPSD) Probabilistic Safety Assessment (PSA) models and to upgrade the full power PSA models of all operating Nuclear Power Plants (NPPs). In upgrading the full power PSA models, we have tried to standardize the methodology of CCF (Common Cause Failure) and HRA (Human Reliability Analysis), which are the most influential factors in the risk measures of NPPs. We have also reviewed and reflected the latest operating experience, reliability data sources and technical methods to improve the quality of the PSA models. KHNP operates various types of reactors: Optimized Pressurized Reactor (OPR) 1000, CANDU, Framatome and Westinghouse. One of the most challenging missions is therefore to keep the balance of risk contributors across all types of reactors. This paper presents the method of the new RCP seal leakage model and the sensitivity analysis results from applying the detailed method to PSA models of Westinghouse-type reference reactors. To perform the sensitivity analysis on LOCCW of the reference Westinghouse-type reactors, we reviewed the WOG2000 RCP seal leakage model and developed a detailed event tree of LOCCW considering all scenarios of RCP seal failures. We also performed HRA based on the T/H analysis, using the leakage rates for each scenario. We found that HRA was a sensitive contributor to CDF, and the RCP seal failure scenario with a 182 gpm leakage rate was estimated to be the most important scenario.

  8. Robust Stability Clearance of Flight Control Law Based on Global Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Liuli Ou

    2014-01-01

    Full Text Available To validate the robust stability of the flight control system of a hypersonic flight vehicle, which suffers from a large number of parametric uncertainties, a new clearance framework based on structural singular value (μ) theory and global uncertainty sensitivity analysis (SA) is proposed. In this framework, SA serves as a preprocess of the uncertain model to be analysed, helping engineers determine which uncertainties affect the stability of the closed-loop system only slightly. By ignoring these unimportant uncertainties, the calculation of μ can be simplified. Instead of analysing the effect of uncertainties on μ, which involves solving optimal problems repeatedly, a simpler stability analysis function that represents the effect of uncertainties on the closed-loop poles is proposed. Based on this stability analysis function, Sobol's method, the most widely used global SA method, is extended and applied to the new clearance framework due to its suitability for systems with strong nonlinearity and input factors varying over large intervals, as well as input factors subject to random distributions. In this method, the sensitivity indices can be estimated via Monte Carlo simulation conveniently. An example is given to illustrate the efficiency of the proposed method.
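Sobol's method, as invoked above, attributes shares of the output variance to each input. The sketch below is a minimal Monte Carlo ("pick-freeze") estimator of first-order Sobol indices on a made-up additive test function, not the paper's stability analysis function; for f(x) = 4·x1 + x2 with independent uniform inputs, the exact indices are 16/17 and 1/17.

```python
import random

def sobol_first_order(f, d, n=20000, seed=1):
    """First-order Sobol indices via the pick-freeze Monte Carlo estimator."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [f(x) for x in A]
    yB = [f(x) for x in B]
    mean = sum(yA + yB) / (2 * n)
    var = sum((y - mean) ** 2 for y in yA + yB) / (2 * n)
    S = []
    for i in range(d):
        # C_i: rows of B with column i "frozen" to the values from A
        yC = [f(B[k][:i] + [A[k][i]] + B[k][i + 1:]) for k in range(n)]
        S.append((sum(ya * yc for ya, yc in zip(yA, yC)) / n - mean ** 2) / var)
    return S

S = sobol_first_order(lambda x: 4 * x[0] + x[1], d=2)
```

The cost is n·(d + 2) model evaluations, which is why the paper's cheaper stability analysis function (replacing repeated μ computations) matters for making the approach practical.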

  9. Sensitivity- and effort-gain analysis: multilead ECG electrode array selection for activation time imaging.

    Science.gov (United States)

    Hintermüller, Christoph; Seger, Michael; Pfeifer, Bernhard; Fischer, Gerald; Modre, Robert; Tilg, Bernhard

    2006-10-01

    Methods for noninvasive imaging of the electric function of the heart might become a clinical standard procedure in the next few years. Thus, the overall procedure has to meet clinical requirements such as easy and fast application. In this paper, we propose a new electrode array which improves the resolution of methods for activation time imaging while considering clinical constraints such as ease of application and compatibility with routine leads. For identifying the body-surface regions where the body surface potential (BSP) is most sensitive to changes in transmembrane potential (TMP), a virtual array method was used to compute local linear dependency (LLD) maps. The virtual array method computes a measure of the LLD at every point on the body surface. The most suitable number and position of the electrodes within the sensitive body-surface regions were selected by constructing effort-gain (EG) plots. Such a plot depicts the relative attainable rank of the leadfield matrix in relation to the increase in the number of electrodes required to build the electrode array. The attainable rank itself was computed by a detector criterion. Such a criterion estimates the maximum number of source-space eigenvectors not covered by noise when mapped to the electrode space by the leadfield matrix and recorded by a detector. From the sensitivity maps, we found that the BSP is most sensitive to changes in TMP on the upper left frontal and dorsal body surface. These sensitive regions are covered best by an electrode array consisting of two L-shaped parts of approximately 30 cm x 30 cm and approximately 20 cm x 20 cm. The EG analysis revealed that the array that best meets clinical requirements while improving the resolution of activation time imaging consists of 125 electrodes with a regular horizontal and vertical spacing of 2-3 cm.

  10. Computation and analysis of time-dependent sensitivities in Generalized Mass Action systems.

    Science.gov (United States)

    Schwacke, John H; Voit, Eberhard O

    2005-09-07

    Understanding biochemical system dynamics is becoming increasingly important for insights into the functioning of organisms and for biotechnological manipulations, and additional techniques and methods are needed to facilitate investigations of dynamical properties of systems. Extensions to the method of Ingalls and Sauro, addressing time-dependent sensitivity analysis, provide a new tool for executing such investigations. We present here the results of sample analyses using time-dependent sensitivities for three model systems taken from the literature, namely an anaerobic fermentation pathway in yeast, a negative feedback oscillator modeling cell-cycle phenomena, and the Mitogen Activated Protein (MAP) kinase cascade. The power of time-dependent sensitivities is particularly evident in the case of the MAPK cascade. In this example it is possible to identify the emergence of a concentration of MAPKK that provides the best response with respect to rapid and efficient activation of the cascade, while over- and under-expression of MAPKK relative to this concentration have qualitatively different effects on the transient response of the cascade. Also of interest is the quite general observation that phase-plane representations of sensitivities in oscillating systems provide insights into the manner with which perturbations in the envelope of the oscillation result from small changes in initial concentrations of components of the oscillator. In addition to these applied analyses, we present an algorithm for the efficient computation of time-dependent sensitivities for Generalized Mass Action (GMA) systems, the most general of the canonical system representations of Biochemical Systems Theory (BST). The algorithm is shown to be comparable to, or better than, other methods of solution, as exemplified with three biochemical systems taken from the literature.
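Time-dependent sensitivities of the kind computed here come from integrating sensitivity equations alongside the model. The sketch below does this for a one-variable toy system, a single production-degradation ODE rather than a GMA model from the paper, and cross-checks the result against a finite difference in the parameter.

```python
# model: dx/dt = p - k*x, x(0) = 0
# forward sensitivity s = dx/dk obeys: ds/dt = -x - k*s, s(0) = 0
def integrate(p, k, t_end=1.0, dt=1e-4):
    x, s, t = 0.0, 0.0, 0.0
    while t < t_end - 1e-12:
        # explicit Euler on the coupled (state, sensitivity) system,
        # using the old x in the sensitivity update
        x, s = x + dt * (p - k * x), s + dt * (-x - k * s)
        t += dt
    return x, s

p, k = 1.0, 2.0
x1, s_forward = integrate(p, k)

# cross-check: finite difference in k on the same discretized trajectory
eps = 1e-6
x_hi, _ = integrate(p, k + eps)
s_fd = (x_hi - x1) / eps
```

The sensitivity is negative, as expected (a faster degradation rate lowers the state), and the two estimates agree closely because differentiating the Euler recursion in k reproduces exactly the discretized sensitivity equation.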

  11. Expansive soil-structure interaction and its sensitive analysis

    Institute of Scientific and Technical Information of China (English)

    XIAO Hong-bin; ZHANG Chun-shun; HE Jie; FAN Zhen-hui

    2007-01-01

    Several groups of direct shear tests on Nanning expansive soil samples were carried out with an improved direct shear apparatus. The results on the characteristics of the ultimate shear stress and residual shear stress at the expansive soil-structure interface are as follows: a linear relation can approximately reflect the changes between both shear stresses and the three factors of vertical load, water content and dry density, with different degrees for each; increasing the vertical load from 25 kPa to 100 kPa (up by 300%) causes an average increase of ultimate shear stress from 58% (for samples with 1.61 g/cm3) to 80% (for samples with 1.76 g/cm3), and an average increase of about 180% for the residual shear stress; increasing the water content from 14.1% to 20.8% (up by 47.5%) causes an average decrease of the ultimate shear stress from 40% (for samples at 25 kPa) to 80% (for samples at 100 kPa), and an average decrease from 25% (for samples at 25 kPa) to 30% (for samples at 100 kPa) for the residual shear stress; increasing the dry density from 1.61 g/cm3 to 1.76 g/cm3 (up by 9.3%) causes an average increase of ultimate shear stress from 92% (for samples at 25 kPa) to 138% (for samples at 100 kPa), and an average increase of 4% for the residual shear stress. A sensitivity analysis was further performed to explain the reasons for the differences in the two shear stresses induced by the three factors.

  12. Sorption of redox-sensitive elements: critical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Strickert, R.G.

    1980-12-01

    The redox-sensitive elements (Tc, U, Np, Pu) discussed in this report are of interest to nuclear waste management due to their long-lived isotopes which have a potential radiotoxic effect on man. In their lower oxidation states these elements have been shown to be highly adsorbed by geologic materials occurring under reducing conditions. Experimental research conducted in recent years, especially through the Waste Isolation Safety Assessment Program (WISAP) and Waste/Rock Interaction Technology (WRIT) program, has provided extensive information on the mechanisms of retardation. In general, ion-exchange probably plays a minor role in the sorption behavior of cations of the above three actinide elements. Formation of anionic complexes of the oxidized states with common ligands (OH⁻, CO₃²⁻) is expected to reduce adsorption by ion exchange further. Pertechnetate also exhibits little ion-exchange sorption by geologic media. In the reduced (IV) state, all of the elements are highly charged and it appears that they form a very insoluble compound (oxide, hydroxide, etc.) or undergo coprecipitation or are incorporated into minerals. The exact nature of the insoluble compounds and the effect of temperature, pH, pe, other chemical species, and other parameters are currently being investigated. Oxidation states other than Tc(IV,VII), U(IV,VI), Np(IV,V), and Pu(IV,V) are probably not important for the geologic repository environment expected, but should be considered especially when extreme conditions exist (radiation, temperature, etc.). Various experimental techniques such as oxidation-state analysis of tracer-level isotopes, redox potential measurement and control, pH measurement, and solid phase identification have been used to categorize the behavior of the various valence states.

  13. Sensitivity analysis for a type of statically stable sailcrafts

    Institute of Scientific and Technical Information of China (English)

    Zheng-Xue Li; Jun-Feng Li; He-Xi Baoyin

    2012-01-01

    Two types of sensitivities are proposed for statically stable sailcrafts. One type is the sensitivities of the solar-radiation-pressure force with respect to the position of the center of mass, and the other type is the sensitivities of the solar-radiation-pressure force with respect to attitude. The two types of sensitivities represent how the solar-radiation-pressure force changes with the position of the mass center and the attitude. Sailcrafts with larger sensitivities undergo larger errors of the solar-radiation-pressure force, leading to larger orbit errors, as demonstrated by simulation. Then, as a case study, detailed formulas are derived to calculate the sensitivities for sailcrafts with four triangular sails. According to these formulas, in order to reduce both types of sensitivities, the angle between opposed sails should not be too large, and the center of mass should be as close to the axis of symmetry of the four sails as possible and as far away from the center of pressure of the sailcraft as possible.

  14. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    Science.gov (United States)

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, the partial rank correlation coefficient, Sobol's method, and the weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
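Of the global methods listed above, the partial rank correlation coefficient (PRCC) is easy to sketch from scratch: rank-transform everything, regress out the other parameters, and correlate the residuals. The two-parameter model below is an invented monotone test function, not an SBML model, and this minimal version controls for a single confounding parameter.

```python
import random

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = float(pos + 1)
    return r

def residuals(y, x):
    # residuals of the least-squares fit y ~ a + b*x
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

def prcc(x, y, confounder):
    # correlate what is left of rank(x) and rank(y) after removing rank(confounder)
    rx, ry, rc = ranks(x), ranks(y), ranks(confounder)
    return corr(residuals(rx, rc), residuals(ry, rc))

random.seed(3)
n = 1000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y = [5 * a - 3 * b for a, b in zip(x1, x2)]  # toy monotone model
p1 = prcc(x1, y, x2)
p2 = prcc(x2, y, x1)
```

Because the test function is monotone increasing in x1 and decreasing in x2, the PRCCs come out strongly positive and strongly negative respectively, which is the signature this method is used to detect.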

  15. SHORTCUT METHODS FOR SIMPLEX-BASED SENSITIVITY ANALYSIS OF LINEAR PROGRAMMING AND RELATED SOFTWARE ISSUE

    Directory of Open Access Journals (Sweden)

    Muwafaq M. Alkubaisi

    2017-03-01

    Full Text Available This paper presents an overview of theoretical and methodological issues in simplex-based sensitivity analysis (SA). The paper focuses on developing shortcut methods to perform Linear Programming (L.P.) sensitivity analysis manually, in particular for dual prices, their meaning, and changes in the parameters of the L.P. model. Shortcut methods for conducting sensitivity analysis are suggested. To perform sensitivity analysis in real life, one needs computer packages (software) to produce the sensitivity analysis report with higher accuracy and in less time. Some of these computer packages are very professional, but, unfortunately, some other packages suffer from logical errors in the programming of sensitivity analysis.
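A dual price of the kind discussed above can be exhibited numerically by re-solving a tiny L.P. with one right-hand side increased by a unit: the improvement in the optimum is the dual price of that (binding) constraint. The two-variable problem below is a textbook-style product-mix example, not taken from the paper, and is solved by brute-force vertex enumeration rather than the simplex method.

```python
from itertools import combinations

# maximize 3x + 5y subject to a*x + b*y <= c for each (a, b, c), plus x, y >= 0
def solve_lp(cons):
    all_cons = cons + [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]  # x >= 0, y >= 0
    best = None
    # the optimum of a bounded 2-D LP lies at an intersection of two constraints
    for (a1, b1, c1), (a2, b2, c2) in combinations(all_cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel constraint boundaries
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in all_cons):
            z = 3 * x + 5 * y
            if best is None or z > best:
                best = z
    return best

cons = [(1.0, 0.0, 4.0), (0.0, 2.0, 12.0), (3.0, 2.0, 18.0)]
z0 = solve_lp(cons)   # optimal value of the base problem
# dual price of the third constraint: marginal value of one extra RHS unit
z1 = solve_lp([(1.0, 0.0, 4.0), (0.0, 2.0, 12.0), (3.0, 2.0, 19.0)])
dual_price_3 = z1 - z0
```

This "perturb and re-solve" view is exactly what a simplex sensitivity report summarizes without re-solving: within the allowable RHS range, the optimum changes linearly at the rate given by the dual price.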

  16. Analysis of Sea Ice Cover Sensitivity in Global Climate Model

    Directory of Open Access Journals (Sweden)

    V. P. Parhomenko

    2014-01-01

    Full Text Available The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice and the long-term variability of the model ice cover, to study its sensitivity to some model parameters, and to characterize atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness within the limits of 0.05-0.1 m, although at individual spatial points the thickness decreases by up to 0.5 m. An analysis of the seasonally changing average ice thickness, with the albedo of clear sea ice and of snow decreased by 0.05 relative to the basic variant, shows an ice thickness reduction in the range of 0.2-0.6 m, with the maximum change falling in the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean there was a reduction of ice thickness of up to 1 m, although there is also an area of some increase of the ice layer, mostly in a range up to 0.2 m (Beaufort Sea). The 0.05 decrease of sea ice snow albedo leads to a reduction of average ice thickness by approximately 0.2 m, and this value depends only slightly on the season. In a further experiment, the influence of ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of the sea ice by 2 W/sq. m in comparison with the base variant. The analysis demonstrates that the average ice thickness is reduced by 0.2-0.35 m, with small seasonal changes of this value. The numerical experiments show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters.

  17. Terricolous alpine lichens are sensitive to both load and concentration of applied nitrogen and have potential as bioindicators of nitrogen deposition

    Energy Technology Data Exchange (ETDEWEB)

    Britton, Andrea J., E-mail: a.britton@macaulay.ac.u [Macaulay Land Use Research Institute, Craigiebuckler, Aberdeen AB15 8QH (United Kingdom); Fisher, Julia M. [Macaulay Land Use Research Institute, Craigiebuckler, Aberdeen AB15 8QH (United Kingdom)

    2010-05-15

    The influence of applied nitrogen (N) concentration and load on thallus chemistry and growth of five terricolous alpine lichen species was investigated in a three-month N addition study. Thallus N content was influenced by both concentration and load, but the relative importance of these parameters varied between species. Growth was most affected by concentration. Thresholds for effects observed in this study support a low critical load for terricolous lichen communities (<7.5 kg N ha⁻¹ y⁻¹) and suggest that concentrations of N currently encountered in UK cloudwater may have detrimental effects on the growth of sensitive species. The significance of N concentration effects on sensitive species also highlights the need to avoid artificially high concentrations when designing N addition experiments. Given the sensitivity of some species to extremely low loads and concentrations of N, we suggest that terricolous lichens have potential as indicators of deposition and impact in northern and alpine ecosystems. - Terricolous lichen species' N content responds to both applied N concentration and load, while applied N concentration has the greatest effects on growth.

  18. PROBABILISTIC SENSITIVITY AND UNCERTAINTY ANALYSIS WORKSHOP SUMMARY REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R

    2008-06-25

    Stochastic or probabilistic modeling approaches are being applied more frequently in the United States and globally to quantify uncertainty and enhance understanding of model response in performance assessments for disposal of radioactive waste. This increased use has resulted in global interest in sharing results of research and applied studies that have been completed to date. This technical report reflects the results of a workshop that was held to share results of research and applied work related to performance assessments conducted at United States Department of Energy sites. Key findings of this research and applied work are discussed and recommendations for future activities are provided.

  19. Air Gun Launch Simulation Modeling and Finite Element Model Sensitivity Analysis

    Science.gov (United States)

    2006-01-01

    Air Gun Launch Simulation Modeling and Finite Element Model Sensitivity Analysis, by Mostafiz R. Chowdhury and Ala Tabiei, ARL-TR-3703, Adelphi, MD 20783-1145, January 2006.

  20. Computational aspects of sensitivity calculations in linear transient structural analysis

    Science.gov (United States)

    Greene, W. H.; Haftka, R. T.

    1991-01-01

    The calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, and transient response problems is studied. Several existing sensitivity calculation methods and two new methods are compared for three example problems. Approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. This was found to result in poor convergence of stress sensitivities in several cases. Two semianalytical techniques are developed to overcome this poor convergence. Both new methods result in very good convergence of the stress sensitivities; the computational cost is much less than would result if the vibration modes were recalculated and then used in an overall finite difference method.
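The semianalytical idea underlying such methods (differentiate the stiffness matrix analytically, then reuse the already-assembled system to solve for the response derivative, instead of re-analyzing a perturbed model) can be sketched on a two-spring static toy model. The spring constants and load below are invented, and Cramer's rule stands in for a real factorization.

```python
# two springs in series: K(k1, k2) u = f, with a unit load on the free end
def solve2(K, f):
    # direct 2x2 solve via Cramer's rule (stand-in for a factorized solver)
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(K[1][1] * f[0] - K[0][1] * f[1]) / det,
            (K[0][0] * f[1] - K[1][0] * f[0]) / det]

def stiffness(k1, k2):
    return [[k1 + k2, -k2], [-k2, k2]]

k1, k2, f = 100.0, 50.0, [0.0, 1.0]
u = solve2(stiffness(k1, k2), f)

# semianalytical sensitivity: K du/dk2 = -(dK/dk2) u, reusing the same K
dK = [[1.0, -1.0], [-1.0, 1.0]]   # exact derivative of K with respect to k2
rhs = [-(dK[0][0] * u[0] + dK[0][1] * u[1]),
       -(dK[1][0] * u[0] + dK[1][1] * u[1])]
du_semi = solve2(stiffness(k1, k2), rhs)

# overall finite difference for comparison: one full re-analysis per parameter
eps = 1e-5
u_hi = solve2(stiffness(k1, k2 + eps), f)
du_fd = [(a - b) / eps for a, b in zip(u_hi, u)]
```

For this system u = [1/k1, 1/k1 + 1/k2], so the exact sensitivity of the end displacement to k2 is -1/k2²; the semianalytical result hits it to machine precision while the finite difference carries the usual truncation error.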