WorldWideScience

Sample records for surface sensitive analysis

  1. Sensitivity analysis of the surface water-groundwater interaction for the sandy area of the Netherlands

    NARCIS (Netherlands)

    Gomez del Campo, E.; Jousma, G.; Massop, H.T.L.

    1993-01-01

    The "Sensitivity Analysis of the Surface Water-Groundwater Interaction for the Sandy Area of the Netherlands" was carried out in the framework of a bilateral research project in support of the implementation of a nationwide geohydrological information system (REGIS) in the Netherlands. This

  2. A Monte Carlo/response surface strategy for sensitivity analysis: application to a dynamic model of vegetative plant growth

    Science.gov (United States)

    Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D. Jr. (Principal Investigator)

    1989-01-01

    We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.
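The screening step in this strategy, numerical estimation of linear sensitivity coefficients, can be sketched with central finite differences. The two-parameter toy model below is hypothetical (it stands in for the soybean growth model; normalizing to relative sensitivities is one common convention, not necessarily the one used in the record):

```python
import numpy as np

def model(params):
    # Hypothetical stand-in for a dynamic growth model output.
    a, b = params
    return a * np.exp(0.1 * b) + a * b

def linear_sensitivity(model, params, rel_step=1e-4):
    """Estimate local sensitivity coefficients dY/dp_i by central
    finite differences, returned as relative (normalized) sensitivities
    (p_i / Y) * dY/dp_i."""
    params = np.asarray(params, dtype=float)
    y0 = model(params)
    sens = np.empty_like(params)
    for i, p in enumerate(params):
        h = rel_step * max(abs(p), 1e-12)
        up, down = params.copy(), params.copy()
        up[i] += h
        down[i] -= h
        sens[i] = (model(up) - model(down)) / (2 * h)
    return sens * params / y0

s = linear_sensitivity(model, [2.0, 3.0])
```

Parameters with small relative sensitivities would be screened out before the more expensive Monte Carlo/response-surface stage.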

  3. 3-D description of fracture surfaces and stress-sensitivity analysis for naturally fractured reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, S.Q.; Jioa, D.; Meng, Y.F.; Fan, Y.

    1997-08-01

    Three kinds of reservoir cores (limestone, sandstone, and shale with natural fractures) were used to study the effect of the morphology of fracture surfaces on stress sensitivity. The cores, obtained from reservoirs at depths of 2170 to 2300 m, have fractures which are mated on a large scale but unmated on a fine scale. A specially designed photoelectric scanner with a computer was used to describe the topography of the fracture surfaces. Then, a theoretical analysis of the fracture closure was carried out based on the fracture topography generated. The scanning results show that the asperity heights are approximately normally distributed for all three types of samples. For the tested samples, the fracture closure predicted by elastic-contact theory differs from the laboratory measurements because plastic deformation of the asperities plays an important role in the tested range of normal stresses. In this work, the traditionally used elastic-contact theory has been modified to better predict the stress sensitivity of reservoir fractures. Analysis shows that the standard deviation of the probability density function of the asperity distribution has a great effect on the fracture closure rate.

  4. Mitigating the surface urban heat island: Mechanism study and sensitivity analysis

    Science.gov (United States)

    Meng, Chunlei

    2017-08-01

    In a surface urban heat island (SUHI), the urban land surface temperature (LST) is usually higher than that of the surrounding rural areas due to human activities and surface characteristics. Because a SUHI has many adverse impacts on the urban environment and human health, SUHI mitigation strategies are very important. This paper investigates the mechanism of a SUHI based on the basic physical laws that control its formation; five mitigation strategies are proposed, namely: sprinkling and watering; paving with a pervious surface; reducing the anthropogenic heat (AH) release; using a "white roof"; and increasing the fractional vegetation cover or leaf area index (LAI). To quantify the effect of these mitigation strategies, 26 sets of experiments were designed and implemented by running the integrated urban land model (IUM). The results of the sensitivity analysis indicate that sprinkling and watering is an effective measure for mitigating a SUHI for an entire day. Decreasing the AH release is also useful for both night- and daytime SUHI mitigation; however, the cooling extent is proportional to the diurnal cycle of AH. Increasing the albedo can reduce the LST in the daytime, especially when the solar radiation is significant; the cooling extent is approximately proportional to the diurnal cycle of the net radiation. Increasing the pervious surface percentage can mitigate the SUHI especially in the daytime. Increasing the fractional vegetation cover can mitigate the SUHI in the daytime but may aggravate the SUHI at night.

  5. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Laxemar

    International Nuclear Information System (INIS)

    Aneljung, Maria; Sassner, Mona; Gustafsson, Lars-Goeran

    2007-11-01

    between measured and calculated surface water discharges, but the model generally underestimates the total runoff from the area. The model also overestimates the groundwater levels, and the modelled groundwater level amplitudes are too small in many boreholes. A number of likely or potential reasons for these deviations can be identified: The surface stream network description in the model is incomplete. This implies that too little overland water is drained from the area by the streams, which creates ponded areas in the model that do not exist in reality. These areas are characterized by large evaporation and infiltration, contributing to groundwater recharge and reducing transpiration from the groundwater table, in turn creating high and relatively stable groundwater levels compared to those measured at the site. In order to improve the agreement between measured and modelled surface water discharges, the evapotranspiration was reduced in the model; in effect, this implied a reduction of the potential evapotranspiration. This probably caused a larger groundwater recharge and less transpiration during summer, thereby reducing the variations in the modelled groundwater levels. If the MIKE 11 stream network is updated, the potential evapotranspiration could be increased again, such that the modelling of groundwater dynamics is improved. The bottom boundary condition and the hydraulic conductivity of the bedrock may have a large effect on model-calculated near-surface/surface water flows in Laxemar. A sensitivity analysis shows that lowering the hydraulic head at the bottom boundary (located at 150 metres below sea level) lowers the groundwater levels in the Quaternary deposits, but also implies smaller surface water discharges. Lowering the hydraulic conductivity of the bedrock would increase groundwater flows to Quaternary deposits in groundwater discharge areas, which raises groundwater levels and reduces fluctuation amplitudes. 
An alternative model approach, using a

  6. Fast and sensitive trace analysis of malachite green using a surface-enhanced Raman microfluidic sensor.

    Science.gov (United States)

    Lee, Sangyeop; Choi, Junghyun; Chen, Lingxin; Park, Byungchoon; Kyong, Jin Burm; Seong, Gi Hun; Choo, Jaebum; Lee, Yeonjung; Shin, Kyung-Hoon; Lee, Eun Kyu; Joo, Sang-Woo; Lee, Kyeong-Hee

    2007-05-08

    A rapid and highly sensitive trace analysis technique for determining malachite green (MG) in a polydimethylsiloxane (PDMS) microfluidic sensor was investigated using surface-enhanced Raman spectroscopy (SERS). A zigzag-shaped PDMS microfluidic channel was fabricated for efficient mixing between MG analytes and aggregated silver colloids. Under the optimal flow velocity, MG molecules were effectively adsorbed onto silver nanoparticles while flowing along the upper and lower zigzag-shaped PDMS channel. A quantitative analysis of MG was performed based on the measured peak height at 1615 cm⁻¹ in its SERS spectrum. The limit of detection, using the SERS microfluidic sensor, was found to be below the 1-2 ppb level; this low detection limit is comparable to the result of the LC-Mass detection method. In the present study, we introduce a new conceptual detection technology, using a SERS microfluidic sensor, for the highly sensitive trace analysis of MG in water.
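The quantitative step described here (peak height versus concentration, with a ppb-level detection limit) amounts to a linear calibration. A minimal sketch with entirely hypothetical calibration data and the common 3σ/slope detection-limit criterion (the record does not state which LOD criterion was used):

```python
import numpy as np

# Hypothetical calibration data: MG concentration (ppb) vs. SERS peak
# height at 1615 cm^-1 (arbitrary units). Values are illustrative only.
conc = np.array([0.0, 2.0, 5.0, 10.0, 20.0])
peak = np.array([1.5, 9.8, 22.1, 42.3, 83.0])

# Linear least-squares calibration: peak = slope * conc + intercept
slope, intercept = np.polyfit(conc, peak, 1)

# Limit of detection from the 3-sigma criterion, assuming a known
# standard deviation of the blank signal (hypothetical value).
sigma_blank = 1.2
lod = 3 * sigma_blank / slope

def quantify(peak_height):
    """Invert the calibration to estimate concentration in ppb."""
    return (peak_height - intercept) / slope
```

With these made-up numbers the LOD lands near 1 ppb, the same order as the record reports for the real sensor.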

  7. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    Science.gov (United States)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

    Forecasting of urban weather and climate is of great importance as our cities become more populated; the combined effects of global warming and local land-use changes make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso- and global-scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurement is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. To address these issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol' global variance decomposition method. The analysis showed that parameters related to road, roof, and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to simulations using the default parameter set.
The calibrated parameters from this optimization experiment can be used for further model

  8. On understanding the relationship between structure in the potential surface and observables in classical dynamics: A functional sensitivity analysis approach

    International Nuclear Information System (INIS)

    Judson, R.S.; Rabitz, H.

    1987-01-01

    The relationship between structure in the potential surface and classical mechanical observables is examined by means of functional sensitivity analysis. Functional sensitivities provide maps of the potential surface, highlighting those regions that play the greatest role in determining the behavior of observables. A set of differential equations for the sensitivities of the trajectory components is derived. These are then solved using a Green's function method. It is found that the sensitivities become singular at the trajectory turning points, with the singularities going as η^(-3/2), where η is the distance from the nearest turning point. The sensitivities are zero outside of the energetically and dynamically allowed region of phase space. A second set of equations is derived from which the sensitivities of observables can be directly calculated. An adjoint Green's function technique is employed, providing an efficient method for numerically calculating these quantities. Sensitivity maps are presented for a simple collinear atom-diatom inelastic scattering problem and for two Hénon-Heiles type Hamiltonians modeling

  9. Sensitivity analysis of the surface water-groundwater interaction for the sandy area of the Netherlands

    OpenAIRE

    Gomez del Campo, E.; Jousma, G.; Massop, H.T.L.

    1993-01-01

    The "Sensitivity Analysis of the Surface Water-Groundwater Interaction for the Sandy Area of the Netherlands" was carried out in the framework of a bilateral research project in support of the implementation of a nationwide geohydrological information system (REGIS) in the Netherlands. This project, conducted in cooperation between the TNO Institute for Applied Scientific Research (IGG-TNO) and the Winand Staring Centre for Integrated Land, Soil and Water Research (SC-DLO), is aimed at defin...

  10. Sensitivity analysis for near-surface disposal in argillaceous media using NAMMU-HYROCOIN Level 3-Test case 1

    International Nuclear Information System (INIS)

    Miller, D.R.; Paige, R.W.

    1988-07-01

    HYDROCOIN is an international project for comparing groundwater flow models and modelling strategies. Level 3 of the project concerns the application of groundwater flow models to repository performance assessment, with emphasis on the treatment of sensitivity and uncertainty in models and data. Level 3, test case 1 concerns sensitivity analysis of the groundwater flow around a radioactive waste repository situated in a near-surface argillaceous formation. Work on this test case has been carried out by Harwell and will be reported in full in the near future. This report presents the results obtained using the computer program NAMMU. (author)

  11. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae [NESS, Daejeon (Korea, Republic of)]

    2016-10-15

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different nuclear fuel shapes. Criticality analysis was performed using MCNP5, a well-known general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. In the sensitivity analysis for simple fuel shapes, the criticality is proportional to the surface area; for fuel assembly types, however, it is not proportional to the surface area. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but for intervals greater than 8 mm the trend reverses and the criticality decreases as the interval grows. As a result, no single rule could be established that holds in common for all cases. A sensitivity analysis of criticality is therefore required whenever the subject of analysis changes.

  12. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    International Nuclear Information System (INIS)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae

    2016-01-01

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different nuclear fuel shapes. Criticality analysis was performed using MCNP5, a well-known general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. In the sensitivity analysis for simple fuel shapes, the criticality is proportional to the surface area; for fuel assembly types, however, it is not proportional to the surface area. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but for intervals greater than 8 mm the trend reverses and the criticality decreases as the interval grows. As a result, no single rule could be established that holds in common for all cases. A sensitivity analysis of criticality is therefore required whenever the subject of analysis changes.

  13. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Aneljung, Maria; Sassner, Mona; Gustafsson, Lars-Goeran (DHI Sverige AB, Lilla Bommen 1, SE-411 04 Goeteborg (Sweden))

    2007-11-15

    between measured and calculated surface water discharges, but the model generally underestimates the total runoff from the area. The model also overestimates the groundwater levels, and the modelled groundwater level amplitudes are too small in many boreholes. A number of likely or potential reasons for these deviations can be identified: The surface stream network description in the model is incomplete. This implies that too little overland water is drained from the area by the streams, which creates ponded areas in the model that do not exist in reality. These areas are characterized by large evaporation and infiltration, contributing to groundwater recharge and reducing transpiration from the groundwater table, in turn creating high and relatively stable groundwater levels compared to those measured at the site. In order to improve the agreement between measured and modelled surface water discharges, the evapotranspiration was reduced in the model; in effect, this implied a reduction of the potential evapotranspiration. This probably caused a larger groundwater recharge and less transpiration during summer, thereby reducing the variations in the modelled groundwater levels. If the MIKE 11 stream network is updated, the potential evapotranspiration could be increased again, such that the modelling of groundwater dynamics is improved. The bottom boundary condition and the hydraulic conductivity of the bedrock may have a large effect on model-calculated near-surface/surface water flows in Laxemar. A sensitivity analysis shows that lowering the hydraulic head at the bottom boundary (located at 150 metres below sea level) lowers the groundwater levels in the Quaternary deposits, but also implies smaller surface water discharges. Lowering the hydraulic conductivity of the bedrock would increase groundwater flows to Quaternary deposits in groundwater discharge areas, which raises groundwater levels and reduces fluctuation amplitudes. 
An alternative model approach, using a

  14. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Forsmark

    International Nuclear Information System (INIS)

    Aneljung, Maria; Gustafsson, Lars-Goeran

    2007-04-01

    . Differences in the aquifer refilling process subsequent to dry periods, for example a too slow refill when the groundwater table rises after dry summers. This may be due to local deviations in the applied pF-curves in the unsaturated zone description. Differences in near-surface groundwater elevations. For example, the calculated groundwater level reaches the ground surface during the fall and spring at locations where the measured groundwater depth is just below the ground surface. This may be due to the presence of near-surface high-conductive layers. A sensitivity analysis has been made on calibration parameters. For parameters that have 'global' effects, such as the hydraulic conductivity in the saturated zone, the analysis was performed using the 'full' model. For parameters with more local effects, such as parameters influencing the evapotranspiration and the net recharge, the model was scaled down to a column model, representing two different type areas. The most important conclusions that can be drawn from the sensitivity analysis are the following: The results indicate that the horizontal hydraulic conductivity generally should be increased at topographic highs, and reduced at local depressions in the topography. The results indicate that no changes should be made to the vertical hydraulic conductivity at locations where the horizontal conductivity has been increased, and that the vertical conductivity generally should be decreased where the horizontal conductivity has been decreased. The vegetation parameters that have the largest influence on the total groundwater recharge are the root mass distribution and the crop coefficient. The unsaturated zone parameter that has the largest influence on the total groundwater recharge is the effective porosity given in the pF-curve. In addition, the shape of the pF-curve above the water content at field capacity is also of great importance. 
The general conclusion is that the surrounding conditions have large effects on water

  15. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Aneljung, Maria; Gustafsson, Lars-Goeran [DHI Water and Environment AB, Goeteborg (Sweden)]

    2007-04-15

    . Differences in the aquifer refilling process subsequent to dry periods, for example a too slow refill when the groundwater table rises after dry summers. This may be due to local deviations in the applied pF-curves in the unsaturated zone description. Differences in near-surface groundwater elevations. For example, the calculated groundwater level reaches the ground surface during the fall and spring at locations where the measured groundwater depth is just below the ground surface. This may be due to the presence of near-surface high-conductive layers. A sensitivity analysis has been made on calibration parameters. For parameters that have 'global' effects, such as the hydraulic conductivity in the saturated zone, the analysis was performed using the 'full' model. For parameters with more local effects, such as parameters influencing the evapotranspiration and the net recharge, the model was scaled down to a column model, representing two different type areas. The most important conclusions that can be drawn from the sensitivity analysis are the following: The results indicate that the horizontal hydraulic conductivity generally should be increased at topographic highs, and reduced at local depressions in the topography. The results indicate that no changes should be made to the vertical hydraulic conductivity at locations where the horizontal conductivity has been increased, and that the vertical conductivity generally should be decreased where the horizontal conductivity has been decreased. The vegetation parameters that have the largest influence on the total groundwater recharge are the root mass distribution and the crop coefficient. The unsaturated zone parameter that has the largest influence on the total groundwater recharge is the effective porosity given in the pF-curve. In addition, the shape of the pF-curve above the water content at field capacity is also of great importance. The general conclusion is that the surrounding conditions have

  16. Sensitivity analysis techniques applied to a system of hyperbolic conservation laws

    International Nuclear Information System (INIS)

    Weirs, V. Gregory; Kamm, James R.; Swiler, Laura P.; Tarantola, Stefano; Ratto, Marco; Adams, Brian M.; Rider, William J.; Eldred, Michael S.

    2012-01-01

    Sensitivity analysis comprises techniques to quantify the effects of the input variables on a set of outputs. In particular, sensitivity indices can be used to infer which input parameters most significantly affect the results of a computational model. With continually increasing computing power, sensitivity analysis has become an important technique by which to understand the behavior of large-scale computer simulations. Many sensitivity analysis methods rely on sampling from distributions of the inputs. Such sampling-based methods can be computationally expensive, requiring many evaluations of the simulation; in this case, the Sobol' method provides an easy and accurate way to compute variance-based measures, provided a sufficient number of model evaluations are available. As an alternative, meta-modeling approaches have been devised to approximate the response surface and estimate various measures of sensitivity. In this work, we consider a variety of sensitivity analysis methods, including different sampling strategies, different meta-models, and different ways of evaluating variance-based sensitivity indices. The problem we consider is the 1-D Riemann problem. By a careful choice of inputs, discontinuous solutions are obtained, leading to discontinuous response surfaces; such surfaces can be particularly problematic for meta-modeling approaches. The goal of this study is to compare the estimated sensitivity indices with exact values and to evaluate the convergence of these estimates with increasing sample sizes and under an increasing number of meta-model evaluations. Highlights: Sensitivity analysis techniques for a model shock physics problem are compared. The model problem and the sensitivity analysis problem have exact solutions. Subtle details of the method for computing sensitivity indices can affect the results.
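The variance-based Sobol' indices discussed in this record can be estimated with the standard two-matrix sampling scheme. A minimal sketch on a toy linear model with known variance shares (the Riemann-problem application itself is far more involved):

```python
import numpy as np

def model(x):
    # Toy test function with known variance shares:
    # Y = X1 + 2*X2, with X1, X2 ~ U(0,1) independent.
    return x[:, 0] + 2.0 * x[:, 1]

def sobol_first_order(model, dim, n, rng):
    """Estimate first-order Sobol' indices S_i using two independent
    sample matrices A and B, plus matrices A_B^(i) in which column i of
    A is replaced by column i of B (Saltelli-style estimator)."""
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]), ddof=1)
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        yABi = model(ABi)
        # First-order effect of X_i as a fraction of total variance
        S[i] = np.mean(yB * (yABi - yA)) / var
    return S

rng = np.random.default_rng(0)
S = sobol_first_order(model, dim=2, n=20000, rng=rng)
# Analytically, Var(Y) = 1/12 + 4/12, so S1 = 0.2 and S2 = 0.8.
```

For this additive model the first-order indices sum to one; interactions would show up as a shortfall from one.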

  17. Sensitivity of surface meteorological analyses to observation networks

    Science.gov (United States)

    Tyndall, Daniel Paul

    A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large-domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform-independent application (i.e., it can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight into the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.

  18. Sensitivity analysis of a low-level waste environmental transport code

    International Nuclear Information System (INIS)

    Hiromoto, G.

    1989-01-01

    Results are presented from a sensitivity analysis of a computer code designed to simulate the environmental transport of radionuclides buried at shallow land waste repositories. A sensitivity analysis methodology, based on response surface replacement and statistical sensitivity estimators, was developed to address the relative importance of the input parameters on the model output. A response surface replacement for the model was constructed by stepwise regression, after sampling input vectors from the ranges and distributions of the input variables and running the code to generate the associated output data. Sensitivity estimators were computed using the partial rank correlation coefficients and the standardized rank regression coefficients. The results showed that the techniques employed in this work provide a feasible means to perform a sensitivity analysis of general nonlinear environmental radionuclide transport models. (author)
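The partial rank correlation coefficients used in this record can be computed by rank-transforming inputs and output, regressing out the other inputs, and correlating the residuals. A sketch on synthetic data (the transport code and its parameters are not reproduced here; the three-input model is hypothetical):

```python
import numpy as np

def _rank(v):
    """Rank-transform a 1-D array (1 = smallest). Ties are not handled,
    which is fine for continuous samples."""
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(1, len(v) + 1)
    return r

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y:
    correlate the rank residuals after removing, by least squares, the
    linear influence of the other ranked inputs."""
    n, k = X.shape
    Xr = np.column_stack([_rank(X[:, j]) for j in range(k)])
    yr = _rank(y)
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        beta_x, *_ = np.linalg.lstsq(others, Xr[:, j], rcond=None)
        beta_y, *_ = np.linalg.lstsq(others, yr, rcond=None)
        rx = Xr[:, j] - others @ beta_x
        ry = yr - others @ beta_y
        out[j] = np.corrcoef(rx, ry)[0, 1]
    return out

rng = np.random.default_rng(1)
X = rng.random((500, 3))
# Hypothetical code output: strong effect of X1, weak X2, no effect of X3.
y = 5.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, 500)
coeffs = prcc(X, y)
```

Inputs with PRCC magnitudes near zero (here X3) would be flagged as unimportant for the model output.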

  19. Sensitivity Analysis of the Surface Runoff Coefficient of HiPIMS in Simulating Flood Processes in a Large Basin

    Directory of Open Access Journals (Sweden)

    Yueling Wang

    2018-03-01

    To simulate flood processes at the basin level, the GPU-based High-Performance Integrated Hydrodynamic Modelling System (HiPIMS) is gaining interest as computational capability increases. However, the difficulty of coping with rainfall input to HiPIMS reduces the possibility of acquiring a satisfactory simulation accuracy. The objective of this study is to test the sensitivity of the surface runoff coefficient in the HiPIMS source term in the Misai basin, with an area of 797 km² in south China. To achieve this, the basin was divided into 909,824 grid cells, to each of which a Manning coefficient was assigned based on its land use type interpreted from remote sensing data. A sensitivity analysis was conducted for three typical flood processes under four types of surface runoff coefficients, assumed a priori, upon three error functions. The results demonstrate the crucial role of the surface runoff coefficient in achieving better simulation accuracy and reveal that this coefficient varies with flood scale and is unevenly distributed over the basin.
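The record evaluates simulations against three error functions without naming them; typical choices in flood modelling are RMSE, Nash-Sutcliffe efficiency, and relative peak error. A sketch with hypothetical hydrograph values, assuming those three metrics (not necessarily the ones used in the study):

```python
import numpy as np

def rmse(sim, obs):
    """Root-mean-square error between simulated and observed discharge."""
    return np.sqrt(np.mean((sim - obs) ** 2))

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 means the
    simulation is no better than the observed mean."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def peak_error(sim, obs):
    """Relative error in the flood peak."""
    return (sim.max() - obs.max()) / obs.max()

# Hypothetical observed hydrograph (m^3/s) and simulations under two
# a priori surface runoff coefficients; values are illustrative only.
obs = np.array([10.0, 45.0, 120.0, 90.0, 40.0, 15.0])
sim_low = 0.7 * obs   # runoff coefficient set too small
sim_ok = 0.98 * obs   # runoff coefficient close to a calibrated value

scores = {"low": (rmse(sim_low, obs), nse(sim_low, obs)),
          "ok": (rmse(sim_ok, obs), nse(sim_ok, obs))}
```

Repeating this over a grid of assumed runoff coefficients, per flood event, reproduces the shape of the sensitivity experiment described above.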

  20. Adaptation of an urban land surface model to a tropical suburban area: Offline evaluation, sensitivity analysis, and optimization of TEB/ISBA (SURFEX)

    Science.gov (United States)

    Harshan, Suraj

    The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance, radiative, and air temperature data observed during 11 months at a tropical suburban site in Singapore. Overall the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux. 
The built (town) fraction
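The Morris method named in this record screens parameters with one-at-a-time elementary effects. A minimal sketch with a hypothetical three-parameter linear surrogate (the real TEB/ISBA parameter set is much larger, and a linear model makes the σ statistic trivially zero):

```python
import numpy as np

def morris_screening(model, dim, n_traj, levels, rng):
    """Morris one-at-a-time screening: walk random trajectories on a
    p-level grid in [0,1]^dim, perturbing one factor per step, and
    collect elementary effects EE = (f(x + delta*e_i) - f(x)) / delta."""
    delta = levels / (2.0 * (levels - 1))
    grid = np.arange(levels) / (levels - 1)
    ee = [[] for _ in range(dim)]
    for _ in range(n_traj):
        x = rng.choice(grid[grid <= 1 - delta], size=dim)
        for i in rng.permutation(dim):
            x_new = x.copy()
            x_new[i] += delta
            ee[i].append((model(x_new) - model(x)) / delta)
            x = x_new
    # mu* (mean |EE|) ranks influence; sigma flags nonlinearity
    # and interactions.
    mu_star = np.array([np.mean(np.abs(e)) for e in ee])
    sigma = np.array([np.std(e) for e in ee])
    return mu_star, sigma

def model(x):
    # Hypothetical surrogate for a ULSM output: the first factor
    # dominates, the second is weak, the third is inert.
    return 4.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

rng = np.random.default_rng(2)
mu_star, sigma = morris_screening(model, dim=3, n_traj=20, levels=4, rng=rng)
```

Factors with small μ* (here the third) would be fixed at nominal values before the expensive optimization stage.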

  1. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
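A response surface model of the kind described here maps a few inputs to the point estimate of strike probability so the expensive debris simulation need not be rerun inside the sensitivity study. A sketch using a quadratic polynomial surface over two inputs, with entirely synthetic data standing in for the debris-model runs (the actual inputs and catalog are not reproduced):

```python
import numpy as np

# Hypothetical point estimates of strike probability from the debris
# model, sampled over abort time t (s) and destruct delay d (s).
t = np.array([30.0, 30.0, 60.0, 60.0, 90.0, 90.0, 120.0, 120.0, 75.0])
d = np.array([1.0, 3.0, 1.0, 3.0, 1.0, 3.0, 1.0, 3.0, 2.0])
p = 0.02 + 0.001 * t - 0.004 * d + 1e-5 * t * d   # stand-in for model runs

# Quadratic response surface:
# p ~ c0 + c1*t + c2*d + c3*t^2 + c4*d^2 + c5*t*d
A = np.column_stack([np.ones_like(t), t, d, t**2, d**2, t * d])
coef, *_ = np.linalg.lstsq(A, p, rcond=None)

def surrogate(t_new, d_new):
    """Cheap surrogate evaluation replacing the full debris simulation."""
    x = np.array([1.0, t_new, d_new, t_new**2, d_new**2, t_new * d_new])
    return x @ coef
```

Because the synthetic data are themselves quadratic, the least-squares fit here reproduces them exactly; on real model output the residuals would indicate whether a quadratic surface suffices.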

  2. Sensitivity of Rayleigh wave ellipticity and implications for surface wave inversion

    Science.gov (United States)

    Cercato, Michele

    2018-04-01

    The use of Rayleigh wave ellipticity has gained increasing popularity in recent years for investigating earth structures, especially for near-surface soil characterization. In spite of its widespread application, the sensitivity of the ellipticity function to the soil structure has rarely been explored in a comprehensive and systematic manner. To this end, a new analytical method is presented for computing the sensitivity of Rayleigh wave ellipticity with respect to the structural parameters of a layered elastic half-space. This method takes advantage of the minor decomposition of the surface wave eigenproblem and is numerically stable at high frequency. This numerical procedure allows the sensitivity to be retrieved for typical near-surface and crustal geological scenarios, pointing out the key parameters for ellipticity interpretation under different circumstances. On this basis, a thorough analysis is performed to assess how ellipticity data can efficiently complement surface wave dispersion information in a joint inversion algorithm. The results of synthetic and real-world examples are illustrated to analyse quantitatively the diagnostic potential of the ellipticity data with respect to the soil structure, focusing on the possible sources of misinterpretation in data inversion.

  3. Sensitivity analysis

    Science.gov (United States)

    Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ... (medlineplus.gov/ency/article/003741.htm)

  4. Land surface temperature downscaling using random forest regression: primary result and sensitivity analysis

    Science.gov (United States)

    Pan, Xin; Cao, Chen; Yang, Yingbao; Li, Xiaolong; Shan, Liangliang; Zhu, Xi

    2018-04-01

    The land surface temperature (LST) derived from thermal infrared satellite images is a meaningful variable in many remote sensing applications. At present, however, the spatial resolution of satellite thermal infrared sensors is too coarse for many of these needs. In this study, an LST image was downscaled with a random forest model relating LST to multiple predictors in an arid region with an oasis-desert ecotone. The proposed downscaling approach was evaluated using LST derived from the MODIS LST product for Zhangye City in the Heihe Basin. The primary result shows that the distribution of downscaled LST matches that of the oasis and desert ecosystems. Sensitivity analysis indicates that the most sensitive factors for LST downscaling are the modified normalized difference water index (MNDWI) and normalized multi-band drought index (NMDI) for water regions; the soil adjusted vegetation index (SAVI), shortwave infrared reflectance (SWIR), and normalized difference vegetation index (NDVI) for vegetation; the normalized difference building index (NDBI) and SAVI for buildings; and SWIR, NDBI, MNDWI, and NDWI for desert, with maximum LST variations of 0.20/-0.22 K, 0.92/0.62/0.46 K, 0.28/-0.29 K, and 3.87/-1.53/-0.64/-0.25 K, respectively, under +/-0.02 predictor perturbations.
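
The +/-0.02 predictor-perturbation test used in this study can be sketched generically. The surrogate below is a hypothetical linear stand-in for the fitted random forest (predictor names kept, coefficients invented); the procedure itself -- perturb one predictor, hold the rest at their base values, record the LST change -- is the same.

```python
def perturbation_sensitivity(predict, base, eps=0.02):
    """Change in predicted LST when each predictor is perturbed by +/-eps
    while the other predictors stay at their base values."""
    out = {}
    for name in base:
        up, dn = dict(base), dict(base)
        up[name] += eps
        dn[name] -= eps
        out[name] = (predict(up) - predict(base), predict(dn) - predict(base))
    return out

# hypothetical fitted surrogate for a vegetated pixel (coefficients invented
# for illustration; the study fits a random forest, not a linear model)
def predict_lst(p):
    return 300.0 - 25.0 * p["NDVI"] + 40.0 * p["SWIR"] - 10.0 * p["MNDWI"]

base = {"NDVI": 0.6, "SWIR": 0.2, "MNDWI": 0.1}
sens = perturbation_sensitivity(predict_lst, base)
for name, (d_up, d_dn) in sens.items():
    print(f"{name}: +0.02 -> {d_up:+.2f} K, -0.02 -> {d_dn:+.2f} K")
```

The predictor with the largest |delta LST| per perturbation is declared the most sensitive, which is how the per-land-cover rankings in the abstract were obtained.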

  5. Response surface methodology for sensitivity and uncertainty analysis: performance and perspectives

    International Nuclear Information System (INIS)

    Olivi, L.; Brunelli, F.; Cacciabue, P.C.; Parisi, P.

    1985-01-01

    Two main aspects have to be taken into account in studying a nuclear accident scenario when using nuclear safety codes as an information source. The first concerns the behavior of the code response and the set of assumptions to be introduced for its modelling. The second is connected with the uncertainty features of the code input, often modelled as a probability density function (pdf). The analyst can apply two well-defined approaches depending on which of the two aspects is to be emphasized. Response Surface Methodology uses polynomial and inverse polynomial models together with the theory of experimental design, expressly developed for the identification procedure. It constitutes a well-established body of techniques able to cover a wide spectrum of requirements when the first aspect plays the crucial role in the definition of the objectives. Other techniques such as Latin hypercube sampling, stratified sampling, or even random sampling can fit better when the second aspect affects the reliability of the analysis. The ultimate goal of both approaches is variable selection, i.e. the identification of the code input variables most effective on the output, and uncertainty propagation, i.e. the assessment of the pdf to be attributed to the code response. The main aim of this work is to present a sensitivity analysis method, already tested on a real case, sufficiently flexible to be applied in both approaches mentioned.
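
Of the sampling alternatives named above, Latin hypercube sampling is the simplest to sketch: each input axis is divided into n equal-probability strata, and each stratum receives exactly one sample (via a random permutation per axis).

```python
import random

def latin_hypercube(n, k, seed=3):
    """n samples in [0,1)^k: each axis is cut into n equal strata and each
    stratum is hit exactly once (random permutation per axis)."""
    random.seed(seed)
    cols = []
    for _ in range(k):
        strata = list(range(n))
        random.shuffle(strata)                       # which stratum each row hits
        cols.append([(s + random.random()) / n for s in strata])
    return [[cols[j][i] for j in range(k)] for i in range(n)]

for row in latin_hypercube(5, 2):
    print(row)
```

Mapping each coordinate through the inverse CDF of the corresponding input pdf then turns the unit-cube sample into a stratified sample of the code input, which is why LHS covers the input uncertainty more evenly than plain random sampling at the same cost.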

  6. Temperature sensitive surfaces and methods of making same

    Science.gov (United States)

    Liang, Liang [Richland, WA; Rieke, Peter C [Pasco, WA; Alford, Kentin L [Pasco, WA

    2002-09-10

    Poly-n-isopropylacrylamide surface coatings demonstrate the useful property of being able to switch characteristics depending upon temperature. More specifically, these coatings switch from being hydrophilic at low temperature to hydrophobic at high temperature. Research has been conducted for many years to better characterize and control the properties of temperature sensitive coatings. The present invention provides novel temperature sensitive coatings on articles and novel methods of making temperature sensitive coatings that are disposed on the surfaces of various articles. These novel coatings contain the reaction products of n-isopropylacrylamide and are characterized by their properties such as advancing contact angles. Numerous other characteristics such as coating thickness, surface roughness, and hydrophilic-to-hydrophobic transition temperatures are also described. The present invention includes articles having temperature-sensitive coatings with improved properties as well as improved methods for forming temperature sensitive coatings.

  7. Surface sensitization mechanism on negative electron affinity p-GaN nanowires

    Science.gov (United States)

    Diao, Yu; Liu, Lei; Xia, Sihao; Feng, Shu; Lu, Feifei

    2018-03-01

    Surface sensitization is the key to preparing a negative electron affinity photocathode. This thesis emphasizes the study of the surface sensitization mechanism of p-type doped GaN nanowires using first principles calculations based on density functional theory. The adsorption energy, work function, dipole moment, geometric structure, electronic structure, and optical properties of Mg-doped GaN nanowire surfaces with various coverages of Cs atoms are investigated. The GaN nanowire with Mg doped in the core position is taken as the sensitization base. At the initial stage of sensitization, the best adsorption site for a Cs atom on the GaN nanowire surface is BN, the bridge site of two adjacent N atoms. Surface sensitization generates a p-type internal surface with an n-type surface state, introducing a band bending region which helps reduce the surface barrier and work function. With increasing Cs coverage, work functions decrease monotonically and the "Cs-kill" phenomenon disappears. For Cs coverages of 0.75 ML and 1 ML, the corresponding sensitization systems reach a negative electron affinity state. Through surface sensitization, the absorption curves are red-shifted and the absorption coefficient is reduced. These theoretical calculations can guide the design of negative electron affinity Mg-doped GaN nanowire photocathodes.

  8. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    International Nuclear Information System (INIS)

    Brown, C.S.; Zhang, Hongbin

    2016-01-01

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis. A 2 × 2 fuel assembly model was developed and simulated by VERA-CS, and uncertainty quantification and sensitivity analysis were performed with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
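
The Pearson and Spearman coefficients used for the sensitivity ranking above are straightforward to compute. A minimal sketch on invented data (not VERA-CS output) also shows why the two can differ: Spearman reaches exactly -1 for any strictly monotone response, while Pearson only does so for a linear one.

```python
def pearson(x, y):
    """Pearson linear correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def spearman(x, y):
    """Spearman rank correlation: Pearson on the ranks.
    (No tie handling in this sketch; values are assumed distinct.)"""
    rank = lambda v: [sorted(v).index(a) for a in v]
    return pearson(rank(x), rank(y))

# toy data: MDNBR falls as inlet temperature rises, monotone but nonlinear
t_inlet = [560, 565, 570, 575, 580]           # K, invented
mdnbr   = [2.10, 1.95, 1.70, 1.30, 0.80]      # invented
r_p, r_s = pearson(t_inlet, mdnbr), spearman(t_inlet, mdnbr)
print(r_p, r_s)
```

A large-magnitude coefficient for an input (here, inlet temperature against MDNBR) marks it as influential, which is the basis on which the study singled out coolant inlet temperature.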

  9. Space-Based Diagnosis of Surface Ozone Sensitivity to Anthropogenic Emissions

    Science.gov (United States)

    Martin, Randall V.; Fiore, Arlene M.; VanDonkelaar, Aaron

    2004-01-01

    We present a novel capability in satellite remote sensing with implications for air pollution control strategy. We show that the ratio of formaldehyde columns to tropospheric nitrogen dioxide columns is an indicator of the relative sensitivity of surface ozone to emissions of nitrogen oxides (NO(x) = NO + NO2) and volatile organic compounds (VOCs). The diagnosis from these space-based observations is highly consistent with current understanding of surface ozone chemistry based on in situ observations. The satellite-derived ratios indicate that surface ozone is more sensitive to emissions of NO(x) than of VOCs throughout most continental regions of the Northern Hemisphere during summer. Exceptions include Los Angeles and industrial areas of Germany. A seasonal transition occurs in the fall when surface ozone becomes less sensitive to NO(x) and more sensitive to VOCs.

  10. THE EFFECT OF BONDING AND SURFACE SEALANT APPLICATION ON POSTOPERATIVE SENSITIVITY FROM POSTERIOR COMPOSITES

    Directory of Open Access Journals (Sweden)

    Neslihan TEKÇE

    2015-10-01

    Full Text Available Purpose: The purpose of the study was to evaluate the short-term postoperative sensitivity of posterior Class I composite restorations placed with two different all-in-one self-etch adhesives, with or without surface sealant application. Materials and Methods: 44 restorations were placed in 11 patients who required Class I restorations in their molars. Each patient received 4 restorations, forming four groups: (1) G-Aenial Bond (GC, Japan); (2) Clearfil S3 Bond (Kuraray, Japan); (3) G-Aenial Bond + Fortify Plus (Bisco, USA); (4) Clearfil S3 Bond + Fortify Plus. Sensitivity was evaluated at 24 h and at 7, 15, and 30 days using cold air, ice, and pressure stimuli with a visual analog scale. Comparisons of continuous variables between the sensitivity evaluations were performed using Friedman's one-way analysis of variance with repeated measures. The use of Clearfil S3 Bond resulted in almost the same level of postoperative sensitivity as the use of G-Aenial Bond. The highest sensitivity scores were observed for the surface-sealant-treated teeth, without statistical significance (p>0.05). Conclusions: Self-etch adhesives displayed postoperative sensitivity. The sensitivity scores slightly decreased by the end of 30 days (p>0.05). Surface sealant application did not decrease sensitivity scores for either dentin adhesive.

  11. Surface plasmon optics for biosensors with advanced sensitivity and throughput

    International Nuclear Information System (INIS)

    Toma, M.

    2012-01-01

    Plasmonic biosensors represent a rapidly advancing technology which enables rapid and sensitive analysis of target analytes. This thesis focuses on novel metallic and polymer structures for plasmonic biosensors based on surface plasmon resonance (SPR) and surface plasmon-enhanced fluorescence (SPF). It comprises four projects addressing key challenges in enhancing sensitivity and throughput. In project 1, an advanced optical platform is developed which relies on reference-compensated angular spectroscopy of hydrogel-guided waves. The developed optical setup provides a superior refractive index resolution of 1.2×10^-7 RIU and offers an attractive platform for direct detection of small analytes which cannot be analyzed by regular SPR biosensors. Project 2 carries out a theoretical study of SPR imaging with advanced lateral resolution by utilizing Bragg-scattered surface plasmons (BSSPs) on sub-wavelength metallic gratings. The results reveal that the proposed concept provides better lateral resolution and fidelity of the images. This feature opens ways for high-throughput SPR biosensors with denser arrays of sensing spots. Project 3 investigates surface plasmon-coupled emission from fluorophores in the vicinity of plasmonic Bragg gratings. The experimental results provide leads on improving the collection efficiency of fluorescence light by controlling the directions of fluorescence emission. This functionality can directly improve the sensitivity of fluorescence-based assays. In the last project, 4, a novel sensing scheme with actively tuneable plasmonic structures is developed by employing a thermo-responsive hydrogel binding matrix. The hydrogel film simultaneously serves as a large-capacity binding matrix and provides a means for actuating surface plasmons through reversible swelling and collapsing of the hydrogel. This characteristic is suitable for multiplexing of sensing channels in a fluorescence-based biosensor scheme (author)

  12. The surface analysis methods

    International Nuclear Information System (INIS)

    Deville, J.P.

    1998-01-01

    Nowadays there are many surface analysis methods, each having its specificity, its qualities, its constraints (for instance, vacuum) and its limits. Expensive in time and in investment, these methods have to be used deliberately. This article is aimed at non-specialists. It gives some elements of choice according to the information sought, the sensitivity, the constraints of use, or the answer to a precise question. After recalling the fundamental principles which govern these analysis methods, based on the interaction of radiation (ultraviolet, X-ray) or particles (ions, electrons) with matter, two methods are described in more detail: Auger electron spectroscopy (AES) and X-ray photoemission spectroscopy (ESCA or XPS). Indeed, they are the most widespread methods in laboratories, the easiest to use, and probably the most productive for the analysis of surfaces of industrial materials or of samples submitted to treatments in aggressive media. (O.M.)

  13. Response surfaces and sensitivity analyses for an environmental model of dose calculations

    Energy Technology Data Exchange (ETDEWEB)

    Iooss, Bertrand [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)]. E-mail: bertrand.iooss@cea.fr; Van Dorpe, Francois [CEA Cadarache, DEN/DTN/SMTM/LMTE, 13108 Saint Paul lez Durance, Cedex (France); Devictor, Nicolas [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)

    2006-10-15

    A parametric sensitivity analysis is carried out on GASCON, radiological impact software describing the transfer of radionuclides to humans following a chronic gaseous release from a nuclear facility. An effective dose received by age group can thus be calculated for a specific radionuclide and release duration. In this study, we are concerned with 18 output variables, each depending on approximately 50 uncertain input parameters. First, the generation of 1000 Monte Carlo simulations allows us to calculate correlation coefficients between input parameters and output variables, which give a first overview of the important factors. Response surfaces are then constructed in polynomial form and used to predict system responses at reduced computation time cost; these response surfaces are very useful for global sensitivity analysis, where thousands of runs are required. Using the response surfaces, we calculate the Sobol total sensitivity indices by the Monte Carlo method. We demonstrate the application of this method to one study site and one reference group near the Cadarache nuclear research centre (France), for two radionuclides: iodine-129 and uranium-238. It is thus shown that the most influential parameters are all related to the food chain of the goat's milk, in decreasing order of importance: the 'effective ingestion' dose coefficient, the goat's milk ration of the individuals of the reference group, the grass ration of the goat, the dry deposition velocity, and the transfer factor to the goat's milk.
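
Computing Sobol total sensitivity indices by Monte Carlo on a cheap response surface, as described above, can be sketched with Jansen's estimator. The three-input polynomial surrogate below is invented for the example (it is not the GASCON response surface), and inputs are taken uniform on [0, 1].

```python
import random

def sobol_total_indices(model, k, n=4096, seed=11):
    """Jansen's Monte Carlo estimator of total sensitivity indices:
    S_Ti = E[(f(A) - f(A with column i from B))^2] / (2 Var f)."""
    random.seed(seed)
    A = [[random.random() for _ in range(k)] for _ in range(n)]
    B = [[random.random() for _ in range(k)] for _ in range(n)]
    fA = [model(a) for a in A]
    mean = sum(fA) / n
    var = sum((v - mean) ** 2 for v in fA) / n
    totals = []
    for i in range(k):
        acc = 0.0
        for a, b, fa in zip(A, B, fA):
            ab = list(a)
            ab[i] = b[i]                 # resample only input i
            acc += (fa - model(ab)) ** 2
        totals.append(acc / (2 * n * var))
    return totals

# toy response surface: x0 dominates, x1 is weak, x2 is inert
model = lambda x: 5.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
totals = sobol_total_indices(model, 3)
print(totals)
```

Because each index costs only surrogate evaluations, the thousands of runs the abstract mentions are affordable; an index near zero (x2 here) certifies that a parameter can be fixed without affecting the dose output.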

  14. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  15. Reliability and Sensitivity Analysis for Laminated Composite Plate Using Response Surface Method

    International Nuclear Information System (INIS)

    Lee, Seokje; Kim, Ingul; Jang, Moonho; Kim, Jaeki; Moon, Jungwon

    2013-01-01

    Advanced fiber-reinforced laminated composites are widely used in various fields of engineering to reduce weight. The material property of each ply is well known; specifically, it is known that ply is less reliable than metallic materials and very sensitive to the loading direction. Therefore, it is important to consider this uncertainty in the design of laminated composites. In this study, reliability analysis is conducted using Callosum and Meatball interactions for a laminated composite plate for the case in which the tip deflection is the design requirement and the material property is a random variable. Furthermore, the efficiency and accuracy of the approximation method is identified, and a probabilistic sensitivity analysis is conducted. As a result, we can prove the applicability of the advanced design method for the stabilizer of an underwater vehicle

  16. Reliability and Sensitivity Analysis for Laminated Composite Plate Using Response Surface Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seokje; Kim, Ingul [Chungnam National Univ., Daejeon (Korea, Republic of); Jang, Moonho; Kim, Jaeki; Moon, Jungwon [LIG Nex1, Yongin (Korea, Republic of)

    2013-04-15

    Advanced fiber-reinforced laminated composites are widely used in various fields of engineering to reduce weight. The material property of each ply is well known; specifically, it is known that ply is less reliable than metallic materials and very sensitive to the loading direction. Therefore, it is important to consider this uncertainty in the design of laminated composites. In this study, reliability analysis is conducted using Callosum and Meatball interactions for a laminated composite plate for the case in which the tip deflection is the design requirement and the material property is a random variable. Furthermore, the efficiency and accuracy of the approximation method is identified, and a probabilistic sensitivity analysis is conducted. As a result, we can prove the applicability of the advanced design method for the stabilizer of an underwater vehicle.

  17. Sensitivity Analysis of features in tolerancing based on constraint function level sets

    International Nuclear Information System (INIS)

    Ziegler, Philipp; Wartzack, Sandro

    2015-01-01

    Usually, the geometry of the manufactured product inherently varies from the nominal geometry. This may negatively affect the product functions and properties (such as quality and reliability), as well as the assemblability of the single components. In order to avoid this, the geometric variation of these component surfaces and associated geometry elements (like hole axes) is restricted by tolerances. Since tighter tolerances lead to significantly higher manufacturing costs, tolerances should be specified carefully. Therefore, the impact of deviating component surfaces on functions, properties, and assemblability of the product has to be analyzed. As physical experiments are expensive, methods of statistical tolerance analysis are widely used in engineering design. Current tolerance simulation tools lack an appropriate indicator for the impact of deviating component surfaces. In adopting Sensitivity Analysis methods, there are several challenges which arise from the specific framework of tolerancing. This paper presents an approach to adapt Sensitivity Analysis methods to current tolerance simulations with an interface module based on level sets of constraint functions for parameters of the simulation model. The paper is an extension and generalization of Ziegler and Wartzack [1]. Mathematical properties of the constraint functions (convexity, homogeneity), which are important for the computational costs of the Sensitivity Analysis, are shown. The practical use of the method is illustrated in a case study of a plain bearing. - Highlights: • Alternative definition of Deviation Domains. • Proof of mathematical properties of the Deviation Domains. • Definition of the interface between Deviation Domains and Sensitivity Analysis. • Sensitivity analysis of a gearbox to show the method's practical use

  18. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.

  19. Ion induced optical emission for surface and depth profile analysis

    International Nuclear Information System (INIS)

    White, C.W.

    1977-01-01

    Low-energy ion bombardment of solid surfaces results in the emission of infrared, visible, and ultraviolet radiation produced by inelastic ion-solid collision processes. The emitted optical radiation provides important insight into low-energy particle-solid interactions and provides the basis for an analysis technique which can be used for surface and depth profile analysis with high sensitivity. The different kinds of collision induced optical radiation emitted as a result of low-energy particle-solid collisions are reviewed. Line radiation arising from excited states of sputtered atoms or molecules is shown to provide the basis for surface and depth profile analysis. The spectral characteristics of this type of radiation are discussed and applications of the ion induced optical emission technique are presented. These applications include measurements of ion implant profiles, detection sensitivities for submonolayer quantities of impurities on elemental surfaces, and the detection of elemental impurities on complex organic substrates

  20. WHAT IF (Sensitivity Analysis)

    Directory of Open Access Journals (Sweden)

    Iulian N. BUJOREANU

    2011-01-01

    Full Text Available Sensitivity analysis represents such a well-known and deeply analyzed subject that anyone entering the field feels unable to add anything new. Still, there are many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons for which it has to be implemented in most analyses in decision-making processes. Risk analysis is of utmost importance in dealing with resource allocation and is presented at the beginning of the paper as the initial reason to implement sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on the use and utility of sensitivity analysis. Finally, a round-up conclusion brings us to the question of the possibility of generating the future and analyzing it before it unfolds so that, when it happens, it brings less uncertainty.

  1. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    Directory of Open Access Journals (Sweden)

    Harry R. Millwater

    2006-01-01

    Full Text Available A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, occurring in either low or high frequencies, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or surface damage) can be considered. The derivation is developed for Monte Carlo sampling such that the existing failure samples are used and the sensitivities are obtained with minimal additional computational time. Variance estimates and confidence bounds of the sensitivity estimates are developed. The methodology is demonstrated and verified using a multizone probabilistic fatigue analysis of a gas turbine compressor disk, considering stress scatter, crack growth propagation scatter, and initial crack size as random variables.

  2. Surface plasmon resonance biosensors for highly sensitive detection in real samples

    Science.gov (United States)

    Sepúlveda, B.; Carrascosa, L. G.; Regatos, D.; Otte, M. A.; Fariña, D.; Lechuga, L. M.

    2009-08-01

    In this work we summarize the main results obtained with the portable surface plasmon resonance (SPR) device developed in our group (commercialised by SENSIA, SL, Spain), highlighting its applicability for the real-time detection of extremely low concentrations of toxic pesticides in environmental water samples. In addition, we show applications in clinical diagnosis: on the one hand, the real-time and label-free detection of DNA hybridization and single-point mutations in the gene BRCA-1, related to the predisposition of women to develop inherited breast cancer, and on the other hand, the analysis of protein biomarkers in biological samples (urine, serum) for early detection of diseases. Despite the large number of applications already proven, the SPR technology has two main drawbacks: (i) insufficient sensitivity for some specific applications (where pM-fM or single-molecule detection is needed) and (ii) low multiplexing capabilities. In order to overcome these drawbacks, we work on several alternative configurations, such as the magneto-optical surface plasmon resonance (MOSPR) sensor, based on a combination of magneto-optical and ferromagnetic materials, to improve the SPR sensitivity, or localized surface plasmon resonance (LSPR), based on nanostructures (nanoparticles, nanoholes,...), for higher multiplexing capabilities.

  3. Surface sensitivity of nuclear-knock-out form factors

    International Nuclear Information System (INIS)

    Fratamico, G.

    1984-01-01

    A numerical calculation has been performed to investigate the sensitivity of nuclear-knock-out form factors to nuclear-surface behaviour of bound-state wave functions. The result of our investigation suggests that one can extract the bound-state behaviour at the surface from experimental information on nuclear-knock-out form factors

  4. Sensitivity functions for uncertainty analysis: Sensitivity and uncertainty analysis of reactor performance parameters

    International Nuclear Information System (INIS)

    Greenspan, E.

    1982-01-01

    This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to ''as-built'' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory
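
The reduction of fine-group sensitivities to broad-group sensitivities mentioned above is, in the common convention for relative sensitivity coefficients of a single response (parameter perturbed uniformly across a broad group), a summation over the fine groups contained in each broad group. A minimal sketch with invented numbers:

```python
def collapse_sensitivities(fine_s, group_map):
    """Collapse fine-group relative sensitivity coefficients into broad
    groups by summation over the fine groups each broad group contains."""
    broad = {}
    for s, g in zip(fine_s, group_map):
        broad[g] = broad.get(g, 0.0) + s
    return [broad[g] for g in sorted(broad)]

# hypothetical: six fine energy groups collapsed into two broad groups
fine_sensitivities = [0.02, 0.05, 0.13, -0.04, 0.10, 0.01]
broad_of_fine      = [0, 0, 0, 1, 1, 1]
broad_sensitivities = collapse_sensitivities(fine_sensitivities, broad_of_fine)
print(broad_sensitivities)
```

Note the sign cancellation inside the second broad group: collapsing can mask large but opposing fine-group sensitivities, which is one reason the chapter stresses a consistent reduction procedure for sensitivities and uncertainties together.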

  5. The surface analysis methods; Les méthodes d'analyse des surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Deville, J.P. [Institut de Physique et Chimie, 67 - Strasbourg (France)

    1998-11-01

    Nowadays there are many surface analysis methods, each having its specificity, its qualities, its constraints (for instance, vacuum) and its limits. Expensive in time and in investment, these methods have to be used deliberately. This article is aimed at non-specialists. It gives some elements of choice according to the information sought, the sensitivity, the constraints of use, or the answer to a precise question. After recalling the fundamental principles which govern these analysis methods, based on the interaction of radiation (ultraviolet, X-ray) or particles (ions, electrons) with matter, two methods are described in more detail: Auger electron spectroscopy (AES) and X-ray photoemission spectroscopy (ESCA or XPS). Indeed, they are the most widespread methods in laboratories, the easiest to use, and probably the most productive for the analysis of surfaces of industrial materials or of samples submitted to treatments in aggressive media. (O.M.) 11 refs.

  6. Atomic force microscopy analysis of different surface treatments of Ti dental implant surfaces

    International Nuclear Information System (INIS)

    Bathomarco, R.V.; Solorzano, G.; Elias, C.N.; Prioli, R.

    2004-01-01

    The surface of commercial unalloyed titanium, used in dental implants, was analyzed by atomic force microscopy. The morphology, roughness, and surface area of the samples, submitted to mechanically-induced erosion, chemical etching, and a combination of both, were compared. The results show that surface treatments strongly influence the physical and chemical properties of dental implants. An analysis of the length dependence of the implant surface roughness shows that, for scan sizes larger than 50 μm, the average surface roughness is independent of the scanning length and that the surface treatments lead to average surface roughnesses in the range of 0.37 to 0.48 μm. It is shown that the implant surface energy is sensitive to the titanium surface area: as the area increases, the surface contact angle decreases.
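Average roughness of the kind reported here is straightforward to compute from an AFM height map. A minimal numpy sketch on synthetic data (the array and its 0.4 μm scale are illustrative assumptions, not the paper's measurements):

```python
import numpy as np

# Synthetic 256x256 AFM height map (heights in micrometres) -- illustrative only.
rng = np.random.default_rng(0)
z = 0.4 * rng.standard_normal((256, 256))

# Average roughness Ra: mean absolute deviation from the mean plane.
ra = np.mean(np.abs(z - z.mean()))

# RMS roughness Rq for comparison (always >= Ra).
rq = np.sqrt(np.mean((z - z.mean()) ** 2))

print(f"Ra = {ra:.3f} um, Rq = {rq:.3f} um")
```

For Gaussian heights, Ra is about 0.80 times Rq, which the synthetic data reproduces.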

  8. Quantitative XPS analysis of high Tc superconductor surfaces

    International Nuclear Information System (INIS)

    Jablonski, A.; Sanada, N.; Suzuki, Y.; Fukuda, Y.; Nagoshi, M.

    1993-01-01

    The procedure of quantitative XPS analysis involving relative sensitivity factors is the most convenient to apply to high-Tc superconductor surfaces because it does not require standards. A considerable limitation of this approach, however, is its relatively low accuracy. In the present work, a modification of the relative-sensitivity-factor approach is proposed that accounts for matrix and instrumental effects. The accuracy of this modification when applied to binary metal alloys is 2% or better. A quantitative XPS analysis was made for surfaces of the compounds Bi2Sr2CuO6, Bi2Sr2CaCu2O8, and YBa2Cu3Oy. The surface composition determined for the polycrystalline samples corresponds reasonably well to the bulk stoichiometry. A slight deficiency of oxygen was found for the Bi-based compounds. The surface exposed on cleavage of the Bi2Sr2CaCu2O8 single crystal was found to be enriched in bismuth, which indicates that cleavage occurs along the BiO planes. This result is in agreement with STM studies published in the literature.

  9. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    Science.gov (United States)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted by integrating the sensitivity components from each discipline of the coupled system. Numerical results verify the accuracy of the FUN3D/DYMORE system through simulations of a benchmark rotorcraft test model and comparison with established analyses and experimental data. The complex-variable implementation of sensitivity analysis for DYMORE and the coupled FUN3D/DYMORE system is verified against real-valued analysis and sensitivities. The correctness of the adjoint formulations for the FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples the adjoint-based flow and grid sensitivities of FUN3D and the FUN3D/DYMORE interfaces with the complex-variable sensitivities of DYMORE structural responses.
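The complex-variable approach mentioned here is, in its simplest form, the complex-step derivative trick: for a real-analytic function f, Im f(x + ih)/h approximates f'(x) with no subtractive cancellation, so h can be made tiny. A generic sketch (not the FUN3D/DYMORE implementation):

```python
import cmath

def complex_step_derivative(f, x, h=1e-30):
    """First derivative of an analytic function via the complex-step method.

    Unlike finite differences, no subtraction of nearly equal numbers
    occurs, so the step h can be tiny and the result is accurate to
    machine precision.
    """
    return f(complex(x, h)).imag / h

# Example: d/dx sin(x) = cos(x)
d = complex_step_derivative(cmath.sin, 0.7)
print(d, cmath.cos(0.7).real)
```

A forward finite difference with the same h would underflow to zero; that robustness is why the method suits verification of adjoint sensitivities.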

  10. Non-spectroscopic surface plasmon sensor with a tunable sensitivity

    International Nuclear Information System (INIS)

    Wen, Qiuling; Han, Xu; Hu, Chuang; Zhang, Jiasen

    2015-01-01

    We demonstrate a non-spectroscopic surface plasmon sensor with tunable sensitivity, based on the relationship between the wave number of surface plasmon polaritons (SPPs) on a metal film and the refractive index of the specimen in contact with the film. A change in the wave number of the SPPs results in a variation in the propagation angle of the leakage radiation of the SPPs. A reference light is used to interfere with the leakage radiation, and the refractive index of the specimen can be obtained by measuring the period of the interference fringes. The sensitivity of the sensor can be tuned by changing the incident direction of the reference light, which cannot be done with conventional surface plasmon sensors. For a reference angle of 1.007°, the sensitivity and resolution of the sensor are 4629 μm/RIU (RIU stands for refractive index unit) and 3.6 × 10⁻⁴ RIU, respectively. In addition, the sensor needs only a monochromatic light source, which simplifies the measurement setup and reduces the cost.
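The sensing principle rests on the SPP dispersion relation k_spp = k₀√(ε_m ε_d/(ε_m + ε_d)). A small numpy sketch estimating how strongly the SPP effective index (and hence the leakage-radiation angle) responds to the specimen refractive index; the gold permittivity is a textbook value assumed for illustration, not taken from this paper:

```python
import numpy as np

def spp_effective_index(eps_metal, n_specimen):
    """Real part of the SPP effective index at a metal/dielectric interface."""
    eps_d = n_specimen ** 2
    return np.real(np.sqrt(eps_metal * eps_d / (eps_metal + eps_d)))

eps_au = -11.6 + 1.2j              # gold near 633 nm (assumed textbook value)
n = np.linspace(1.33, 1.34, 11)    # aqueous specimen indices
n_eff = spp_effective_index(eps_au, n)

# Numerical sensitivity dn_eff/dn: how fast the SPP wave number
# (and so the leakage-radiation angle) shifts with specimen index.
sens = np.gradient(n_eff, n)
print(sens.mean())
```

For water on gold the effective-index sensitivity comes out above unity, which is why even small index changes move the fringes measurably.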

  11. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability in existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions and obtain result probability distributions. The methods described are applicable to low-level radioactive waste disposal system performance assessment.
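To first order, derivative-based uncertainty propagation of the kind DUA performs reduces to the familiar formula σ_y² ≈ Σᵢ (∂y/∂αᵢ)² σᵢ² for independent parameters. A minimal sketch on a toy linear model (illustrative only, not the GRESS/ADGEN tooling):

```python
import numpy as np

def first_order_uncertainty(grad, param_std):
    """First-order (linear) propagation of independent parameter
    uncertainties: sigma_y^2 = sum_i (dy/da_i)^2 * sigma_i^2."""
    grad = np.asarray(grad, dtype=float)
    param_std = np.asarray(param_std, dtype=float)
    return float(np.sqrt(np.sum((grad * param_std) ** 2)))

# Toy model y = 3*a - 2*b with sigma_a = 0.1, sigma_b = 0.2:
sigma_y = first_order_uncertainty([3.0, -2.0], [0.1, 0.2])
print(sigma_y)  # sqrt(0.09 + 0.16) = 0.5
```

For a linear model the result is exact; for nonlinear models it is the leading-order approximation that derivative information supports.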

  12. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    Science.gov (United States)

    Di Simone, Alessio

    2016-06-25

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.

  13. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    Science.gov (United States)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers on the current and future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that must be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding model behavior, but also helps reduce the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by identifying important vs. unimportant input factors.
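VARS builds on directional variograms of the response surface, γᵢ(h) = ½E[(y(x + h·eᵢ) − y(x))²]; larger variogram values at small h indicate more influential factors. A toy sketch of that core quantity (a simplified illustration of the concept, not the authors' VARS implementation):

```python
import numpy as np

def directional_variogram(f, dim, i, h, n=2000, seed=1):
    """Estimate the directional variogram
    gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2]
    over the unit hypercube. Larger gamma_i at small h means
    factor i is more influential."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, dim)) * (1 - h)   # keep x + h*e_i inside [0, 1]
    xh = x.copy()
    xh[:, i] += h
    fx = np.array([f(row) for row in x])
    fxh = np.array([f(row) for row in xh])
    return 0.5 * np.mean((fxh - fx) ** 2)

# Toy model: x0 matters far more than x1.
f = lambda x: 5.0 * x[0] + 0.5 * x[1] ** 2
g0 = directional_variogram(f, 2, 0, 0.1)
g1 = directional_variogram(f, 2, 1, 0.1)
print(g0, g1)  # g0 >> g1
```

For the linear factor the variogram is exactly ½(5·0.1)² = 0.125 regardless of sampling, so the ranking is unambiguous.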

  14. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification: it can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with a response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and ten parameters were then selected for quantification of sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
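The Morris screening step ranks parameters by elementary effects, (y(x + Δ·eᵢ) − y(x))/Δ, summarized by their mean absolute value μ*. A compact sketch on a toy function (illustrative only, not the Xin'anjiang model setup):

```python
import numpy as np

def morris_mu_star(f, dim, n_traj=50, delta=0.1, seed=0):
    """Mean absolute elementary effect mu* for each input factor,
    estimated from random one-at-a-time perturbations in [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_traj, dim))
    for t in range(n_traj):
        x = rng.random(dim) * (1 - delta)  # keep perturbed point in bounds
        y0 = f(x)
        for i in range(dim):
            xp = x.copy()
            xp[i] += delta
            ee[t, i] = (f(xp) - y0) / delta
    return np.abs(ee).mean(axis=0)

# Toy model: factor 0 dominates, factor 2 is inert.
f = lambda x: 4.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
mu = morris_mu_star(f, 3)
print(mu)  # ~[4.0, 1.0, 0.0]
```

For a linear model the elementary effects equal the coefficients exactly; for nonlinear models μ* is a screening statistic, which is why the quantitative Sobol step follows it.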

  15. Track sensitivity and the surface roughness measurements of CR-39 with atomic force microscope

    CERN Document Server

    Yasuda, N; Amemiya, K; Takahashi, H; Kyan, A; Ogura, K

    1999-01-01

    An atomic force microscope (AFM) has been applied to evaluate the surface roughness and the track sensitivity of the CR-39 track detector. We experimentally confirmed the inverse correlation between the track sensitivity and the roughness of the detector surface after etching. The surfaces of high-sensitivity CR-39 (CR-39 doped with antioxidant (HARZLAS (TD-1)) and a copolymer of CR-39/NIPAAm (TNF-1)) are roughened by etching, while pure CR-39 (BARYOTRAK), with low sensitivity, retains its original surface clarity even after long etching.

  16. Surface science study of selective ethylene epoxidation catalyzed by the Ag(110) surface: Structural sensitivity

    International Nuclear Information System (INIS)

    Campbell, C.T.

    1984-01-01

    The selective oxidation of ethylene to ethylene epoxide (C2H4 + ½O2 → C2H4O) over Ag is the simplest example of kinetically controlled, selective heterogeneous catalysis. We have studied the steady-state kinetics and selectivity of this reaction for the first time on a clean, well-characterized Ag(110) surface by using a special apparatus that allows rapid (approx. 20 s) transfer between a high-pressure catalytic microreactor and an ultrahigh-vacuum surface analysis (AES, XPS, LEED, TDS) chamber. The effects of temperature and reactant pressures upon the rate and selectivity are virtually identical on Ag(110) and on supported, high-surface-area Ag catalysts. The absolute specific rate (per Ag surface atom) is, however, some 100-fold higher for Ag(110) than for high-surface-area catalysts. This is related to the well-known structural sensitivity of this reaction. It is postulated that a small percentage of (110) planes (or [110]-like sites) is responsible for most of the catalytic activity of high-surface-area catalysts. The high activity of the (110) plane is attributed to its high sticking probability for dissociative oxygen adsorption, since the rate of ethylene epoxidation is shown in related work [Ref. 1: C. T. Campbell and M. T. Paffett, Surf. Sci. (in press)] to be proportional to the coverage of atomically adsorbed oxygen at constant temperature and ethylene pressure.

  17. MOVES regional level sensitivity analysis

    Science.gov (United States)

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  18. Engineering of Surface Chemistry for Enhanced Sensitivity in Nanoporous Interferometric Sensing Platforms.

    Science.gov (United States)

    Law, Cheryl Suwen; Sylvia, Georgina M; Nemati, Madieh; Yu, Jingxian; Losic, Dusan; Abell, Andrew D; Santos, Abel

    2017-03-15

    We explore new approaches to engineering the surface chemistry of interferometric sensing platforms based on nanoporous anodic alumina (NAA) and reflectometric interference spectroscopy (RIfS). Two surface engineering strategies are presented, namely (i) selective chemical functionalization of the inner surface of NAA pores with amine-terminated thiol molecules and (ii) selective chemical functionalization of the top surface of NAA with dithiol molecules. The strong molecular interaction of Au³⁺ ions with thiol-containing functional molecules of alkane chain or peptide character provides a model sensing system with which to assess the sensitivity of these NAA platforms by both molecular feature and surface engineering. Changes in the effective optical thickness of the functionalized NAA photonic films (i.e., the sensing principle), in response to gold ions, are monitored in real time by RIfS. 6-Amino-1-hexanethiol (inner surface) and 1,6-hexanedithiol (top surface), the most sensitive functional molecules from approaches i and ii, respectively, were combined into a third sensing strategy whereby the NAA platforms are functionalized on both the top and inner surfaces concurrently. Engineering of the surface according to this approach resulted in an additive enhancement in sensitivity of up to 5-fold compared to previously reported systems. This study advances the rational engineering of surface chemistry for interferometric sensing on nanoporous platforms, with potential applications for real-time monitoring of multiple analytes in dynamic environments.

  19. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to the study of biological systems, including metabolic networks, signalling pathways, and genetic circuits. Sensitivity analysis can provide valuable insights about how robust biological responses are with respect to changes in biological parameters and which model inputs are the key factors affecting the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction, and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. Global sensitivity analysis approaches, on the other hand, have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models, and the caveats in the interpretation of sensitivity analysis results.

  20. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
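To make the quantities concrete: the bounding factor derived in this work has the form BF = RR_EU·RR_UD/(RR_EU + RR_UD − 1), and setting the two relative risks equal and solving for the value that just explains away an observed risk ratio RR gives RR + √(RR(RR − 1)), the E-value of the authors' follow-up work. A worked numeric sketch:

```python
import math

def bounding_factor(rr_eu, rr_ud):
    """Ding-VanderWeele bounding factor: the maximum factor by which an
    unmeasured confounder with exposure-confounder relative risk rr_eu
    and confounder-outcome relative risk rr_ud can bias an observed
    risk ratio."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def e_value(rr):
    """Minimum strength of confounding (on both associations) needed to
    fully explain away an observed risk ratio rr > 1."""
    return rr + math.sqrt(rr * (rr - 1.0))

bf = bounding_factor(2.0, 3.0)   # = 6/4 = 1.5
ev = e_value(2.0)                # = 2 + sqrt(2) ~ 3.414
print(bf, ev)
```

As a consistency check, plugging the E-value back in as both relative risks makes the bounding factor equal the observed risk ratio.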

  1. Sensitivity analysis of critical experiment with direct perturbation compared to TSUNAMI-3D sensitivity analysis

    International Nuclear Information System (INIS)

    Barber, A. D.; Busch, R.

    2009-01-01

    The goal of this work is to obtain sensitivities from direct uncertainty analysis calculations and to correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment. Small-perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results of a critical experiment are only known as well as its geometric and material properties. The goal of establishing this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)

  2. Sensitivity analysis in multi-parameter probabilistic systems

    International Nuclear Information System (INIS)

    Walker, J.R.

    1987-01-01

    Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model

  3. Global optimization and sensitivity analysis

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1990-01-01

    A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints

  4. Linear regression and sensitivity analysis in nuclear reactor design

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G.

    2015-01-01

    Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using the Brayton cycle for the design of a GCFBR. • Performed detailed sensitivity analysis for a set of parameters in a nuclear reactor power system. • Modeled and developed the reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy applicable for sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in nuclear reactor design. The analysis helps to determine the parameters on which an LR model can be fit for predictive analysis. For those parameters, a regression surface is created based on trial data and predictions are made using this surface. A general SA strategy to determine and identify the influential parameters that affect the operation of the reactor is presented. Identification of design parameters and validation of the linearity assumption for the application of LR to reactor design, based on a set of tests, is performed. The testing methods used to determine the behavior of the parameters can serve as a general strategy for UA and SA of nuclear reactor models and thermal-hydraulics calculations. The design of a gas-cooled fast breeder reactor (GCFBR), with thermal–hydraulics and energy transfer, has been used for the demonstration of this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform regression analysis and other multivariate variance and collinearity analyses of the data.

  5. New sensitive micro-measurements of dynamic surface tension and diffusion coefficients

    DEFF Research Database (Denmark)

    Kinoshita, Koji; Ortiz, Elisa Parra; Needham, David

    2017-01-01

    Currently available dynamic surface tension (DST) measurement methods, such as the Wilhelmy plate and droplet- or bubble-based methods, still have various experimental limitations, such as the large size of the interface, convection in the solution, or a certain "dead time" at the start of measurement. These limitations create inconsistencies in the kinetic analysis of surfactant adsorption/desorption, especially significant for ionic surfactants. Here, the "micropipette interfacial area-expansion method" was introduced and validated as a new DST measurement with high enough sensitivity to detect diffusion ... for surface excess concentration. We found that the measured diffusion coefficient of 1-octanol, 7.2 ± 0.8 × 10⁻⁶ cm²/s, showed excellent agreement with the result from an alternative method, the "single microdroplet catching method", for measuring the diffusion coefficient from diffusion-controlled microdroplet ...

  6. A surface plasmon resonance-based immunosensor for sensitive detection of heroin

    International Nuclear Information System (INIS)

    Wu Zhongcheng; Wang Lianchao; Ge Yu; Yu Chengduan; Fang Tingjian; Chen Wenge

    2000-01-01

    A simple technique for the sensitive detection of heroin based on surface plasmon resonance (SPR) has been investigated theoretically and experimentally. The experiment used an anti-MO monoclonal antibody and a morphine (MO)-bovine serum albumin (MO-BSA) conjugate (antigen). The reason for using MO-BSA in the detection of heroin is also discussed. MO-BSA was immobilized on the gold thin film of an SPR sensor chip by physical adsorption. The configuration of the device allows further miniaturization, which is required for the construction of a portable SPR device for in-situ analysis.

  7. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    DEFF Research Database (Denmark)

    Rinker, Jennifer M.

    2016-01-01

    This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project.
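The two-step workflow in this record (calibrate a polynomial response surface to a modest set of model runs, then generate synthetic data cheaply from the surface) can be sketched with ordinary least squares on a quadratic basis. The two-input toy model is a stand-in assumption, not the WindPACT aeroelastic code:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Expensive" model (stand-in for the aeroelastic simulation).
def model(x):
    return 3.0 * x[:, 0] + x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 1]

# 1) Calibrate a quadratic response surface from a small training set.
X = rng.random((200, 2))
y = model(X)
basis = lambda x: np.column_stack(
    [np.ones(len(x)), x[:, 0], x[:, 1], x[:, 0]**2, x[:, 1]**2, x[:, 0]*x[:, 1]]
)
coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
surrogate = lambda x: basis(x) @ coef

# 2) Generate cheap synthetic data from the calibrated surface.
X_new = rng.random((10000, 2))
y_hat = surrogate(X_new)
err = np.max(np.abs(y_hat - model(X_new)))
print(err)  # tiny: the surrogate is exact for this quadratic toy model
```

Sobol indices would then be estimated on `surrogate` rather than `model`, which is where the computational savings come from.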

  8. A new surface resistance measurement method with ultrahigh sensitivity

    International Nuclear Information System (INIS)

    Liang, Changnian.

    1993-01-01

    A superconducting niobium triaxial cavity has been designed and fabricated to study the residual surface resistance of planar superconducting materials. The edge of a 25.4 mm or larger diameter sample in the triaxial cavity is located outside the strong-field region. Therefore, the edge effects and possible losses between the thin film and the substrate have been minimized, ensuring that the induced RF losses are intrinsic to the test material. The fundamental resonant frequency of the cavity is the same as the working frequency of CEBAF cavities. The cavity has a compact size compared to its TE011 counterpart, which makes it more sensitive to the sample's loss. For even higher sensitivity, a calorimetry method has been used to measure the RF losses on the superconducting sample. At 2 K, a 2 μK temperature change can be resolved by using carbon resistor sensors. The temperature distribution caused by RF heating is measured by 16 carbon composition resistor sensors. A 0.05 μW heating power can be detected at such a resolution, which translates to a surface resistance of 0.02 nΩ at a surface magnetic field of 52 Oe. This is the most sensitive device for surface resistance measurements to date. In addition, losses due to the indium seal, coupling probes, field emission sites other than the sample, and all of the high-field resonator surface are excluded from the measurement. The surface resistance of both niobium and high-Tc superconducting thin films has been measured. A low Rs of 35.2 μΩ was measured for a 25.4 mm diameter YBa2Cu3O7 thin film at 1.5 GHz and 2 K. This is the first result for a large-area epitaxially grown thin-film sample at such a low RF frequency. The abrupt disappearance of multipacting between two parallel plates has been observed and monitored with the 16 temperature-mapping sensors. Field emission, or some field-dependent anomalous RF losses, on the niobium plate has also been observed.

  9. Chemical kinetic functional sensitivity analysis: Elementary sensitivities

    International Nuclear Information System (INIS)

    Demiralp, M.; Rabitz, H.

    1981-01-01

    Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the ith species concentration and α_j is the jth system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, and any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research.

  10. Probabilistic sensitivity analysis of biochemical reaction systems.

    Science.gov (United States)

    Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John

    2009-09-07

    Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.
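    The variance-based approach cited above (Saltelli et al.) can be sketched in a few lines of Monte Carlo. The linear toy model and pick-freeze estimator below are illustrative stand-ins, not the biochemical network of the paper; for y = 2·x1 + x2 with independent U(0,1) inputs, the first-order index of x1 is analytically 4/5.

```python
import numpy as np

# Minimal pick-freeze estimator of a first-order Sobol index for the
# linear toy model y = 2*x1 + x2 with independent U(0,1) inputs.
# Analytically S1 = 4/(4+1) = 0.8, since Var(y) = (4+1)/12.

rng = np.random.default_rng(0)
n = 200_000

def model(x1, x2):
    return 2.0 * x1 + x2

x1, x2, x2_new = rng.random((3, n))
y = model(x1, x2)
y_frozen = model(x1, x2_new)   # x1 "frozen", x2 resampled

# Cov(y, y_frozen) isolates the variance contribution of x1 alone
s1 = np.cov(y, y_frozen)[0, 1] / np.var(y, ddof=1)
```

    The same estimator applied to a stochastic reaction network simply replaces `model` with an averaged simulation output, at the computational cost the abstract warns about.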

  11. Enhancing dye-sensitized solar cell efficiency by anode surface treatments

    International Nuclear Information System (INIS)

    Chang, Chao-Hsuan; Lin, Hsin-Han; Chen, Chin-Cheng; Hong, Franklin C.-N.

    2014-01-01

    In this study, titanium substrates treated sequentially with HF and KOH solutions to form micro- and nano-structures were used for the fabrication of flexible dye-sensitized solar cells (DSSCs). After the wet etching treatments, the titanium substrates were exposed to an O2 plasma treatment and further immersed in titanium tetrachloride (TiCl4) solution. The process conditions for producing a very thin TiO2 blocking layer were studied in order to avoid solar cell current leakage and thereby increase the solar cell efficiency. Subsequently, TiO2 nanoparticles were spin-coated on the Ti substrates at varied thicknesses. The dye-sensitized solar cells on the titanium substrates were subjected to simulated AM 1.5G irradiation of 100 mW/cm2 in backside illumination mode. Surface treatments of the Ti substrate and TiO2 anode were found to play a significant role in improving the efficiency of the DSSCs. The efficiencies of the backside-illuminated solar cells were raised from 4.6% to 7.8% by integrating these surface treatments. - Highlights: • The flexible dye-sensitized solar cell (DSSC) device can be fabricated. • Many effective surface treatment methods to improve DSSC efficiency are elucidated. • The efficiency is dramatically enhanced by integrating surface treatment methods. • The back-illuminated DSSC efficiency was raised from 4.6% to 7.8%

  12. A hybrid approach for global sensitivity analysis

    International Nuclear Information System (INIS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-01-01

    Distribution based sensitivity analysis (DSA) computes the sensitivity of the input random variables with respect to the change in distribution of the output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, the computational cost associated with this method prohibits its use for complex structures involving costly finite element analysis. To address this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model which integrates the concepts of analysis-of-variance decomposition, extended bases and a homotopy algorithm. By integrating PCFE into DSA, it is possible to considerably alleviate the computational burden. Three examples are presented to demonstrate the performance of the proposed approach for sensitivity analysis. For all the problems, the proposed approach yields excellent results with significantly reduced computational effort. The results obtained indicate, to some extent, that the proposed approach can be utilized for sensitivity analysis of large-scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • Proposed approach integrates PCFE within distribution based sensitivity analysis. • Proposed approach is highly efficient.

  13. Maternal sensitivity: a concept analysis.

    Science.gov (United States)

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  14. Sensitivity analysis for hydrology and pesticide supply towards the river in SWAT

    Science.gov (United States)

    Holvoet, K.; van Griensven, A.; Seuntjens, P.; Vanrolleghem, P. A.

    The dynamic behaviour of pesticides in river systems strongly depends on varying climatological conditions and agricultural management practices. To describe this behaviour at the river-basin scale, integrated hydrological and water quality models are needed. A crucial step in understanding the various processes determining pesticide fate is to perform a sensitivity analysis. Sensitivity analysis for hydrology and pesticide supply in SWAT (Soil and Water Assessment Tool) provides useful support for the development of a reliable hydrological model and gives insight into which parameters are most sensitive with respect to pesticide supply to rivers. The study was performed on the Nil catchment in Belgium. In this study we utilised an LH-OAT sensitivity analysis. The LH-OAT method combines the One-factor-At-a-Time (OAT) design and Latin Hypercube (LH) sampling by taking the Latin Hypercube samples as initial points for an OAT design. By means of the LH-OAT sensitivity analysis, the dominant hydrological parameters were determined and the number of model parameters was reduced. The dominant hydrological parameters were the curve number (CN2), the surface runoff lag (surlag), the recharge to the deep aquifer (rchrg_dp) and the threshold depth of water in the shallow aquifer (GWQMN). Next, the selected parameters were estimated by manual calibration, whereby the Nash-Sutcliffe coefficient of efficiency improved from an initial value of -22.4 to +0.53. In the second part, sensitivity analyses were performed to provide insight into which parameters or model inputs contribute most to the variance in pesticide output. The results of this study show that for the Nil catchment, hydrologic parameters are dominant in controlling pesticide predictions. The other parameter that affects pesticide concentrations in surface water is ‘apfp_pest’, whose meaning was changed to that of a parameter controlling direct losses to the river system (e.g., through the clean up of spray
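    The LH-OAT design is simple enough to sketch directly: Latin Hypercube points serve as base points, and a One-factor-At-a-Time loop around each base point yields a mean relative partial effect per parameter. The three-parameter toy model below merely stands in for a SWAT run, and the parameter names are borrowed from the abstract for illustration only.

```python
import numpy as np

# Sketch of the LH-OAT design: Latin Hypercube base points plus an OAT
# perturbation loop, giving a mean relative partial effect per parameter.
# The linear toy model stands in for a SWAT simulation.

rng = np.random.default_rng(1)

def model(p):                      # hypothetical stand-in for a SWAT run
    cn2, surlag, gwqmn = p
    return 10.0 * cn2 + 2.0 * surlag + 0.1 * gwqmn

def latin_hypercube(n, d, rng):
    # one stratified sample per interval per dimension, columns shuffled
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

n_base, d, frac = 20, 3, 0.05
bases = latin_hypercube(n_base, d, rng)

effects = np.zeros(d)
for p in bases:
    y0 = model(p)
    for j in range(d):
        q = p.copy()
        q[j] *= (1.0 + frac)
        # relative partial effect, normalised by the perturbation fraction
        effects[j] += abs((model(q) - y0) / (y0 * frac)) / n_base

ranking = np.argsort(effects)[::-1]   # most sensitive parameter first
```

    The averaging over base points is what distinguishes LH-OAT from plain OAT: the partial effects are sampled across the whole parameter space rather than at a single nominal point.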

  15. On the sensitivity of mesoscale models to surface-layer parameterization constants

    Science.gov (United States)

    Garratt, J. R.; Pielke, R. A.

    1989-09-01

    The Colorado State University standard mesoscale model is used to evaluate the sensitivity of one-dimensional (1D) and two-dimensional (2D) fields to differences in surface-layer parameterization “constants”. Such differences reflect the range in the published values of the von Karman constant, the Monin-Obukhov stability functions and the temperature roughness length at the surface. The sensitivity of 1D boundary-layer structure, and of 2D sea-breeze intensity, is generally less than that found in published comparisons of turbulence closure schemes.
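    A quick sketch of how such a "constant" spread propagates: in the neutral-stability logarithmic wind profile u(z) = (u*/k) ln(z/z0), the published range of the von Karman constant (roughly 0.35 to 0.42) maps directly onto a 20% spread in diagnosed wind speed at fixed friction velocity. The numerical values below are illustrative, not taken from the paper.

```python
import math

# Neutral-stability log law: u(z) = (u_star / k) * ln(z / z0).
# Sweeping k over its historically published range at fixed u_star shows
# the size of the parameterization sensitivity the abstract examines.

def log_law_wind(u_star, k, z, z0):
    return (u_star / k) * math.log(z / z0)

u_star, z, z0 = 0.3, 10.0, 0.01              # m/s, m, m (illustrative)
u_low  = log_law_wind(u_star, 0.42, z, z0)   # larger k -> weaker diagnosed wind
u_high = log_law_wind(u_star, 0.35, z, z0)   # smaller k -> stronger diagnosed wind

relative_spread = (u_high - u_low) / u_low   # = 0.42/0.35 - 1 = 0.20 exactly
```
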

  16. Sensitive molecular diagnostics using surface-enhanced resonance Raman scattering (SERRS)

    Science.gov (United States)

    Faulds, Karen; Graham, Duncan; McKenzie, Fiona; MacRae, Douglas; Ricketts, Alastair; Dougan, Jennifer

    2009-02-01

    Surface enhanced resonance Raman scattering (SERRS) is an analytical technique with several advantages over competing techniques in terms of improved sensitivity and multiplexing. We have made great progress in the development of SERRS as a quantitative analytical method, in particular for the detection of DNA. SERRS is an extremely sensitive and selective technique which, when applied to the detection of labelled DNA sequences, allows detection limits to be obtained that rival, and in most cases better, those of fluorescence. Here the conditions are explored which enable the successful detection of DNA using SERRS. The enhancing surface used is crucial, and in this case suspensions of nanoparticles were used as they allow quantitative behaviour to be achieved and allow systems analogous to current fluorescence-based systems to be made. The aggregation conditions required to obtain SERRS of DNA are crucial, and herein we describe the use of spermine as an aggregating agent. The nature of the label used, be it fluorescent, positively charged or negatively charged, also affects the SERRS response, and these conditions are again explored here. We have clearly demonstrated the ability to identify the components of a mixture of 5 analytes in solution by using two different excitation wavelengths, and of a 6-plex using data analysis techniques. These conditions will allow the use of SERRS for the detection of target DNA in a meaningful diagnostic assay.

  17. Numerical Simulation of Heavy Rainfall in August 2014 over Japan and Analysis of Its Sensitivity to Sea Surface Temperature

    Directory of Open Access Journals (Sweden)

    Yuki Minamiguchi

    2018-02-01

    This study evaluated the performance of the Weather Research and Forecasting (WRF) model version 3.7 for simulating a series of rainfall events in August 2014 over Japan and investigated the impact of uncertainty in sea surface temperature (SST) on simulated rainfall in the record-high precipitation period. WRF simulations for the heavy rainfall were conducted for six different cases. The heavy rainfall events caused by typhoons and rain fronts were reproduced with similar accuracy by three cases: the TQW_5km case with grid nudging for air temperature, humidity, and wind and with a horizontal resolution of 5 km; W_5km with wind nudging and 5-km resolution; and W_2.5km with wind nudging and 2.5-km resolution. Because the nudging of air temperature and humidity in TQW_5km suppresses the influence of SST changes, and because W_2.5km requires a larger computational load, W_5km was selected as the baseline case for a sensitivity analysis of SST. In the sensitivity analysis, SST around Japan was homogeneously changed by 1 K from the original SST data. The analysis showed that the SST increase led to a larger amount of precipitation over the study period in Japan, with the mean increase rate of precipitation being 13 ± 8% K−1. In addition, 99th-percentile precipitation (100 mm d−1 in the baseline case) increased by 13% per K of SST warming. These results also indicate that an uncertainty of approximately 13% in the simulated heavy rainfall corresponds to an uncertainty of 1 K in the SST data around Japan in the study period.

  18. Rapid and sensitive detection of malachite green in aquaculture water by electrochemical preconcentration and surface-enhanced Raman scattering.

    Science.gov (United States)

    Xu, Kai-Xuan; Guo, Mei-Hong; Huang, Yu-Ping; Li, Xiao-Dong; Sun, Jian-Jun

    2018-04-01

    A highly sensitive and rapid method of in-situ surface-enhanced Raman spectroscopy (SERS) combined with electrochemical preconcentration (EP) for detecting malachite green (MG) in aquaculture water was established. Ag nanoparticles (AgNPs) were synthesized and spread onto the surface of gold electrodes after centrifuging to produce SERS-active substrates. After optimizing the pH values, preconcentration potentials and times, in-situ EP-SERS detection was carried out. A sensitive and rapid analysis of low-concentration MG was accomplished within 200 s, and the limit of detection was 2.4 × 10⁻¹⁶ M.

  19. Inverse modelling of Köhler theory – Part 1: A response surface analysis of CCN spectra with respect to surface-active organic species

    Directory of Open Access Journals (Sweden)

    S. Lowe

    2016-09-01

    In this study a novel framework for inverse modelling of cloud condensation nuclei (CCN) spectra is developed using Köhler theory. The framework is established by using model-generated synthetic measurements as calibration data for a parametric sensitivity analysis. An assessment of the relative importance of aerosol physicochemical parameters, while accounting for bulk–surface partitioning of surface-active organic species, is carried out over a range of atmospherically relevant supersaturations. By introducing an objective function that provides a scalar metric for diagnosing the deviation of modelled CCN concentrations from synthetic observations, objective function response surfaces are presented as a function of model input parameters. Crucially, for the chosen calibration data, aerosol–CCN spectrum closure is confirmed as a well-posed inverse modelling exercise for a subset of the parameters explored herein. The response surface analysis indicates that the appointment of appropriate calibration data is particularly important. To perform an inverse aerosol–CCN closure analysis and constrain parametric uncertainties, it is shown that a high-resolution CCN spectrum definition of the calibration data is required where single-valued definitions may be expected to fail. Using Köhler theory to model CCN concentrations requires knowledge of many physicochemical parameters, some of which are difficult to measure in situ on the scale of interest and introduce a considerable amount of parametric uncertainty to model predictions. For all partitioning schemes and environments modelled, the model output showed significant sensitivity to perturbations in the aerosol log-normal parameters describing the accumulation mode, surface tension, organic : inorganic mass ratio, insoluble fraction, and solution ideality. Many response surfaces pertaining to these parameters contain well-defined minima and are therefore good candidates for calibration using a Monte
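    The forward model being inverted reduces to a compact sketch. In the common approximate form of the Köhler curve, s(D) ≈ A/D - B/D³, with A the surface-tension (Kelvin) coefficient and B the solute (Raoult) coefficient, the critical supersaturation follows from maximising s(D) and has the analytic value s_c = sqrt(4A³/27B). The coefficients below are order-of-magnitude placeholders, not retrieved parameters.

```python
import numpy as np

# Köhler curve in its approximate form s(D) = A/D - B/D**3.
# Setting ds/dD = 0 gives D_c = sqrt(3B/A) and the critical
# supersaturation s_c = sqrt(4*A**3 / (27*B)), the quantity a CCN
# closure inverts for. A and B below are illustrative placeholders.

A = 2.3e-9      # m, Kelvin (surface tension) coefficient
B = 6.0e-23     # m^3, Raoult (solute) coefficient

D = np.linspace(1e-7, 5e-6, 200_000)    # droplet diameters, m
s = A / D - B / D**3

s_c_numeric = s.max()                    # brute-force maximum of the curve
s_c_exact = np.sqrt(4 * A**3 / (27 * B)) # analytic maximum
```

    With these placeholder coefficients the critical supersaturation comes out near 0.55%, a physically plausible order of magnitude for accumulation-mode aerosol.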

  20. SURFAN, a programme for surface analysis

    International Nuclear Information System (INIS)

    Negoita, F.; Borcan, C.; Pantelica, D.

    1997-01-01

    Possible alternatives to the Rutherford backscattering spectrometry (RBS) method of material analysis, overcoming the poor sensitivity of RBS to light elements, are nuclear resonant reaction analysis (NRA) and elastic recoil detection analysis (ERDA). The latter is especially useful in surface and thin film analysis. To simulate the spectra obtained with any of these methods, a programme, SURFAN, was developed. In comparison with the code RUMP, published by Doolittle, it allows the nature and charge of the projectile to be changed simply, imposes no limitation on the energy of the incident projectiles, and permits the use of any depth profile function. The basic ideas and the structure of SURFAN are presented. Its application to the ERDA and RBS methods yielded important information on the processes involved in special materials obtained by advanced technologies

  1. Management-oriented sensitivity analysis for pesticide transport in watershed-scale water quality modeling using SWAT.

    Science.gov (United States)

    Luo, Yuzhou; Zhang, Minghua

    2009-12-01

    The Soil and Water Assessment Tool (SWAT) was calibrated for hydrologic conditions in an agricultural watershed of Orestimba Creek, California, and applied to simulate the fate and transport of two organophosphate pesticides, chlorpyrifos and diazinon. The model showed capability in evaluating pesticide fate and transport processes in agricultural fields and the in-stream network. A management-oriented sensitivity analysis was conducted by applying stochastic SWAT simulations of pesticide distribution. The results of the sensitivity analysis identified the governing processes for pesticide outputs in the study area as surface runoff, soil erosion, and sedimentation. By incorporating the sensitive parameters in the pesticide transport simulation, the effectiveness of structural best management practices (BMPs) in improving surface water quality was demonstrated by SWAT modeling. This study also recommends that conservation practices designed to reduce field yield and in-stream transport capacity of sediment, such as filter strips, grassed waterways, crop residue management, and tailwater ponds, be implemented in the Orestimba Creek watershed.

  2. Nominal Range Sensitivity Analysis of peak radionuclide concentrations in randomly heterogeneous aquifers

    International Nuclear Information System (INIS)

    Cadini, F.; De Sanctis, J.; Cherubini, A.; Zio, E.; Riva, M.; Guadagnini, A.

    2012-01-01

    Highlights: ► Uncertainty quantification problem associated with the radionuclide migration. ► Groundwater transport processes simulated within a randomly heterogeneous aquifer. ► Development of an automatic sensitivity analysis for flow and transport parameters. ► Proposal of a Nominal Range Sensitivity Analysis approach. ► Analysis applied to the performance assessment of a nuclear waste repository. - Abstract: We consider the problem of quantification of uncertainty associated with radionuclide transport processes within a randomly heterogeneous aquifer system in the context of performance assessment of a near-surface radioactive waste repository. Radionuclide migration is simulated at the repository scale through a Monte Carlo scheme. The saturated groundwater flow and transport equations are then solved at the aquifer scale for the assessment of the expected radionuclide peak concentration at a location of interest. A procedure is presented to perform the sensitivity analysis of this target environmental variable to key parameters that characterize flow and transport processes in the subsurface. The proposed procedure is exemplified through an application to a realistic case study.
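    Nominal Range Sensitivity Analysis itself is straightforward to sketch: each parameter is swept over its plausible range while the others are held at nominal values, and the induced change in the output is recorded. The "peak concentration" model and parameter ranges below are hypothetical stand-ins for the Monte Carlo transport simulator of the study.

```python
import numpy as np

# Nominal Range Sensitivity Analysis in its textbook form: sweep each
# parameter over (low, high) with the others fixed at nominal values,
# and rank parameters by the induced change in the output.

def peak_concentration(k_d, velocity, dispersivity):
    # toy response: sorption (k_d) suppresses the peak, velocity raises it
    return velocity / ((1.0 + k_d) * np.sqrt(dispersivity))

params = {                       # (low, nominal, high), illustrative units
    "k_d":          (0.1, 1.0, 10.0),
    "velocity":     (0.5, 1.0, 2.0),
    "dispersivity": (0.5, 1.0, 5.0),
}

nominal = {name: v[1] for name, v in params.items()}

sensitivity = {}
for name, (lo, _, hi) in params.items():
    args_lo = dict(nominal); args_lo[name] = lo
    args_hi = dict(nominal); args_hi[name] = hi
    y_lo = peak_concentration(**args_lo)
    y_hi = peak_concentration(**args_hi)
    sensitivity[name] = abs(y_hi - y_lo)   # nominal-range effect

most_influential = max(sensitivity, key=sensitivity.get)
```

    The ranking is local to the chosen nominal point, which is exactly the limitation that motivates the Monte Carlo treatment of the aquifer heterogeneity in the study above.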

  3. Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment

    Science.gov (United States)

    Lee, Meemong; Bowman, Kevin

    2014-01-01

    Geostationary Coastal and Air Pollution Events (GEO-CAPE) is a NASA decadal survey mission designed to provide surface reflectance at the high spectral, spatial, and temporal resolutions from geostationary orbit necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to global and regional NOx emissions and improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from it for seven observation scenarios and three instrument systems.
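    The adjoint-sensitivity idea such frameworks rely on can be shown in miniature: for a steady linear model Ax = b with scalar response y = cᵀx, a single adjoint solve Aᵀλ = c yields the gradient dy/db for all components of b at once, instead of one forward solve per component. The matrices below are random placeholders, not a chemical transport model.

```python
import numpy as np

# Adjoint vs forward sensitivities for a steady linear model A x = b
# with scalar response y = c^T x. One adjoint solve A^T lam = c gives
# dy/db_j = lam_j for every j; the forward loop checks it entry by entry.

rng = np.random.default_rng(3)
n = 6
A = rng.random((n, n)) + n * np.eye(n)   # well-conditioned "transport" operator
b = rng.random(n)                        # "emissions" vector (placeholder)
c = rng.random(n)                        # observation weights (placeholder)

x = np.linalg.solve(A, b)
y = c @ x

lam = np.linalg.solve(A.T, c)            # single adjoint solve
grad_adjoint = lam                       # dy/db_j = lam_j

# Brute-force check: one forward perturbation per component
eps = 1e-7
grad_fd = np.empty(n)
for j in range(n):
    bp = b.copy(); bp[j] += eps
    grad_fd[j] = (c @ np.linalg.solve(A, bp) - y) / eps
```

    The cost asymmetry (one adjoint solve versus n forward solves) is what makes adjoint methods attractive when the "emissions" vector has millions of components.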

  4. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    Science.gov (United States)

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge-loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
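    The object CV-SES evaluates pointwise can be brute-forced for intuition: a two-dimensional CV error surface over the two cost-sensitive regularization weights, with each fold contributing one validation-error surface. In this sketch a weighted regularized least-squares classifier stands in for the SVM solver, and a coarse grid replaces the solution-surface algorithm of the paper; all data and grid values are synthetic.

```python
import numpy as np

# Brute-force two-dimensional CV error surface over per-class cost
# weights (c_pos, c_neg). A weighted ridge least-squares classifier
# stands in for the cost-sensitive SVM; the grid stands in for the
# exact solution-surface construction.

rng = np.random.default_rng(5)
n = 300
X = np.r_[rng.normal(-1, 1, (n // 3, 2)), rng.normal(1, 1, (2 * n // 3, 2))]
y = np.r_[-np.ones(n // 3), np.ones(2 * n // 3)]      # imbalanced labels
perm = rng.permutation(n); X, y = X[perm], y[perm]

def fit_predict(Xtr, ytr, Xte, c_pos, c_neg, ridge=1e-3):
    w = np.where(ytr > 0, c_pos, c_neg)               # per-class costs
    Xa = np.c_[Xtr, np.ones(len(Xtr))]                # add bias column
    WX = Xa * w[:, None]
    beta = np.linalg.solve(Xa.T @ WX + ridge * np.eye(3), WX.T @ ytr)
    return np.sign(np.c_[Xte, np.ones(len(Xte))] @ beta)

grid = np.logspace(-2, 2, 9)
K = 5
folds = np.array_split(np.arange(n), K)
cv_err = np.zeros((9, 9))                             # the CV error surface
for i, cp in enumerate(grid):
    for j, cn in enumerate(grid):
        for f in folds:
            mask = np.ones(n, bool); mask[f] = False
            pred = fit_predict(X[mask], y[mask], X[f], cp, cn)
            cv_err[i, j] += np.mean(pred != y[f]) / K

best = np.unravel_index(cv_err.argmin(), cv_err.shape)
```

    The paper's contribution is precisely to avoid this grid: the solution surface makes the classifier available for all weight pairs at once, so the error surface, and its global minimum, can be obtained exactly.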

  5. Interference and Sensitivity Analysis.

    Science.gov (United States)

    VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth

    2014-11-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of vaccinating one person on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.
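    One widely used no-interference technique from this literature can be sketched concretely: the bounding-factor approach to unmeasured confounding associated with VanderWeele and colleagues, in which hypothesized confounder-exposure and confounder-outcome risk ratios bound how much an observed risk ratio could be attenuated. This is a closely related technique, not necessarily the one developed in this paper, and the numbers are illustrative.

```python
import math

# Bounding-factor sensitivity analysis for unmeasured confounding.
# For hypothesized risk ratios rr_eu (confounder-exposure) and rr_ud
# (confounder-outcome), confounding can attenuate the observed risk
# ratio by at most B = rr_eu*rr_ud / (rr_eu + rr_ud - 1). The E-value
# is the smallest rr_eu = rr_ud that could explain the estimate away.

def bounding_factor(rr_eu, rr_ud):
    return (rr_eu * rr_ud) / (rr_eu + rr_ud - 1.0)

def e_value(rr_obs):
    return rr_obs + math.sqrt(rr_obs * (rr_obs - 1.0))

rr_obs = 2.0                                  # illustrative observed risk ratio
adjusted_lower = rr_obs / bounding_factor(2.0, 3.0)   # bound given rr_eu=2, rr_ud=3
ev = e_value(rr_obs)
```

    Reading the result: an unmeasured confounder with risk ratios of 2 and 3 could shrink an observed risk ratio of 2.0 to no less than 4/3, and only a confounder with both risk ratios of about 3.41 could explain it away entirely.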

  6. Management-oriented sensitivity analysis for pesticide transport in watershed-scale water quality modeling using SWAT

    International Nuclear Information System (INIS)

    Luo Yuzhou; Zhang Minghua

    2009-01-01

    The Soil and Water Assessment Tool (SWAT) was calibrated for hydrologic conditions in an agricultural watershed of Orestimba Creek, California, and applied to simulate the fate and transport of two organophosphate pesticides, chlorpyrifos and diazinon. The model showed capability in evaluating pesticide fate and transport processes in agricultural fields and the in-stream network. A management-oriented sensitivity analysis was conducted by applying stochastic SWAT simulations of pesticide distribution. The results of the sensitivity analysis identified the governing processes for pesticide outputs in the study area as surface runoff, soil erosion, and sedimentation. By incorporating the sensitive parameters in the pesticide transport simulation, the effectiveness of structural best management practices (BMPs) in improving surface water quality was demonstrated by SWAT modeling. This study also recommends that conservation practices designed to reduce field yield and in-stream transport capacity of sediment, such as filter strips, grassed waterways, crop residue management, and tailwater ponds, be implemented in the Orestimba Creek watershed. - Selected structural BMPs are recommended for reducing loads of OP pesticides.

  7. Management-oriented sensitivity analysis for pesticide transport in watershed-scale water quality modeling using SWAT

    Energy Technology Data Exchange (ETDEWEB)

    Luo Yuzhou [University of California, Davis, CA 95616 (United States); Wenzhou Medical College, Wenzhou 325035 (China); Zhang Minghua, E-mail: mhzhang@ucdavis.ed [University of California, Davis, CA 95616 (United States); Wenzhou Medical College, Wenzhou 325035 (China)

    2009-12-15

    The Soil and Water Assessment Tool (SWAT) was calibrated for hydrologic conditions in an agricultural watershed of Orestimba Creek, California, and applied to simulate the fate and transport of two organophosphate pesticides, chlorpyrifos and diazinon. The model showed capability in evaluating pesticide fate and transport processes in agricultural fields and the in-stream network. A management-oriented sensitivity analysis was conducted by applying stochastic SWAT simulations of pesticide distribution. The results of the sensitivity analysis identified the governing processes for pesticide outputs in the study area as surface runoff, soil erosion, and sedimentation. By incorporating the sensitive parameters in the pesticide transport simulation, the effectiveness of structural best management practices (BMPs) in improving surface water quality was demonstrated by SWAT modeling. This study also recommends that conservation practices designed to reduce field yield and in-stream transport capacity of sediment, such as filter strips, grassed waterways, crop residue management, and tailwater ponds, be implemented in the Orestimba Creek watershed. - Selected structural BMPs are recommended for reducing loads of OP pesticides.

  8. Resonant characteristics and sensitivity dependency on the contact surface in QCM-micropillar-based system of coupled resonator sensors

    International Nuclear Information System (INIS)

    Kashan, M A M; Kalavally, V; Ramakrishnan, N; Lee, H W

    2016-01-01

    We report the characteristics of, and the dependence of sensitivity on, the contact surface in coupled resonating sensors (CRSs) made of high-aspect-ratio resonant micropillars attached to a quartz crystal microbalance (QCM). Through experiments and simulation, we observed that when pillars of resonant height were placed in maximum displacement regions, the resonance frequency of the QCM increased following the coupled resonance characteristics, as the pillars offered elastic loading to the QCM surface. However, the same pillars, when placed in relatively lower displacement regions, in spite of their resonant dimensions, offered inertial loading and resulted in a decrease in the QCM resonance frequency, as the displacement amplitude was insufficient to couple the vibrations from the QCM to the pillars. Accordingly, we discovered that the coupled resonance characteristics depend not only on the resonant structure dimensions but also on the contact regions in the acoustic device. Further analysis revealed that the acoustic pressure at the contact surface also influences the resonance frequency characteristics and sensitivity of the CRS. To demonstrate the significance of the present finding for sensing applications, humidity sensing is considered as the example measurand. When a sensing medium made of resonant SU-8 pillars was placed in a maximum displacement region on a QCM surface, the sensitivity increased 14-fold in comparison to a resonant sensing medium placed in a lower displacement region of the QCM surface. (paper)
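    The elastic-versus-inertial loading distinction has a minimal two-degree-of-freedom analogue (a Dybwad-type coupled-resonator model): a rigidly attached mass always lowers the carrier frequency, while a compliant attachment whose own resonance is suitably detuned below the carrier's pushes the carrier-like branch upward. All parameter values below are illustrative, not fitted QCM quantities.

```python
import numpy as np

# Two-degree-of-freedom coupled-resonator sketch: a carrier (M, K)
# with a small attached resonator (m, k). The rigid (inertial) limit
# lowers the frequency; a compliant attachment tuned below the carrier
# raises the carrier-like eigenfrequency (a positive frequency shift).

M, K = 1.0, 100.0          # carrier mass and stiffness (arbitrary units)
m = 0.01                   # attached mass

w2_bare = K / M                        # bare carrier, squared angular frequency
w2_inertial = K / (M + m)              # rigid attachment: always lower

k = 0.5                                # attachment resonance k/m = 50 < K/M = 100
D = np.array([[(K + k) / M, -k / M],   # dynamical matrix of the coupled pair
              [-k / m,       k / m]])
w2_modes = np.sort(np.linalg.eigvals(D).real)
w2_carrier_like = w2_modes[1]          # branch nearest the bare carrier
```

    The sign flip of the frequency shift between the two regimes is the signature the temperature-mapped QCM experiments above use to distinguish elastic from inertial pillar loading.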

  9. Contributions to sensitivity analysis and generalized discriminant analysis

    International Nuclear Information System (INIS)

    Jacques, J.

    2005-12-01

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how its output variables react to variations in its inputs. Variance-based methods quantify the part of the variance of the model response due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach consisting of expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists of classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods from a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)

  10. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    International Nuclear Information System (INIS)

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-01-01

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated (“coupled”) stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz–Kalos–Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary

  11. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-"coupled"- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. 
We also provide a complete implementation of the proposed sensitivity analysis algorithms, together with various spatial KMC examples, in supplementary MATLAB source code.
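
The variance-reduction principle behind such couplings can be sketched with a toy finite-difference estimator: sharing the underlying random numbers between the perturbed and unperturbed samples (the Common Random Number idea mentioned above) leaves the mean unchanged but shrinks the estimator variance by orders of magnitude. The exponential observable below is an assumed stand-in for illustration, not an actual KMC trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, h, n = 1.0, 1e-2, 100_000

# Inverse-transform sampling X(theta) = -log(U)/theta; reusing the same
# uniforms U for both rates couples the two processes (Common Random Numbers).
u = rng.random(n)
coupled = (-np.log(u) / (theta + h) + np.log(u) / theta) / h

# The naive estimator draws independent samples for the perturbed process.
u2 = rng.random(n)
independent = (-np.log(u2) / (theta + h) + np.log(u) / theta) / h

# Both estimate d/dtheta E[X] = -1/theta**2 = -1, but the coupled version
# needs vastly fewer samples for the same confidence interval.
print(coupled.mean(), coupled.std())
print(independent.mean(), independent.std())
```

Since the required sample size scales with the estimator variance, this is exactly why the paper's goal-oriented couplings translate directly into wall-clock speedups.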

  12. Energy and angle resolved ion scattering spectroscopy: new possibilities for surface analysis

    International Nuclear Information System (INIS)

    Hellings, G.J.A.

    1986-01-01

This thesis describes the design and development of a novel, highly sensitive, high-resolution spectrometer for surface analysis. The spectrometer is designed for Energy and Angle Resolved Ion Scattering Spectroscopy (EARISS). Only a few techniques are sensitive enough to study the outermost atomic layer of surfaces. One of these, Low-Energy Ion Scattering (LEIS), is discussed in chapter 2. Since LEIS is destructive, it is important to make very efficient use of the scattered ions, which makes it attractive to carry out energy- and angle-dependent measurements simultaneously (EARISS). (Auth.)

  13. Sensitivity analysis of surface ozone to emission controls in Beijing and its neighboring area during the 2008 Olympic Games

    Institute of Scientific and Technical Information of China (English)

    Yi Gao; Meigen Zhang

    2012-01-01

The regional air quality modeling system RAMS (Regional Atmospheric Modeling System)-CMAQ (Community Multi-scale Air Quality modeling system) is applied to analyze temporal and spatial variations in surface ozone concentration over Beijing and its surrounding region from July to October 2008. Comparison of simulated and observed meteorological elements and concentrations of nitrogen oxides (NOx) and ozone at one urban site and three rural sites during the Olympic Games shows that the model can generally reproduce the main observed features of wind, temperature and ozone, but NOx concentration is overestimated. Although ozone concentrations decreased during the Olympics, high ozone episodes occurred on 24 July and 24 August, with concentrations of 360 and 245 μg/m3 at the Aoyuncun site, respectively. Sensitivity tests with and without emission controls show that emission controls could reduce ozone concentration in the afternoon, when ozone concentration was highest, but increase it at night and in the morning. The evolution of the weather system during the ozone episodes (24 July and 24 August) indicates that hot, dry air and a stable weak pressure field intensified the production of ozone and allowed it to accumulate. Process analysis at the urban and rural sites shows that under the favorable weather conditions of 24 August, horizontal transport was the main contributor at the rural site, and pollution from higher layers was transported down to the surface layer. On 24 July, as wind velocity was lower, the impact of transport at the rural site was not obvious.

  14. Sensitivity analysis of surface ozone to emission controls in Beijing and its neighboring area during the 2008 Olympic Games.

    Science.gov (United States)

    Gao, Yi; Zhang, Meigen

    2012-01-01

The regional air quality modeling system RAMS (Regional Atmospheric Modeling System)-CMAQ (Community Multi-scale Air Quality modeling system) is applied to analyze temporal and spatial variations in surface ozone concentration over Beijing and its surrounding region from July to October 2008. Comparison of simulated and observed meteorological elements and concentrations of nitrogen oxides (NOx) and ozone at one urban site and three rural sites during the Olympic Games shows that the model can generally reproduce the main observed features of wind, temperature and ozone, but NOx concentration is overestimated. Although ozone concentrations decreased during the Olympics, high ozone episodes occurred on 24 July and 24 August, with concentrations of 360 and 245 microg/m3 at the Aoyuncun site, respectively. Sensitivity tests with and without emission controls show that emission controls could reduce ozone concentration in the afternoon, when ozone concentration was highest, but increase it at night and in the morning. The evolution of the weather system during the ozone episodes (24 July and 24 August) indicates that hot, dry air and a stable weak pressure field intensified the production of ozone and allowed it to accumulate. Process analysis at the urban and rural sites shows that under the favorable weather conditions of 24 August, horizontal transport was the main contributor at the rural site, and pollution from higher layers was transported down to the surface layer. On 24 July, as wind velocity was lower, the impact of transport at the rural site was not obvious.

  15. Sensitivity Analysis of b-factor in Microwave Emission Model for Soil Moisture Retrieval: A Case Study for SMAP Mission

    Directory of Open Access Journals (Sweden)

    Dugwon Seo

    2010-05-01

Sensitivity analysis is critically needed to better understand the microwave emission model for soil moisture retrieval using passive microwave remote sensing data. The vegetation b-factor, along with vegetation water content and surface characteristics, has a significant impact on model predictions. This study evaluates the sensitivity of the b-factor, which is a function of vegetation type. The analysis is carried out using Passive and Active L- and S-band (PALS) airborne sensor data and field soil moisture measured during the Southern Great Plains experiment (SGP99). The results show that the relative sensitivity of the b-factor is 86% under wet soil conditions and 88% under highly vegetated conditions, compared to the sensitivity to soil moisture. The b-factor is thus found to be more sensitive than vegetation water content, surface roughness and surface temperature; the effect of the b-factor on the microwave emission is therefore fairly large under certain conditions. Understanding the dependence of the b-factor on soil and vegetation is important in studying the soil moisture retrieval algorithm, which can lead to potential improvements in model development for the Soil Moisture Active-Passive (SMAP) mission.
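
The normalized (relative) sensitivity used in such studies can be sketched on a simplified tau-omega emission model; the model form, incidence angle, single-scattering albedo and parameter values below are illustrative assumptions, not the PALS/SGP99 configuration:

```python
import numpy as np

def tb(b, vwc, e_soil, ts=290.0, theta=np.deg2rad(40.0), omega=0.05):
    """Simplified tau-omega brightness temperature (illustrative values only)."""
    gamma = np.exp(-b * vwc / np.cos(theta))       # vegetation transmissivity
    return ts * (e_soil * gamma
                 + (1.0 - omega) * (1.0 - gamma) * (1.0 + (1.0 - e_soil) * gamma))

def rel_sens(f, p0, i, h=1e-4):
    """Normalized sensitivity (dTB/dp)*(p/TB) by central differences."""
    up, dn = list(p0), list(p0)
    up[i] += h * p0[i]
    dn[i] -= h * p0[i]
    return (f(*up) - f(*dn)) / (2.0 * h * p0[i]) * p0[i] / f(*p0)

p0 = (0.12, 3.0, 0.7)     # b-factor, vegetation water content, soil emissivity
for name, i in [("b", 0), ("vwc", 1), ("e_soil", 2)]:
    print(name, rel_sens(tb, p0, i))
```

Because b and vegetation water content enter only through the optical depth tau = b*VWC/cos(theta), their normalized sensitivities coincide in this toy model.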

  16. Multimodal Nonlinear Optical Imaging for Sensitive Detection of Multiple Pharmaceutical Solid-State Forms and Surface Transformations.

    Science.gov (United States)

    Novakovic, Dunja; Saarinen, Jukka; Rojalin, Tatu; Antikainen, Osmo; Fraser-Miller, Sara J; Laaksonen, Timo; Peltonen, Leena; Isomäki, Antti; Strachan, Clare J

    2017-11-07

    Two nonlinear imaging modalities, coherent anti-Stokes Raman scattering (CARS) and sum-frequency generation (SFG), were successfully combined for sensitive multimodal imaging of multiple solid-state forms and their changes on drug tablet surfaces. Two imaging approaches were used and compared: (i) hyperspectral CARS combined with principal component analysis (PCA) and SFG imaging and (ii) simultaneous narrowband CARS and SFG imaging. Three different solid-state forms of indomethacin-the crystalline gamma and alpha forms, as well as the amorphous form-were clearly distinguished using both approaches. Simultaneous narrowband CARS and SFG imaging was faster, but hyperspectral CARS and SFG imaging has the potential to be applied to a wider variety of more complex samples. These methodologies were further used to follow crystallization of indomethacin on tablet surfaces under two storage conditions: 30 °C/23% RH and 30 °C/75% RH. Imaging with (sub)micron resolution showed that the approach allowed detection of very early stage surface crystallization. The surfaces progressively crystallized to predominantly (but not exclusively) the gamma form at lower humidity and the alpha form at higher humidity. Overall, this study suggests that multimodal nonlinear imaging is a highly sensitive, solid-state (and chemically) specific, rapid, and versatile imaging technique for understanding and hence controlling (surface) solid-state forms and their complex changes in pharmaceuticals.

  17. Object-sensitive Type Analysis of PHP

    NARCIS (Netherlands)

    Van der Hoek, Henk Erik; Hage, J

    2015-01-01

    In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the

  18. Imaging of surface plasmon polariton interference using phase-sensitive scanning tunneling microscope

    NARCIS (Netherlands)

    Jose, J.; Segerink, Franciscus B.; Korterik, Jeroen P.; Herek, Jennifer Lynn; Offerhaus, Herman L.

    2011-01-01

We report on surface plasmon polariton interference generated via a 'buried' gold grating and imaged using a phase-sensitive photon scanning tunneling microscope (PSTM). The phase-resolved PSTM measurements unravel the complex surface plasmon polariton interference fields at the gold-air interface.

  19. An explanation for the different climate sensitivities of land and ocean surfaces based on the diurnal cycle

    Directory of Open Access Journals (Sweden)

    A. Kleidon

    2017-09-01

Observations and climate model simulations consistently show a higher climate sensitivity of land surfaces compared to ocean surfaces. Here we show that this difference in temperature sensitivity can be explained by the different means by which the diurnal variation in solar radiation is buffered. While ocean surfaces buffer the diurnal variations by heat storage changes below the surface, land surfaces buffer it mostly by heat storage changes above the surface in the lower atmosphere that are reflected in the diurnal growth of a convective boundary layer. Storage changes below the surface allow the ocean surface–atmosphere system to maintain turbulent fluxes over day and night, while the land surface–atmosphere system maintains turbulent fluxes only during the daytime hours, when the surface is heated by absorption of solar radiation. This shorter duration of turbulent fluxes on land results in a greater sensitivity of the land surface–atmosphere system to changes in the greenhouse forcing because nighttime temperatures are shaped by radiative exchange only, which are more sensitive to changes in greenhouse forcing. We use a simple, analytic energy balance model of the surface–atmosphere system in which turbulent fluxes are constrained by the maximum power limit to estimate the effects of these different means to buffer the diurnal cycle on the resulting temperature sensitivities. The model predicts that land surfaces have a 50 % greater climate sensitivity than ocean surfaces, and that the nighttime temperatures on land increase about twice as much as daytime temperatures because of the absence of turbulent fluxes at night. Both predictions compare very well with observations and CMIP5 climate model simulations. Hence, the greater climate sensitivity of land surfaces can be explained by its buffering of diurnal variations in solar radiation in the lower atmosphere.

  20. High-precision drop shape analysis on inclining flat surfaces: introduction and comparison of this special method with commercial contact angle analysis.

    Science.gov (United States)

    Schmitt, Michael; Heib, Florian

    2013-10-07

Drop shape analysis is one of the most important and most frequently used methods to characterise surfaces in the scientific and industrial communities. An especially large number of studies that use contact angle measurements to analyse surfaces are characterised by incorrect or misdirected conclusions, such as the determination of surface energies from poorly performed contact angle measurements. In particular, characterisations that lead to correlations between the contact angle and other effects must be critically validated in some publications. A large number of works exist concerning the theoretical and thermodynamic aspects of two- and tri-phase boundaries. The linkage between theory and experiment is generally performed by axisymmetric drop shape analysis, that is, fitting theoretical drop profiles, obtained by numerical integration, to a number of points of the drop meniscus (approximately 20). These methods work very well for axisymmetric profiles such as those obtained by pendant drop measurements, but in the case of a sessile drop on a real surface, additional unknown and poorly understood surface-dependent effects must be considered. We present a special experimental and practical investigation as another way to transition from experiment to theory. This procedure was developed to be especially sensitive to small variations in the dependence of the dynamic contact angle on the surface; as a result, it allows the properties of the surface to be monitored with higher precision and sensitivity. In this context, water drops on a (111) silicon wafer are dynamically measured by video recording while inclining the surface, which results in a sequence of non-axisymmetric drops. The drop profiles are analysed by commercial software and by the developed high-precision drop shape analysis presented here. In addition to the enhanced sensitivity for contact angle determination, this analysis technique, in
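
The step from a recorded drop profile to a contact angle can be illustrated with an algebraic (Kasa) circle fit to edge points; the synthetic circular-cap profile below is an assumption for illustration, not the paper's high-precision procedure:

```python
import numpy as np

# Synthetic sessile-drop edge points on a circular cap with a 70 degree
# contact angle (radius 1, baseline at y = 0) -- a stand-in for profile
# points extracted from a video frame.
theta_c = np.deg2rad(70.0)
t = np.linspace(np.pi / 2 - theta_c, np.pi / 2, 200)   # contact line to apex
x = np.cos(t)
y = np.sin(t) - np.cos(theta_c)

# Algebraic (Kasa) circle fit: minimize ||(x**2 + y**2) - (2ax + 2by + c)||.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
r = np.sqrt(c + a**2 + b**2)

# The contact angle follows from where the fitted circle meets the baseline.
contact_angle = np.degrees(np.arccos(-b / r))
print(contact_angle)   # recovers the 70 degree input
```

A real implementation must first detect the drop edge and baseline in each video frame; once those points are available, any circle or polynomial fit like the one above yields the angle at the contact line.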

  1. Enhancement in sensitivity of graphene-based zinc oxide assisted bimetallic surface plasmon resonance (SPR) biosensor

    Science.gov (United States)

    Kumar, Rajeev; Kushwaha, Angad S.; Srivastava, Monika; Mishra, H.; Srivastava, S. K.

    2018-03-01

In the present communication, a highly sensitive surface plasmon resonance (SPR) biosensor in the Kretschmann configuration, with the layer sequence prism/zinc oxide/silver/gold/graphene/biomolecules (ss-DNA), is presented. The proposed configuration was optimized by keeping the thicknesses of the zinc oxide (32 nm), silver (32 nm), graphene (0.34 nm) and biomolecule (100 nm) layers constant for different values of gold layer thickness (1, 3 and 5 nm). The sensitivity of the proposed SPR biosensor has been demonstrated for a number of design parameters, such as gold layer thickness, number of graphene layers, refractive index of the biomolecules and thickness of the biomolecule layer. The SPR biosensor with optimized geometry has greater sensitivity (66 deg/RIU) than the conventional (52 deg/RIU) as well as other graphene-based (53.2 deg/RIU) SPR biosensors. The effect of zinc oxide layer thickness on the sensitivity has also been analysed: the sensitivity increases significantly with increasing zinc oxide layer thickness, so the zinc oxide intermediate layer plays an important role in improving the sensitivity of the biosensor. The sensitivity also increases with the number of graphene layers (up to nine layers).
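
The quoted deg/RIU figures can be reproduced in spirit from the basic Kretschmann resonance condition; the sketch below assumes a bare metal film with illustrative optical constants and ignores the ZnO, gold and graphene layers of the proposed structure:

```python
import numpy as np

# Simplified Kretschmann resonance condition for a bare metal film (the real
# device adds ZnO, Au and graphene layers, which shift these numbers):
# n_prism * sin(theta_res) = Re( sqrt(eps_m * n_s**2 / (eps_m + n_s**2)) )
n_prism = 1.515                  # BK7-like prism (assumed)
eps_m = -18.0 + 0.5j             # approximate gold permittivity near 633 nm

def spr_angle(n_s):
    k = np.sqrt(eps_m * n_s**2 / (eps_m + n_s**2))   # plasmon wavevector / k0
    return np.degrees(np.arcsin(k.real / n_prism))

# Angular sensitivity S = d(theta_res)/d(n_s) in deg/RIU by finite differences.
n_s, dn = 1.33, 1e-4
S = (spr_angle(n_s + dn) - spr_angle(n_s)) / dn
print(spr_angle(n_s), S)
```

Full multilayer designs are analysed with transfer-matrix reflectance calculations rather than this closed-form two-medium dispersion relation, which is why the toy number differs from the reported 52-66 deg/RIU range.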

  2. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

    2016-04-08

We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx; 2 involving dry deposition of SOA precursor gases; and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance.
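
The variance-based first-order indices at the heart of such an analysis can be sketched with the Saltelli pick-freeze estimator on a toy additive model (the study itself uses quasi-Monte Carlo sampling and a generalized linear model, which this sketch does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):                      # toy additive model, not the SOA simulator
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]

n, d = 100_000, 3
A, B = rng.random((n, d)), rng.random((n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# Saltelli pick-freeze: replace one column of A with B's and correlate.
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(fB * (model(ABi) - fA)) / var)
print(S)   # exact first-order indices are 1/5.25, 4/5.25, 0.25/5.25
```

For this additive model the indices sum to one; in the SOA study the on/off volatility parameter instead captures 65% of the variance, with the remainder split among emission and deposition parameters.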

  3. Ethical sensitivity in professional practice: concept analysis.

    Science.gov (United States)

    Weaver, Kathryn; Morse, Janice; Mitcham, Carl

    2008-06-01

    This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offers an inclusive view of ethical sensitivity that addresses some of the limitations with prior conceptualizations.

  4. Multitarget global sensitivity analysis of n-butanol combustion.

    Science.gov (United States)

    Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T

    2013-05-02

    A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis.

  5. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    International Nuclear Information System (INIS)

    Rinker, Jennifer M.

    2016-01-01

This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of error between the model and the training data and in terms of convergence. The Sobol SIs are calculated using the calibrated response surface, and their convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance contributions of the Kaimal length scale and the nonstationarity parameter are negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity. (paper)
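
The two-step workflow (calibrate a polynomial response surface, then compute sensitivities cheaply on the surrogate) can be sketched as follows; the "simulator", parameter ranges and quadratic basis are assumptions for illustration, not the WindPACT model or the paper's parameterization:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "expensive" simulator: a load metric vs. mean wind speed U and
# turbulence intensity TI (an assumed toy form, not the WindPACT 5 MW model).
def simulator(u, ti):
    return 0.8 * u**2 * ti + 0.1 * u

def design(u, ti):                 # quadratic polynomial basis
    return np.column_stack([np.ones_like(u), u, ti, u * ti,
                            u**2, ti**2, u**2 * ti])

# 1) Calibrate the response surface on a small training set.
u_t = rng.uniform(5.0, 25.0, 300)
ti_t = rng.uniform(0.05, 0.25, 300)
coef, *_ = np.linalg.lstsq(design(u_t, ti_t), simulator(u_t, ti_t), rcond=None)
surrogate = lambda u, ti: design(u, ti) @ coef

# 2) Monte Carlo on the cheap surrogate: a first-order Sobol index is the
# covariance of outputs that share one input while the other is re-sampled.
n = 200_000
u1, t1 = rng.uniform(5.0, 25.0, n), rng.uniform(0.05, 0.25, n)
u2, t2 = rng.uniform(5.0, 25.0, n), rng.uniform(0.05, 0.25, n)
y = surrogate(u1, t1)
S_u = (np.mean(y * surrogate(u1, t2)) - y.mean() ** 2) / y.var()
S_ti = (np.mean(y * surrogate(u2, t1)) - y.mean() ** 2) / y.var()
print(S_u, S_ti)    # mean wind speed dominates this toy load metric
```

The point of the surrogate is that step 2 would be unaffordable against the real aeroelastic simulator but costs only a matrix multiply here.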

  6. Surface Grafting of Ru(II) Diazonium-Based Sensitizers on Metal Oxides Enhances Alkaline Stability for Solar Energy Conversion.

    Science.gov (United States)

    Bangle, Rachel; Sampaio, Renato N; Troian-Gautier, Ludovic; Meyer, Gerald J

    2018-01-24

The electrografting of [Ru(ttt)(tpy-C6H4-N2+)]3+, where "ttt" is 4,4',4″-tri-tert-butyl-2,2':6',2″-terpyridine, was investigated on several wide band gap metal oxide surfaces (TiO2, SnO2, ZrO2, ZnO, In2O3:Sn) and compared to structurally analogous sensitizers that differed only by the anchoring group, i.e., -PO3H2 and -COOH. An optimized procedure for diazonium electrografting to semiconductor metal oxides is presented that allowed surface coverages that ranged between 4.7 × 10^-8 and 10.6 × 10^-8 mol cm^-2 depending on the nature of the metal oxide. FTIR analysis showed the disappearance of the diazonium stretch at 2266 cm^-1 after electrografting. XPS analysis revealed a characteristic peak of Ru 3d at 285 eV as well as a peak at 531.6 eV that was attributed to O 1s in Ti-O-C bonds. Photocurrents were measured to assess electron injection efficiency of these modified surfaces. The electrografted sensitizers exhibited excellent stability across a range of pHs spanning from 1 to 14, where classical binding groups such as carboxylic and phosphonic derivatives were hydrolyzed.

  7. Techniques for sensitivity analysis of SYVAC results

    International Nuclear Information System (INIS)

    Prust, J.O.

    1985-05-01

Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, the subjective probability distributions assigned to the input parameters, and the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their application to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends developing a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)
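
The Kruskal-Wallis idea mentioned above, testing whether the output distribution shifts across groups defined by an input parameter, can be sketched with a toy stand-in for SYVAC dose output (the recommended extension to parameter interactions is not shown):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for SYVAC output: dose responds strongly to p1, not at all to p2.
p1, p2 = rng.random(300), rng.random(300)
dose = np.exp(3.0 * p1) + rng.normal(0.0, 0.1, 300)

def kruskal_wallis_h(param, out, bins=3):
    """Kruskal-Wallis H statistic for output groups binned by one parameter.

    Large H means the output distribution shifts across the parameter's
    range, i.e. the output is sensitive to that parameter. No tie correction
    is needed because the outputs here are continuous.
    """
    n = len(out)
    ranks = out.argsort().argsort() + 1.0      # 1-based ranks
    groups = (param * bins).astype(int)        # equal-width bins on [0, 1)
    h = sum((groups == g).sum() * ranks[groups == g].mean() ** 2
            for g in range(bins))
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)

print(kruskal_wallis_h(p1, dose))   # large H: dose is sensitive to p1
print(kruskal_wallis_h(p2, dose))   # small H: no detectable effect of p2
```

Being rank-based, the test needs no assumption about the dose distribution, which is why it suits highly skewed risk outputs.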

  8. Sensitivity analysis of a PWR pressurizer

    International Nuclear Information System (INIS)

    Bruel, Renata Nunes

    1997-01-01

A sensitivity analysis with respect to the parameters and the modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by varying the key parameters and theoretical modelling choices, which generated a comprehensive matrix of the influences of each change analysed. The major influences observed were the flashing phenomenon and steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)

  9. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

In this analysis, we study the sensitivity due to variations in the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.

  10. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    Science.gov (United States)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
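
The GLUE procedure described above reduces to a few steps: Monte Carlo sample the parameter, score each sample with a likelihood measure, discard non-behavioral sets, and form likelihood-weighted uncertainty bounds. The one-parameter exponential "model" below is an assumed stand-in for CSLM and its Kd parameter:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 5.0, 20)

# Assumed one-parameter toy model (exponential decay), standing in for CSLM
# with Kd as the dominant parameter; "observations" are the model plus noise.
kd_true = 0.6
obs = np.exp(-kd_true * t) + rng.normal(0.0, 0.02, t.size)

def model(kd):
    return np.exp(-kd * t)

# GLUE: Monte Carlo sample the parameter, score each set with a likelihood
# measure (here 1/SSE), and declare the best 10% "behavioral".
kd = rng.uniform(0.1, 2.0, 5_000)
sse = np.array([np.sum((model(k) - obs) ** 2) for k in kd])
lik = np.where(sse < np.quantile(sse, 0.1), 1.0 / sse, 0.0)
lik /= lik.sum()

# Likelihood-weighted posterior mean and 90% uncertainty band for Kd.
mean_kd = np.sum(lik * kd)
order = np.argsort(kd)
cdf = np.cumsum(lik[order])
lo, hi = kd[order][np.searchsorted(cdf, [0.05, 0.95])]
print(mean_kd, lo, hi)
```

Equifinality appears here as the width of the band: many Kd values fit the noisy observations nearly equally well, exactly the situation the paper reports for the turbulent-flux parameters.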

  11. Surface Acoustic Wave Monitor for Deposition and Analysis of Ultra-Thin Films

    Science.gov (United States)

    Hines, Jacqueline H. (Inventor)

    2015-01-01

    A surface acoustic wave (SAW) based thin film deposition monitor device and system for monitoring the deposition of ultra-thin films and nanomaterials and the analysis thereof is characterized by acoustic wave device embodiments that include differential delay line device designs, and which can optionally have integral reference devices fabricated on the same substrate as the sensing device, or on a separate device in thermal contact with the film monitoring/analysis device, in order to provide inherently temperature compensated measurements. These deposition monitor and analysis devices can include inherent temperature compensation, higher sensitivity to surface interactions than quartz crystal microbalance (QCM) devices, and the ability to operate at extreme temperatures.

  12. Self-assembled two-dimensional gold nanoparticle film for sensitive nontargeted analysis of food additives with surface-enhanced Raman spectroscopy.

    Science.gov (United States)

    Wu, Yiping; Yu, Wenfang; Yang, Benhong; Li, Pan

    2018-05-15

    The use of different food additives and their active metabolites has been found to cause serious problems to human health. Thus, considering the potential effects on human health, developing a sensitive and credible analytical method for different foods is important. Herein, the application of solvent-driven self-assembled Au nanoparticles (Au NPs) for the rapid and sensitive detection of food additives in different commercial products is reported. The assembled substrates are highly sensitive and exhibit excellent uniformity and reproducibility because of uniformly distributed and high-density hot spots. The sensitive analyses of ciprofloxacin (CF), diethylhexyl phthalate (DEHP), tartrazine and azodicarbonamide at the 0.1 ppm level using this surface-enhanced Raman spectroscopy (SERS) substrate are given, and the results show that Au NP arrays can serve as efficient SERS substrates for the detection of food additives. More importantly, SERS spectra of several commercial liquors and sweet drinks are obtained to evaluate the addition of illegal additives. This SERS active platform can be used as an effective strategy in the detection of prohibited additives in food.

  13. pH-sensitive diamond field-effect transistors (FETs) with directly aminated channel surface

    International Nuclear Information System (INIS)

    Song, Kwang-Soup; Nakamura, Yusuke; Sasaki, Yuichi; Degawa, Munenori; Yang, Jung-Hoon; Kawarada, Hiroshi

    2006-01-01

    We have introduced pH sensors fabricated on diamond thin films through modification of the surface-terminating atoms. We directly modified the diamond surface termination from hydrogen to amine or oxygen by ultraviolet (UV) irradiation under ammonia gas. The amine-site coverage quantified from X-ray photoelectron spectroscopy (XPS) spectra is 26% (2.6 x 10^14 cm^-2) after 8 h of UV irradiation, and the coverage depends on the UV irradiation time. The directly aminated diamond surface is stable under long-term exposure to air and electrolyte solution. We fabricated diamond solution-gate field-effect transistors (SGFETs) without insulating layers on the channel surface. The amine-modified diamond SGFETs are sensitive to pH (45 mV/pH) over a wide range from pH 2 to 12, and their sensitivity depends on the density of binding sites on the channel surface, which is set by the UV irradiation time

  14. Sensitivity Analysis of Viscoelastic Structures

    Directory of Open Access Journals (Sweden)

    A.M.G. de Lima

    2006-01-01

    Full Text Available In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constraining damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.
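
    The finite-difference check mentioned at the end of the abstract can be sketched on a toy single-degree-of-freedom system. The stiffness/mass dependence on layer thickness below is an illustrative assumption, not the paper's sandwich-plate formulation:

```python
import numpy as np

def frf(omega, h, c=5.0):
    """Toy single-DOF frequency response function H(omega; h).
    Stiffness and mass are assumed to scale with a layer thickness h
    (illustrative stand-in for the multi-layer plate model)."""
    k = 1.0e4 * h        # stiffness grows with thickness (assumed)
    m = 2.0 * h          # mass grows with thickness (assumed)
    return 1.0 / (k - m * omega**2 + 1j * c * omega)

def frf_sensitivity(omega, h, dh=1e-6):
    """First-order central-difference approximation of dH/dh."""
    return (frf(omega, h + dh) - frf(omega, h - dh)) / (2.0 * dh)

omega = np.linspace(1.0, 200.0, 400)
dH = frf_sensitivity(omega, h=0.01)
```

    For this rational toy FRF the analytic derivative is available in closed form, so the finite-difference sensitivity can be verified directly, which mirrors the comparison the paper performs for its analytical sensitivities.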

  15. High sensitivity, high surface area Enzyme-linked Immunosorbent Assay (ELISA).

    Science.gov (United States)

    Singh, Harpal; Morita, Takahiro; Suzuki, Yuma; Shimojima, Masayuki; Le Van, An; Sugamata, Masami; Yang, Ming

    2015-01-01

    Enzyme-linked immunosorbent assays (ELISA) are considered the gold standard in the demonstration of various immunological reactions with an application in the detection of infectious diseases such as during outbreaks or in patient care. This study aimed to produce an ELISA-based diagnostic with an increased sensitivity of detection compared to the standard 96-well method in the immunologic diagnosis of infectious diseases. A '3DStack' was developed using readily available, low-cost fabrication technologies, namely nanoimprinting and press stamping, with a surface area 4 to 6 times greater than that of 96-well plates. This was achieved by stacking multiple nanoimprinted polymer sheets. The flow of analytes between the sheets was enhanced by rotating the 3DStack and confirmed by Finite-Element (FE) simulation. An Immunoglobulin G (IgG) ELISA for the detection of antibodies in human serum raised against Rubella virus was performed for validation. A sensitivity improvement of up to 1.9-fold was observed using the 3DStack compared to the standard method. The increased surface area of the 3DStack developed using nanoimprinting and press stamping technologies, and the flow pattern between sheets generated by rotating the 3DStack, were potential contributors to a more sensitive ELISA-based diagnostic device.

  16. The sensitivity of biological finite element models to the resolution of surface geometry: a case study of crocodilian crania

    Directory of Open Access Journals (Sweden)

    Matthew R. McCurry

    2015-06-01

    Full Text Available The reliability of finite element analysis (FEA) in biomechanical investigations depends upon understanding the influence of model assumptions. In producing finite element models, surface mesh resolution is influenced by the resolution of input geometry, and influences the resolution of the ensuing solid mesh used for numerical analysis. Despite a large number of studies incorporating sensitivity studies of the effects of solid mesh resolution, there has not yet been any investigation into the effect of surface mesh resolution upon results in a comparative context. Here we use a dataset of crocodile crania to examine the effects of surface resolution on FEA results in a comparative context. Seven high-resolution surface meshes were each down-sampled to varying degrees while keeping the resulting number of solid elements constant. These models were then subjected to bite and shake load cases using finite element analysis. The results show that incremental decreases in surface resolution can result in fluctuations in strain magnitudes, but that it is possible to obtain stable results using lower-resolution surface meshes in a comparative FEA study. As surface mesh resolution links input geometry with the resulting solid mesh, the implication of these results is that low-resolution input geometry and solid meshes may provide valid results in a comparative context.

  17. Global sensitivity analysis using emulators, with an example analysis of large fire plumes based on FDS simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kelsey, Adrian [Health and Safety Laboratory, Harpur Hill, Buxton (United Kingdom)

    2015-12-15

    Laboratory (SNL) in 2009. At the largest LNG release rate the flames did not cover the entire area of the LNG spill; this behaviour had not been observed in previous large-scale experiments. The height of the flames was also greater than expected from previous large-scale tests. One possible explanation for the observed behaviour is that in this very large-scale release the speed at which air and fuel vapour was drawn into the fire exceeded the flame speed, so the flames could not propagate upwind to ignite the whole surface of the LNG pool. Fuel vapour from the unignited region, drawn into the fire, may also account for the greater flame height. A global sensitivity analysis allows the influence of uncertain parameters on the quantities of interest to be examined. This publication and the work it describes were funded by the Health and Safety Executive (HSE). Its contents, including any opinions and/or conclusions expressed, are those of the authors alone and do not necessarily reflect HSE policy.

  18. Extended forward sensitivity analysis of one-dimensional isothermal flow

    International Nuclear Information System (INIS)

    Johnson, M.; Zhao, H.

    2013-01-01

    Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of time step with other physical parameters, the simulation is allowed to run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification with much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
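
    The forward sensitivity idea can be sketched on a scalar decay equation standing in for the 1-D flow model (the equation, parameter and step sizes are illustrative assumptions): the sensitivity s = du/da obeys its own ODE, obtained by differentiating the governing equation, and is integrated alongside u with the same time-discretization scheme:

```python
import numpy as np

def integrate_with_sensitivity(a, u0=1.0, dt=1e-3, nsteps=1000):
    """Explicit-Euler integration of du/dt = -a*u together with its
    forward sensitivity s = du/da, which obeys ds/dt = -a*s - u.
    (Toy stand-in for the 1-D isothermal flow equations.)"""
    u, s = u0, 0.0
    for _ in range(nsteps):
        u_new = u + dt * (-a * u)
        s_new = s + dt * (-a * s - u)   # same scheme applied to the sensitivity ODE
        u, s = u_new, s_new
    return u, s

u, s = integrate_with_sensitivity(a=2.0)
```

    Because the sensitivity equation is discretized with the same scheme as the solution, s is exactly the derivative of the discrete solution with respect to the parameter, which is what allows such sensitivities to be checked against perturbation (finite-difference) runs.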

  19. Sensitivity analysis of EQ3

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.

    1990-01-01

    A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs

  20. Modelling pesticides volatilisation in greenhouses: Sensitivity analysis of a modified PEARL model.

    Science.gov (United States)

    Houbraken, Michael; Doan Ngoc, Kim; van den Berg, Frederik; Spanoghe, Pieter

    2017-12-01

    The application of the existing PEARL model was extended to include estimations of the concentration of crop protection products in greenhouse (indoor) air due to volatilisation from the plant surface. The model was modified to include the processes of ventilation of the greenhouse air to the outside atmosphere and transformation in the air. A sensitivity analysis of the model was performed by varying selected input parameters on a one-by-one basis and comparing the model outputs with the outputs of the reference scenarios. The sensitivity analysis indicates that - in addition to vapour pressure - the model had the highest ratio of variation for the ventilation rate and the thickness of the boundary layer on the day of application. On the days after application, the competing processes of degradation and uptake into the plant become more important. Copyright © 2017 Elsevier B.V. All rights reserved.
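
    The one-by-one variation procedure can be sketched with a toy indoor-air concentration model. The model form, parameter names and reference values below are illustrative assumptions, not the modified PEARL model:

```python
import math

def indoor_air_conc(vapour_pressure, ventilation_rate, boundary_layer, t=1.0):
    """Toy indoor-air concentration model (illustrative only, not PEARL):
    volatilisation flux ~ vapour_pressure / boundary_layer, removal by
    ventilation gives an exponential approach to steady state."""
    flux = vapour_pressure / boundary_layer
    return (flux / ventilation_rate) * (1.0 - math.exp(-ventilation_rate * t))

reference = dict(vapour_pressure=1e-3, ventilation_rate=2.0, boundary_layer=0.01)

def oat_ratios(model, ref, factor=1.1):
    """One-at-a-time sensitivity: vary each input by `factor`, keep the
    rest at reference, and report output / reference-output ratios."""
    base = model(**ref)
    ratios = {}
    for name in ref:
        perturbed = dict(ref, **{name: ref[name] * factor})
        ratios[name] = model(**perturbed) / base
    return ratios

ratios = oat_ratios(indoor_air_conc, reference)
```

    Ratios far from 1 flag influential inputs; in this toy model the output scales linearly with vapour pressure and inversely with boundary-layer thickness, while the ventilation rate has a damped, nonlinear effect.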

  1. Procedures for uncertainty and sensitivity analysis in repository performance assessment

    International Nuclear Information System (INIS)

    Poern, K.; Aakerlund, O.

    1985-10-01

    The objective of the project was mainly a literature study of available methods for the treatment of parameter uncertainty propagation and sensitivity aspects in complete models such as those concerning geologic disposal of radioactive waste. The study, which has run parallel with the development of a code package (PROPER) for computer-assisted analysis of functions, also aims at the choice of accurate, cost-effective methods for uncertainty and sensitivity analysis. Such a choice depends on several factors like the number of input parameters, the capacity of the model and the computer resources required to use the model. Two basic approaches are addressed in the report. In one of these the model of interest is directly simulated by an efficient sampling technique to generate an output distribution. Applying the other basic method the model is replaced by an approximating analytical response surface, which is then used in the sampling phase or in moment matching to generate the output distribution. Both approaches are illustrated by simple examples in the report. (author)
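
    The two basic approaches can be sketched side by side on a toy model (the model, design size and sample sizes are illustrative assumptions): direct Monte Carlo sampling of the model itself, versus fitting an analytical response surface on a small design and then sampling the cheap surrogate:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    """Toy stand-in for an expensive performance-assessment model."""
    return np.exp(0.3 * x1) + 0.5 * x2**2

# Approach 1: direct Monte Carlo simulation of the model.
x1, x2 = rng.normal(size=10_000), rng.normal(size=10_000)
direct = model(x1, x2)

# Approach 2: fit a quadratic response surface on a small design,
# then sample the cheap surrogate instead of the model.
d1, d2 = rng.normal(size=200), rng.normal(size=200)
A = np.column_stack([np.ones_like(d1), d1, d2, d1**2, d2**2, d1 * d2])
coef, *_ = np.linalg.lstsq(A, model(d1, d2), rcond=None)

S = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
surrogate = S @ coef
```

    The response-surface route needs only 200 model runs here instead of 10,000; the price is the approximation error of the surface, which is why the choice between the two approaches depends on model cost and smoothness.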

  2. SENSIT: a cross-section and design sensitivity and uncertainty analysis code

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE

  3. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
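
    The "basis set expansion - Sobol' indices" chain can be sketched with a toy two-mode time-series model. The modes, input distributions and the direct pick-freeze estimator are illustrative assumptions; the article instead evaluates the indices through a meta-model to keep the number of expensive runs small:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)

def toy_model(a, b):
    """Toy time-dependent output: a slow mode scaled by `a` and a fast
    mode scaled by `b` (illustrative stand-in for the landslide model)."""
    return np.outer(a, np.sin(2 * np.pi * t)) + np.outer(b, np.sin(4 * np.pi * t))

n = 2000
a = 2.0 * rng.normal(size=n)       # larger input variance -> dominant mode
b = rng.normal(size=n)
Y = toy_model(a, b)

# Basis set expansion: PCA (via SVD) of the ensemble of output curves.
Yc = Y - Y.mean(axis=0)
U, svals, Vt = np.linalg.svd(Yc, full_matrices=False)
explained = svals**2 / np.sum(svals**2)
scores = Yc @ Vt[:2].T             # projections on the two dominant modes

# First-order Sobol' index of input `a` for each mode via a pick-freeze
# estimator: re-run the model with the same `a` but resampled `b`.
Yp = toy_model(a, rng.normal(size=n)) - Y.mean(axis=0)
scores_p = Yp @ Vt[:2].T
S_a = [np.cov(scores[:, k], scores_p[:, k])[0, 1] / np.var(scores[:, k], ddof=1)
       for k in range(2)]
```

    In this construction the first mode is driven almost entirely by `a` and the second by `b`, so the per-mode Sobol' indices separate the influence of each input on each dominant mode of temporal behaviour.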

  4. Estimation and analysis of the sensitivity of monoenergetic electron radiography of composite materials with fluctuating composition

    International Nuclear Information System (INIS)

    Rudenko, V.N.; Yunda, N.T.

    1978-01-01

    A sensitivity analysis of the electron defectoscopy method for composite materials with fluctuating composition has been carried out. Quantitative evaluations of the testing sensitivity depending on inspection conditions have been obtained, and calculations of the instrumental error are shown. Based on numerical calculations, a comparison of errors between high-energy electron and X-ray testing has been carried out. It is shown that when testing composite materials with a surface density of up to 7-10 g/cm^2, the advantage of the electron defectoscopy method over the X-ray method is its higher sensitivity and lower instrumental error. The advantage of the electron defectoscopy method over the X-ray method with regard to sensitivity is greater when a light-atom component predominates in the composition. A monoenergetic electron beam from a betatron with an energy of up to 30 MeV should be used for testing materials with a surface density of up to 15 g/cm^2

  5. Sensitivity analysis for heat diffusion in a fin on a nuclear fuel element

    International Nuclear Information System (INIS)

    Tito, Max Werner de Carvalho

    2001-11-01

    Modern thermal systems, such as nuclear power plants, generally present growing complexity. The use of complex computational and mathematical tools is necessary to increase the efficiency of operations, reduce costs and maximize profits while maintaining the integrity of the components. Sensitivity calculations play an important role in this process, providing relevant information on the influence of variations or perturbations of the system parameters during operation. This technique, known as sensitivity analysis, makes it possible to understand the effects of the parameters, which is fundamental for project preparation and for the development of preventive and corrective measures for many pieces of modern engineering equipment. The sensitivity calculation methodology is generally based on the response surface technique (a graphical description of the functions of interest based on results obtained from varying the system parameters). This method has many disadvantages and is sometimes even impracticable, since many parameters can cause alterations or perturbations to the system and the model used to analyse it can be very complex as well. Perturbative methods are an appropriate practical solution to this problem, especially in the presence of complex equations, and considerably reduce the resulting computational time. The use of these methods becomes an essential tool to simplify the sensitivity analysis. In this dissertation, the differential perturbative method is applied to a heat conduction problem within a thermal system, made up of a one-dimensional circumferential fin on a nuclear fuel element. Fins are used to extend the thermal surfaces where convection occurs, thus increasing the heat transfer of many pieces of thermal equipment in order to obtain better results. The finned claddings are

  6. A sensitivity analysis of the WIPP disposal room model: Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Labreche, D.A.; Beikmann, M.A. [RE/SPEC, Inc., Albuquerque, NM (United States); Osnes, J.D. [RE/SPEC, Inc., Rapid City, SD (United States); Butcher, B.M. [Sandia National Labs., Albuquerque, NM (United States)

    1995-07-01

    The WIPP Disposal Room Model (DRM) is a numerical model with three major components -- constitutive models of TRU waste, crushed salt backfill, and intact halite -- and several secondary components, including air gap elements, slidelines, and assumptions on symmetry and geometry. A sensitivity analysis of the Disposal Room Model was initiated on two of the three major components (waste and backfill models) and on several secondary components as a group. The immediate goal of this component sensitivity analysis (Phase I) was to sort (rank) model parameters in terms of their relative importance to model response so that a Monte Carlo analysis on a reduced set of DRM parameters could be performed under Phase II. The goal of the Phase II analysis will be to develop a probabilistic definition of a disposal room porosity surface (porosity, gas volume, time) that could be used in WIPP Performance Assessment analyses. This report documents a literature survey which quantifies the relative importance of the secondary room components to room closure, a differential analysis of the creep consolidation model and definition of a follow-up Monte Carlo analysis of the model, and an analysis and refitting of the waste component data on which a volumetric plasticity model of TRU drum waste is based. A summary, evaluation of progress, and recommendations for future work conclude the report.

  7. Phase sensitive spectral domain interferometry for label free biomolecular interaction analysis and biosensing applications

    Science.gov (United States)

    Chirvi, Sajal

    Biomolecular interaction analysis (BIA) plays a vital role in a wide variety of fields, which include biomedical research, the pharmaceutical industry, medical diagnostics, and the biotechnology industry. Study and quantification of interactions between natural biomolecules (proteins, enzymes, DNA) and artificially synthesized molecules (drugs) is routinely done using various labeled and label-free BIA techniques. Labeled BIA techniques (chemiluminescence, fluorescence, radioactivity) suffer from steric hindrance of labels at the interaction site, difficulty of attaching labels to molecules, and higher cost and time of assay development. Label-free techniques with real-time detection capabilities have demonstrated advantages over traditional labeled techniques. The gold standard for label-free BIA is surface plasmon resonance (SPR), which detects and quantifies the changes in refractive index of the ligand-analyte complex molecule with high sensitivity. Although SPR is a highly sensitive BIA technique, it requires custom-made sensor chips and is not well suited for the highly multiplexed BIA required in high-throughput applications. Moreover, implementation of SPR on various biosensing platforms is limited. In this research work, spectral domain phase sensitive interferometry (SD-PSI) has been developed for label-free BIA and biosensing applications to address limitations of SPR and other label-free techniques. One distinct advantage of SD-PSI compared to other label-free techniques is that it does not require use of custom fabricated biosensor substrates. Laboratory-grade, off-the-shelf glass or plastic substrates of suitable thickness with proper surface functionalization are used as biosensor chips. SD-PSI is tested on four separate BIA and biosensing platforms, which include a multi-well plate, flow cell, fiber probe with integrated optics and fiber tip biosensor. Sensitivity of 33 ng/ml for anti-IgG is achieved using the multi-well platform. Principle of coherence multiplexing for multi

  8. Surface Sensitive Bolometer for the CUORE background reduction

    International Nuclear Information System (INIS)

    Pedretti, M.; Foggetta, L.; Giuliani, A.; Nones, C.; Sangiorgio, S.

    2005-01-01

    The most critical point of the CUORE Project [CUORE Proposal, see the web page: http://crio.mib.infn.it/wig] is the background level (BKGL) in the neutrinoless double beta decay (0νββ) region that is dominated by degraded particles coming from materials that face the detectors. Surface Sensitive Bolometers (SSBs) have been developed in order to reduce the BKGL by means of an active background discrimination. The principle of this technique and the first results obtained are briefly described in the following paper

  9. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
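
    The idea of ranking inputs with nonparametric smoothers can be sketched with a bin-average smoother standing in for LOESS (the test function, sample size and bin count are illustrative assumptions). Note that the first input enters through an even nonlinearity, so a linear-regression-based sensitivity measure would assign it almost no importance:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    """Toy model: x0 enters through an even nonlinearity, x1 linearly,
    x2 not at all (illustrative stand-in for a complex model)."""
    return np.sin(np.pi * x[:, 0]) ** 2 + 0.3 * x[:, 1]

n = 20_000
X = rng.uniform(-1.0, 1.0, size=(n, 3))
y = model(X)

def smoothed_r2(x, y, bins=50):
    """Fraction of Var(y) explained by a nonparametric smooth of y on x
    (bin-average smoother as a simple stand-in for LOESS)."""
    idx = np.digitize(x, np.linspace(x.min(), x.max(), bins + 1)[1:-1])
    cond_mean = np.array([y[idx == k].mean() for k in range(bins)])
    return np.var(cond_mean[idx]) / np.var(y)

r2 = [smoothed_r2(X[:, j], y) for j in range(3)]
```

    The smoothed variance fractions correctly rank x0 above x1 and give the inert x2 a near-zero score, while the raw linear correlation between x0 and y is essentially zero, illustrating why smoothing-based procedures can be more informative than linear or rank regression for nonlinear, nonmonotonic relationships.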

  10. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  11. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs

  12. Modification of rubber surface by UV surface grafting

    International Nuclear Information System (INIS)

    Shanmugharaj, A.M.; Kim, Jin Kuk; Ryu, Sung Hun

    2006-01-01

    The rubber surface is subjected to ultraviolet (UV) radiation in the presence of allylamine and the radiation sensitizer benzophenone (BP). Fourier transform infrared spectral studies reveal the presence of allylamine on the surface. The presence of irregular needle shapes on the surface, as observed in scanning electron micrographs, also confirms the polymerized allylamine on the surface. Allylamine coatings have been further confirmed by atomic force microscopy (AFM) analysis. Thermogravimetric analysis (TGA) reveals that the allylamine coating on the rubber surface lowers the thermal degradation rate. The contact angle between water and the rubber surface decreases for the modified rubber surface, confirming the surface modification due to UV surface grafting

  13. About the use of rank transformation in sensitivity analysis of model output

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Sobol', Ilya M

    1995-01-01

    Rank transformations are frequently employed in numerical experiments involving a computational model, especially in the context of sensitivity and uncertainty analyses. Response surface replacement and parameter screening are tasks which may benefit from a rank transformation. Ranks can cope with nonlinear (albeit monotonic) input-output distributions, allowing the use of linear regression techniques. Rank transformed statistics are more robust, and provide a useful solution in the presence of long tailed input and output distributions. As is known to practitioners, care must be employed when interpreting the results of such analyses, as any conclusion drawn using ranks does not translate easily to the original model. In the present note a heuristic approach is taken to explore, by way of practical examples, the effect of a rank transformation on the outcome of a sensitivity analysis. An attempt is made to identify trends, and to correlate these effects to a model taxonomy. Employing sensitivity indices, whereby the total variance of the model output is decomposed into a sum of terms of increasing dimensionality, we show that the main effect of the rank transformation is to increase the relative weight of the first order terms (the 'main effects'), at the expense of the 'interactions' and 'higher order interactions'. As a result the influence of those parameters which influence the output mostly by way of interactions may be overlooked in an analysis based on the ranks. This difficulty increases with the dimensionality of the problem, and may lead to the failure of a rank based sensitivity analysis. We suggest that the models can be ranked, with respect to the complexity of their input-output relationship, by means of an 'Association' index I_y. I_y may complement the usual model coefficient of determination R_y^2 as a measure of model complexity for the purpose of uncertainty and sensitivity analysis
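
    The effect of a rank transformation can be sketched on a monotonic, strongly nonlinear one-input model (the function and sample size are illustrative assumptions): linear regression on the raw data explains only part of the variance, while after rank transformation the monotonic relation becomes exactly linear in the ranks:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.uniform(size=n)
y = np.exp(6.0 * x)                 # monotonic but strongly nonlinear

def r_squared(inputs, target):
    """R^2 of an ordinary least-squares fit of target on the given inputs."""
    A = np.column_stack([np.ones(len(target))] + list(inputs))
    resid = target - A @ np.linalg.lstsq(A, target, rcond=None)[0]
    return 1.0 - resid.var() / target.var()

def ranks(v):
    """Rank transform: replace each value by its rank (0 .. n-1)."""
    return np.argsort(np.argsort(v)).astype(float)

raw_r2 = r_squared([x], y)                   # fit on raw values
rank_r2 = r_squared([ranks(x)], ranks(y))    # fit on rank-transformed data
```

    As the note cautions, the conclusion applies to the rank-transformed model: ranks reward monotonic main effects, so inputs acting mainly through interactions can still be under-weighted.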

  14. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to performing sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices and makes use of a meta-model. The paper also presents an application to a Large-Break Loss of Coolant Accident, LBLOCA, in the cold leg of a pressurized water reactor, PWR, addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)

  15. Sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.

    2014-01-01

    Life cycle assessments require many input parameters and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case studies. Two

  16. Highly sensitive nano-porous lattice biosensor based on localized surface plasmon resonance and interference.

    Science.gov (United States)

    Yeom, Se-Hyuk; Kim, Ok-Geun; Kang, Byoung-Ho; Kim, Kyu-Jin; Yuan, Heng; Kwon, Dae-Hyuk; Kim, Hak-Rin; Kang, Shin-Won

    2011-11-07

    We propose a design for a highly sensitive biosensor based on nanostructured anodized aluminum oxide (AAO) substrates. A gold-deposited AAO substrate exhibits both optical interference and localized surface plasmon resonance (LSPR). In our sensor, application of these disparate optical properties overcomes problems of limited sensitivity, selectivity, and dynamic range seen in similar biosensors. We fabricated uniform periodic nanopore lattice AAO templates by two-step anodizing and assessed their suitability for application in biosensors by characterizing the change in optical response on addition of biomolecules to the AAO template. To determine the suitability of such structures for biosensing applications, we immobilized a layer of C-reactive protein (CRP) antibody on a gold coating atop an AAO template. We then applied a CRP antigen (Ag) atop the immobilized antibody (Ab) layer. The shift in reflectance is interpreted as being caused by the change in refractive index with membrane thickness. Our results confirm that our proposed AAO-based biosensor is highly selective toward detection of CRP antigen, and can measure a change in CRP antigen concentration of 1 fg/ml. This method can provide a simple, fast, and sensitive analysis for protein detection in real-time.

  17. Sensitivity analysis for matched pair analysis of binary data: From worst case to average case analysis.

    Science.gov (United States)

    Hasegawa, Raiden; Small, Dylan

    2017-12-01

    In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.

  18. Sensitive determination of dopamine levels via surface-enhanced Raman scattering of Ag nanoparticle dimers.

    Science.gov (United States)

    Yu, Xiantong; He, XiaoXiao; Yang, Taiqun; Zhao, Litao; Chen, Qichen; Zhang, Sanjun; Chen, Jinquan; Xu, Jianhua

    2018-01-01

    Dopamine (DA) is an important neurotransmitter in the hypothalamus and pituitary gland, which exerts a direct influence on emotions in the mammalian midbrain. Additionally, the level of DA is closely related to important neurological diseases such as schizophrenia, Parkinson's disease, and Huntington's disease. In light of the important roles that DA plays in disease modulation, it is of considerable significance to develop a sensitive and reproducible approach for monitoring DA. The objective of this study was to develop an efficient approach to quantitatively monitor the level of DA using Ag nanoparticle (NP) dimers and enhanced Raman spectroscopy. Ag NP dimers were synthesized for the sensitive detection of DA via surface-enhanced Raman scattering (SERS). Citrate was used as both the capping agent of the NPs and the sensing agent for DA; it self-assembles on the surface of the Ag NP dimers, and DA reacts with the surface carboxyl group to form a stable amide bond. To improve accuracy and precision, the multiplicative effects model for surface-enhanced Raman spectroscopy was utilized to analyze the SERS assays. A low limit of detection (LOD) of 20 pM and a wide linear response range from 30 pM to 300 nM were obtained for quantitative DA detection. The SERS enhancement factor was theoretically estimated at approximately 10^7 by the discrete dipole approximation, and the adsorption energy of DA on the citrate-capped surface was estimated to be 256 kJ/mol using the Langmuir isotherm model. Density functional theory was used to simulate the spectral characteristics of the SERS spectra during the adsorption of DA on the surface of the Ag dimers.

  19. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high-order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which perturbations of all orders were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for the sensitivity analysis. The results of the calculation show that, in the case of the EK-10 fuel (low burn-up), the first-order sensitivity was found to be sufficient to achieve an accuracy of 1%, while in the case of the MTR-20 fuel (high burn-up) the fifth order was needed to provide 3% accuracy. A computer code, SENS, was developed to perform the required calculations.
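
    The qualitative finding, that a first-order perturbation suffices at low burn-up but higher Taylor orders are needed at high burn-up, can be reproduced on a toy one-nuclide depletion model (our own illustration; the parameter values are arbitrary, not those of the EK-10 or MTR-20 fuels):

```python
import math

def burn(N0, sigma, phi, t):
    """Exact depletion of a single nuclide: dN/dt = -sigma*phi*N."""
    return N0 * math.exp(-sigma * phi * t)

def taylor_burn(N0, sigma, dsigma, phi, t, order):
    """Taylor expansion of N(sigma+dsigma) about sigma, truncated at `order`;
    here d^k N / d sigma^k = (-phi*t)^k * N, so every term is analytic."""
    N = burn(N0, sigma, phi, t)
    return N * sum((-phi * t * dsigma) ** k / math.factorial(k)
                   for k in range(order + 1))

N0, phi, sigma, dsigma = 1.0, 1.0, 1.0, 0.3   # a 30% cross-section perturbation
errors = {}
for label, t in [("low burn-up", 0.3), ("high burn-up", 1.2)]:
    exact = burn(N0, sigma + dsigma, phi, t)
    for order in (1, 5):
        approx = taylor_burn(N0, sigma, dsigma, phi, t, order)
        errors[label, order] = abs(approx - exact) / exact
        print(f"{label}, order {order}: relative error {errors[label, order]:.1e}")
```

The first-order error stays well below 1% at low fluence but grows to several percent at high fluence, where the fifth-order expansion recovers the accuracy, mirroring the trend the abstract reports.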

  20. Assessment of wave propagation on surfaces of crystalline lens with phase sensitive optical coherence tomography

    International Nuclear Information System (INIS)

    Manapuram, R K; Larin, K V; Baranov, S A; Manne, V G R; Mashiatulla, M; Sudheendran, N; Aglyamov, S; Emelianov, S

    2011-01-01

    We propose a real-time technique based on a phase-sensitive swept-source optical coherence tomography (PhS-SSOCT) modality for noninvasive quantification of very small optical path length changes produced on the surface of a mouse crystalline lens. Propagation of submicron mechanical waves on the surface of the lens was induced by periodic mechanical stimulation. The obtained results demonstrate that the described method is capable of detecting minute damped vibrations with amplitudes as small as 30 nanometers on the lens surface; hence, PhS-SSOCT could potentially be used to assess the biomechanical properties of a crystalline lens with high accuracy and sensitivity.

  1. The observed sensitivity of the global hydrological cycle to changes in surface temperature

    International Nuclear Information System (INIS)

    Arkin, Phillip A; Janowiak, John; Smith, Thomas M; Sapiano, Mathew R P

    2010-01-01

    Climate models project large changes in global surface temperature in coming decades that are expected to be accompanied by significant changes in the global hydrological cycle. Validation of model simulations is essential to support their use in decision making, but observing the elements of the hydrological cycle is challenging, and model-independent global data sets exist only for precipitation. We compute the sensitivity of the global hydrological cycle to changes in surface temperature using available global precipitation data sets and compare the results against the sensitivities derived from model simulations of 20th century climate. The implications of the results for the global climate observing system are discussed.

  2. Benchmarking sensitivity of biophysical processes to leaf area changes in land surface models

    Science.gov (United States)

    Forzieri, Giovanni; Duveiller, Gregory; Georgievski, Goran; Li, Wei; Robestson, Eddy; Kautz, Markus; Lawrence, Peter; Ciais, Philippe; Pongratz, Julia; Sitch, Stephen; Wiltshire, Andy; Arneth, Almut; Cescatti, Alessandro

    2017-04-01

    Land surface models (LSMs) are widely applied as supporting tools for policy-relevant assessment of climate change and its impact on terrestrial ecosystems, yet knowledge of their performance skills in representing the sensitivity of biophysical processes to changes in vegetation density is still limited. This is particularly relevant in light of the substantial impacts on regional climate associated with the changes in leaf area index (LAI) following the observed global greening. Benchmarking LSMs on the sensitivity of the simulated processes to vegetation density is essential to reduce their uncertainty and improve the representation of these effects. Here we present a novel benchmark system to assess model capacity in reproducing land surface-atmosphere energy exchanges modulated by vegetation density. Through a collaborative effort of different modeling groups, a consistent set of land surface energy fluxes and LAI dynamics has been generated from multiple LSMs, including JSBACH, JULES, ORCHIDEE, CLM4.5 and LPJ-GUESS. Relationships of interannual variations of modeled surface fluxes to LAI changes have been analyzed at global scale across different climatological gradients and compared with satellite-based products. A set of scoring metrics has been used to assess the overall model performances and a detailed analysis in the climate space has been provided to diagnose possible model errors associated with background conditions. Results have enabled us to identify model-specific strengths and deficiencies. An overall best-performing model does not emerge from the analyses. However, the comparison with other models that work better under certain metrics and conditions indicates that improvements are expected to be potentially achievable. A general amplification of the biophysical processes mediated by vegetation is found across the different land surface schemes. Grasslands are characterized by an underestimated year-to-year variability of LAI in cold climates

  3. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure to numerical analysis and large-scale modelling sensitivity studies.
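
    The core idea behind a derivative-generating compiler such as GRESS, propagating values and derivatives together through a calculation, can be sketched with a minimal forward-mode "dual number" class (an illustrative analogue written by us, not GRESS itself):

```python
import math

class Dual:
    """Minimal forward-mode value carrying f and df/dx together, in the
    spirit of the derivative-propagating code a calculus compiler emits."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._wrap(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, other):
        o = self._wrap(other)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def exp(x):
    """Chain rule for the exponential: d/dx exp(u) = exp(u) * u'."""
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

# Sensitivity of y = x*exp(x) + 3x at x = 2, with no hand-coded derivative
x = Dual(2.0, 1.0)       # seed: dx/dx = 1
y = x * exp(x) + 3 * x
print(y.val, y.der)      # y.der equals (1 + x)*exp(x) + 3 at x = 2
```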

  4. Sensitive Detection of Biomolecules by Surface Enhanced Raman Scattering using Plant Leaves as Natural Substrates

    Directory of Open Access Journals (Sweden)

    Sharma Vipul

    2017-01-01

    Full Text Available Detection of biomolecules is highly important for biomedical and other biological applications. Although several methods exist for the detection of biomolecules, surface enhanced Raman scattering (SERS) has a unique role in greatly enhancing the sensitivity. In this work, we have demonstrated the use of natural plant leaves as facile, low-cost and eco-friendly SERS substrates for the sensitive detection of biomolecules. Specifically, we have investigated the influence of the surface topography of five different plant-leaf-based substrates, deposited with Au, on the SERS performance, using L-cysteine as a model biomolecule. In addition, we have also compared the effect of sputter deposition of an Au thin film with drop-cast deposition of Au nanoparticles on the leaf substrates. Our results indicate that L-cysteine could be detected with high sensitivity using these plant-leaf-based substrates, and the leaf possessing hierarchical micro/nanostructures on its surface shows higher SERS enhancement compared to a leaf having a near-planar surface. Furthermore, leaves with drop-cast Au nanoparticle clusters performed better than leaves sputter-deposited with a thin Au film.

  5. Sensitivity analysis of brain morphometry based on MRI-derived surface models

    Science.gov (United States)

    Klein, Gregory J.; Teng, Xia; Schoenemann, P. T.; Budinger, Thomas F.

    1998-07-01

    Quantification of brain structure is important for evaluating changes in brain size with growth and aging and for characterizing neurodegenerative disorders. Previous quantification efforts using ex vivo techniques suffered considerable error due to shrinkage of the cerebrum after extraction from the skull, deformation of slices during sectioning, and numerous other factors. In vivo imaging studies of brain anatomy avoid these problems and allow repeated studies following the progression of brain structure changes due to disease or natural processes. We have developed a methodology for obtaining triangular mesh models of the cortical surface from MRI brain datasets. The cortex is segmented from nonbrain tissue using a 2D region-growing technique combined with occasional manual edits. Once segmented, thresholding and image morphological operations (erosions and openings) are used to expose the regions between adjacent surfaces in deep cortical folds. A 2D region-following procedure is then used to find a set of contours outlining the cortical boundary on each slice. The contours on all slices are tiled together to form a closed triangular mesh model approximating the cortical surface. This model can be used for calculation of cortical surface area and volume, as well as other parameters of interest. Except for the initial segmentation of the cortex from the skull, the technique is automatic and requires only modest computation time on modern workstations. Though the use of image data avoids many of the pitfalls of ex vivo and sectioning techniques, our MRI-based technique is still vulnerable to errors that may impact the accuracy of estimated brain structure parameters. Potential inaccuracies include segmentation errors due to incorrect thresholding, missed deep sulcal surfaces, falsely segmented holes due to image noise, and surface tiling artifacts. The focus of this paper is the characterization of these errors and how they affect measurements of cortical surface
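
    Once a closed triangular mesh is available, the surface area and enclosed volume mentioned above follow directly from the triangle vertices. A minimal sketch (our own illustration, checked on a simple tetrahedron rather than a cortical mesh):

```python
import numpy as np

def mesh_area_volume(verts, faces):
    """Surface area and enclosed volume of a closed triangular mesh.
    `faces` indexes rows of `verts` and must be consistently outward-oriented
    for the signed volume (divergence theorem) to come out positive."""
    a, b, c = (verts[faces[:, i]] for i in range(3))
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()   # sum of triangle areas
    volume = np.einsum('ij,ij->i', a, np.cross(b, c)).sum() / 6.0
    return area, volume

# Check on a unit right tetrahedron: area = 1.5 + sqrt(3)/2, volume = 1/6
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
area, volume = mesh_area_volume(verts, faces)
print(area, volume)
```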

  6. Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis

    Science.gov (United States)

    Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi

    To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to specify the frontier intentionally, while the traditional DEA and Super-DEA determine the frontier computationally. The features of FA-DEA are as follows: (1) it provides the chance to exclude extra-influential DMUs (Decision Making Units) and to find extra-ordinal DMUs, and (2) it includes the functions of the traditional DEA and Super-DEA, so that it can deal with sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and its difference from the traditional DEA.
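
    The DEA efficiency computation that FA-DEA builds on can be illustrated with the standard input-oriented CCR multiplier model, solved as a linear program with SciPy. This is a generic textbook formulation sketched by us, not the FA-DEA algorithm itself:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (multiplier form):
    max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0."""
    n, m = X.shape                                    # n DMUs, m inputs
    s = Y.shape[1]                                    # s outputs
    c = np.concatenate([-Y[o], np.zeros(m)])          # minimize -u.y_o
    A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # normalization v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return -res.fun

# Three DMUs, one input, one output; output/input ratios are 1.0, 0.75, 0.5
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[2.0], [3.0], [4.0]])
scores = [ccr_efficiency(X, Y, o) for o in range(3)]
print(scores)   # the first DMU defines the frontier
```

In this toy data set the first DMU is efficient (score 1) and would be the kind of influential frontier unit that FA-DEA lets the decision maker include or exclude deliberately.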

  7. High Sensitivity and High Detection Specificity of Gold-Nanoparticle-Grafted Nanostructured Silicon Mass Spectrometry for Glucose Analysis.

    Science.gov (United States)

    Tsao, Chia-Wen; Yang, Zhi-Jie

    2015-10-14

    Desorption/ionization on silicon (DIOS) is a high-performance matrix-free mass spectrometry (MS) analysis method that involves using silicon nanostructures as a matrix for MS desorption/ionization. In this study, gold nanoparticles grafted onto a nanostructured silicon (AuNPs-nSi) surface were demonstrated as a DIOS-MS analysis approach with high sensitivity and high detection specificity for glucose detection. A glucose sample deposited on the AuNPs-nSi surface was directly catalyzed to negatively charged gluconic acid molecules on a single AuNPs-nSi chip for MS analysis. The AuNPs-nSi surface was fabricated using two electroless deposition steps and one electroless etching step. The effects of the electroless fabrication parameters on the glucose detection efficiency were evaluated. Practical application of AuNPs-nSi MS glucose analysis in urine samples was also demonstrated in this study.

  8. Sensitivity analysis in optimization and reliability problems

    International Nuclear Information System (INIS)

    Castillo, Enrique; Minguez, Roberto; Castillo, Carmen

    2008-01-01

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods

  9. Sensitivity analysis in optimization and reliability problems

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es

    2008-12-15

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.

  10. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
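
    For the free-vibration eigenvalue problems mentioned above, the sensitivity derivative of an eigenvalue with respect to a design parameter has a classical closed form. A sketch on a two-mass spring chain (our own example, not from the paper, cross-checked against a finite difference):

```python
import numpy as np
from scipy.linalg import eigh

def eig_sensitivity(K, M, dK, dM, mode=0):
    """d(lambda)/dp for the generalized problem K*phi = lambda*M*phi:
    d(lambda)/dp = phi^T (dK/dp - lambda*dM/dp) phi, with phi normalized
    so that phi^T M phi = 1 (eigh already returns M-orthonormal vectors)."""
    lam, phi = eigh(K, M)
    v = phi[:, mode]
    return v @ (dK - lam[mode] * dM) @ v

# Two-mass spring chain; sensitivity of the first eigenvalue to stiffness k2
k1, k2, m1, m2 = 3.0, 2.0, 1.0, 1.0
K = np.array([[k1 + k2, -k2], [-k2, k2]])
M = np.diag([m1, m2])
dK_dk2 = np.array([[1.0, -1.0], [-1.0, 1.0]])

analytic = eig_sensitivity(K, M, dK_dk2, np.zeros((2, 2)))

# Finite-difference cross-check of the analytic derivative
h = 1e-6
fd = (eigh(K + h * dK_dk2, M, eigvals_only=True)[0]
      - eigh(K, M, eigvals_only=True)[0]) / h
print(analytic, fd)
```

The analytic formula needs only one eigensolution, which is the efficiency argument for computing sensitivity derivatives as part of a reanalysis rather than by repeated full solves.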

  11. In-vacuum scattered light reduction with black cupric oxide surfaces for sensitive fluorescence detection.

    Science.gov (United States)

    Norrgard, E B; Sitaraman, N; Barry, J F; McCarron, D J; Steinecker, M H; DeMille, D

    2016-05-01

    We demonstrate a simple and easy method for producing low-reflectivity surfaces that are ultra-high vacuum compatible, may be baked to high temperatures, and are easily applied even on complex surface geometries. Black cupric oxide (CuO) surfaces are chemically grown in minutes on any copper surface, allowing for low-cost, rapid prototyping, and production. The reflective properties are measured to be comparable to commercially available products for creating optically black surfaces. We describe a vacuum apparatus which uses multiple blackened copper surfaces for sensitive, low-background detection of molecules using laser-induced fluorescence.

  12. SENSITIVITY OF BODY SWAY PARAMETERS DURING QUIET STANDING TO MANIPULATION OF SUPPORT SURFACE SIZE

    Directory of Open Access Journals (Sweden)

    Sarabon Nejc

    2010-09-01

    Full Text Available The centre of pressure (COP) movement during stance maintenance on a stable surface is commonly used to describe and evaluate static balance. The aim of our study was to test the sensitivity of individual COP parameters to different stance positions, which were used to introduce size-specific changes in the support surface. Twenty-nine subjects participated in the study. They carried out three 60-second repetitions of each of the five balance tasks (parallel stance, semi-tandem stance, tandem stance, contra-tandem stance, single-leg stance). Using the force plate, the monitored parameters included the total COP distance, the distance covered in the antero-posterior and medio-lateral directions, the maximum oscillation amplitude in the antero-posterior and medio-lateral directions, the total frequency of oscillation, as well as the frequency of oscillation in the antero-posterior and medio-lateral directions. The parameters which describe the total COP distance were the most sensitive to changes in the balance task, whereas the frequency of oscillation proved to be sensitive to a slightly lesser extent. Reductions in the support surface size in each of the directions resulted in proportional changes of the COP movement in the antero-posterior and medio-lateral directions. The frequency of oscillation did not increase evenly with the increase in the difficulty of the balance task, but reached a certain value above which it did not increase. Our study revealed the monitored COP parameters to be sensitive to support surface size manipulations. The results of the study provide an important source for clinical and research use of body sway measurements.
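
    The distance and amplitude parameters monitored in this study can be computed directly from a sampled COP trace. A sketch on a synthetic trace (the signal shape, amplitudes, and sampling rate are assumptions for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 100.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)                # one 60-second repetition

# Synthetic COP trace in mm: slow sway plus noise, AP and ML channels
ap = 2.0 * np.sin(2 * np.pi * 0.3 * t) + 0.3 * rng.standard_normal(t.size)
ml = 1.0 * np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)

def path_length(*channels):
    """Distance travelled by the COP over the trial (sum of step lengths)."""
    steps = np.vstack([np.diff(c) for c in channels])
    return float(np.linalg.norm(steps, axis=0).sum())

total_dist = path_length(ap, ml)     # total COP distance
ap_dist = path_length(ap)            # distance covered in the AP direction
ml_dist = path_length(ml)            # distance covered in the ML direction
ap_amp = float(ap.max() - ap.min())  # maximum oscillation amplitude, AP
print(total_dist, ap_dist, ml_dist, ap_amp)
```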

  13. Thin film surface reconstruction analysis

    Energy Technology Data Exchange (ETDEWEB)

    Imperatori, P [CNR, Monterotondo Stazione, Rome (Italy). Istituto di Chimica dei materiali

    1996-09-01

    The study of the atomic structure of surfaces and interfaces is a fundamental step in the understanding and development of new materials. Among the several surface-sensitive techniques employed to characterise atomic arrangements, grazing incidence x-ray diffraction (GIXD) is one of the most powerful. With a simple data treatment, based on the kinematical theory, and using the classical methods of x-ray bulk structure determination, it yields the positions of atoms at a surface or an interface and the atomic displacements of subsurface layers for a complete determination of the structure. In this paper the main features of the technique are briefly reviewed and selected examples of its application to semiconductor and metal surfaces are discussed.

  14. The role of sensitivity analysis in assessing uncertainty

    International Nuclear Information System (INIS)

    Crick, M.J.; Hill, M.D.

    1987-01-01

    Outside the specialist world of those carrying out performance assessments, considerable confusion has arisen about the meanings of sensitivity analysis and uncertainty analysis. In this paper we attempt to reduce this confusion. We then go on to review approaches to sensitivity analysis within the context of assessing uncertainty, and to outline the types of test available to identify sensitive parameters, together with their advantages and disadvantages. The views expressed in this paper are those of the authors; they have not been formally endorsed by the National Radiological Protection Board and should not be interpreted as Board advice.

  15. Assessing uncertainty and sensitivity of model parameterizations and parameters in WRF affecting simulated surface fluxes and land-atmosphere coupling over the Amazon region

    Science.gov (United States)

    Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.

    2016-12-01

    This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and first-order effects dominate compared to the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for

  16. Rapid and sensitive detection of synthetic cannabinoids AMB-FUBINACA and α-PVP using surface enhanced Raman scattering (SERS)

    Science.gov (United States)

    Islam, Syed K.; Cheng, Yin Pak; Birke, Ronald L.; Green, Omar; Kubic, Thomas; Lombardi, John R.

    2018-04-01

    The application of surface enhanced Raman scattering (SERS) is reported as a fast and sensitive analytical method for the trace detection of the two most commonly known synthetic cannabinoids, AMB-FUBINACA and alpha-pyrrolidinovalerophenone (α-PVP). AMB-FUBINACA and α-PVP are two of the most dangerous synthetic cannabinoids and have been reported to cause numerous deaths in the United States. While instruments such as GC-MS and LC-MS have traditionally been the analytical tools for the detection of these synthetic drugs, SERS has recently been gaining ground in their analysis due to its sensitivity in trace analysis and its effectiveness as a rapid method of detection. The present study shows a limit of detection at picomolar concentrations for AMB-FUBINACA, while for α-PVP the limit of detection is at nanomolar concentrations.

  17. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  18. Surface immobilized antibody orientation determined using ToF-SIMS and multivariate analysis.

    Science.gov (United States)

    Welch, Nicholas G; Madiona, Robert M T; Payten, Thomas B; Easton, Christopher D; Pontes-Braz, Luisa; Brack, Narelle; Scoble, Judith A; Muir, Benjamin W; Pigram, Paul J

    2017-06-01

    Antibody orientation at solid-phase interfaces plays a critical role in the sensitive detection of biomolecules during immunoassays. Correctly oriented antibodies with solution-facing antigen binding regions have improved antigen capture compared to their randomly oriented counterparts. Direct characterization of oriented proteins with surface analysis methods still remains a challenge; however, surface-sensitive techniques such as Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS) provide information-rich data that can be used to probe antibody orientation. Diethylene glycol dimethyl ether plasma polymers (DGpp) functionalized with chromium (DGpp+Cr) have improved immunoassay performance that is indicative of preferential antibody orientation. Herein, ToF-SIMS data from proteolytic fragments of anti-EGFR antibody bound to DGpp and DGpp+Cr are used to construct artificial neural network (ANN) and principal component analysis (PCA) models indicative of correctly oriented systems. Whole-antibody (IgG) samples tested against each of the models indicated preferential antibody orientation on DGpp+Cr. Cross-referencing the ANN and PCA models yielded 20 mass fragments associated with the F(ab')2 region, representing correct orientation, and 23 mass fragments associated with the Fc region, representing incorrect orientation. Mass fragments were then compared to amino acid fragments and the amino acid composition in the F(ab')2 and Fc regions. A ratio of the sum of the ToF-SIMS ion intensities from the F(ab')2 fragments to the Fc fragments demonstrated a 50% increase in intensity for IgG on DGpp+Cr as compared to DGpp. The systematic data analysis methodology employed herein offers a new approach for the investigation of antibody orientation applicable to a range of substrates. Controlled orientation of antibodies at solid phases is critical for maximizing antigen detection in biosensors and immunoassays. Surface-sensitive techniques (such as ToF-SIMS), capable of direct

  19. Probabilistic sensitivity analysis in health economics.

    Science.gov (United States)

    Baio, Gianluca; Dawid, A Philip

    2015-12-01

    Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.
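
    A probabilistic sensitivity analysis of the kind discussed here is often summarized by the probability that a treatment is cost-effective at a given willingness-to-pay threshold. A minimal Monte Carlo sketch with hypothetical cost and effectiveness distributions (all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical comparison of a new treatment against standard care:
# sample the uncertain incremental cost and incremental effectiveness
d_cost = rng.normal(1500.0, 400.0, n)   # incremental cost (currency units)
d_qaly = rng.normal(0.10, 0.04, n)      # incremental effectiveness (QALYs)

# For each willingness-to-pay threshold k, estimate the probability that
# the incremental net benefit k*dE - dC is positive
ks = (10_000, 20_000, 40_000)
ps = [float(np.mean(k * d_qaly - d_cost > 0)) for k in ks]
for k, p in zip(ks, ps):
    print(f"P(cost-effective | k = {k}) = {p:.2f}")
```

Plotting these probabilities against the threshold gives the cost-effectiveness acceptability curve commonly reported in such evaluations.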

  20. TOLERANCE SENSITIVITY ANALYSIS: THIRTY YEARS LATER

    Directory of Open Access Journals (Sweden)

    Richard E. Wendell

    2010-12-01

    Full Text Available Tolerance sensitivity analysis was conceived in 1980 as a pragmatic approach to effectively characterize a parametric region over which objective function coefficients and right-hand-side terms in linear programming could vary simultaneously and independently while maintaining the same optimal basis. As originally proposed, the tolerance region corresponds to the maximum percentage by which coefficients or terms could vary from their estimated values. Over the last thirty years the original results have been extended in a number of ways and applied in a variety of applications. This paper is a critical review of tolerance sensitivity analysis, including extensions and applications.

  1. Sensitivity analysis for missing data in regulatory submissions.

    Science.gov (United States)

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  2. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    Science.gov (United States)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which input data most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a development of the model outputs in a
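Local sensitivity indices of the kind described above linearize the model around a reference parameter set, commonly as the normalized derivative (p_i / y) * (dy/dp_i) estimated by central finite differences. The sketch below is a generic illustration only, with a hypothetical `local_sensitivity` helper and a toy function standing in for the ice sheet model.

```python
def local_sensitivity(f, p, i, rel_step=0.01):
    # Normalized local index (p_i / y) * (dy/dp_i), via central differences.
    h = rel_step * p[i] if p[i] != 0 else rel_step
    up = p[:i] + [p[i] + h] + p[i + 1:]
    down = p[:i] + [p[i] - h] + p[i + 1:]
    derivative = (f(up) - f(down)) / (2 * h)
    return p[i] * derivative / f(p)

# Toy model y = p0^2 * p1: the normalized index is exactly 2 for p0 and 1 for p1.
y = lambda p: p[0] ** 2 * p[1]
```

Because the index is evaluated at a single reference point, interactions and non-linear effects away from that point are invisible, which is exactly the limitation the global variance-based index addresses.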

  3. Experimental parameters for quantitative surface analysis by medium energy ion scattering, ch. 1

    International Nuclear Information System (INIS)

    Turkenburg, W.C.; Kersten, H.H.; Colenbrander, B.G.; Jongh, A.P. de; Saris, F.W.

    1976-01-01

    A new UHV chamber for surface and surface layer analysis by collision spectroscopy of backscattered ions at medium energies is described. Experimental parameters such as energy, angular and depth resolution, crystal alignment, and background pressure are discussed. Formulae based on the use of an electrostatic energy analyser show that the analysis can be quantitative. Effects of beam-induced build-up of a hydrocarbon layer, sputter cleaning, and creation of radiation damage have been investigated for Cu (110) and Ni (110). Detection sensitivity for carbon, oxygen, and sulfur on Cu and Ni has been found to be 0.2, 0.1, and 0.03 of a monolayer, respectively.

  4. Risk and sensitivity analysis in relation to external events

    International Nuclear Information System (INIS)

    Alzbutas, R.; Urbonas, R.; Augutis, J.

    2001-01-01

    This paper presents a risk and sensitivity analysis of external event impacts on safe operation in general, and on the Ignalina Nuclear Power Plant safety systems in particular. The analysis is based on deterministic and probabilistic assumptions and assessment of the external hazards. Real statistical data are used, as well as initial external event simulation. Preliminary screening criteria are applied. The analysis of external event impact on safe NPP operation, assessment of event occurrence, sensitivity analysis, and recommendations for safety improvements are performed for the investigated external hazards. Such events as aircraft crash, extreme rains and winds, forest fire, and flying turbine parts are analysed. Models are developed and probabilities are calculated. As an example of sensitivity analysis, the model of aircraft impact is presented. The sensitivity analysis takes into account the uncertainty introduced by an external event and its model. Even when the external events analysis shows rather limited danger, the sensitivity analysis can identify the causes with the highest influence. Such possible future variations can be significant for safety levels and risk-based decisions. Calculations show that external events cannot significantly influence the safety level of Ignalina NPP operation; however, event occurrence and propagation can be substantially uncertain. (author)

  5. Analysis of hepatitis B surface antigen (HBsAg) using high-sensitivity HBsAg assays in hepatitis B virus carriers in whom HBsAg seroclearance was confirmed by conventional assays.

    Science.gov (United States)

    Ozeki, Itaru; Nakajima, Tomoaki; Suii, Hirokazu; Tatsumi, Ryoji; Yamaguchi, Masakatsu; Kimura, Mutsuumi; Arakawa, Tomohiro; Kuwata, Yasuaki; Ohmura, Takumi; Hige, Shuhei; Karino, Yoshiyasu; Toyota, Joji

    2018-02-01

    We investigated the utility of high-sensitivity hepatitis B surface antigen (HBsAg) assays compared with conventional HBsAg assays. Using serum samples from 114 hepatitis B virus (HBV) carriers in whom HBsAg seroclearance was confirmed by conventional HBsAg assays (cut-off value, 0.05 IU/mL), the amount of HBsAg was re-examined by high-sensitivity HBsAg assays (cut-off value, 0.005 IU/mL). Cases negative for HBsAg in both assays were defined as consistent cases, and cases positive for HBsAg in the high-sensitivity HBsAg assay only were defined as discrepant cases. There were 55 (48.2%) discrepant cases, and the range of HBsAg titers determined by high-sensitivity HBsAg assays was 0.005-0.056 IU/mL. Multivariate analysis showed that the presence of nucleos(t)ide analog therapy, liver cirrhosis, and negative anti-HBs contributed to the discrepancies between the two assays. Cumulative anti-HBs positivity rates among discrepant cases were 12.7%, 17.2%, 38.8%, and 43.9% at baseline, 1 year, 3 years, and 5 years, respectively, whereas the corresponding rates among consistent cases were 50.8%, 56.0%, 61.7%, and 68.0%, respectively. Hepatitis B virus DNA negativity rates were 56.4% and 81.4% at baseline, 51.3% and 83.3% at 1 year, and 36.8% and 95.7% at 3 years, among discrepant and consistent cases, respectively. Hepatitis B surface antigen reversion was observed only in discrepant cases. Re-examination by high-sensitivity HBsAg assays revealed that HBsAg was positive in approximately 50% of cases. Cumulative anti-HBs seroconversion rates and HBV-DNA seroclearance rates were lower in these cases, suggesting a population at risk for HBsAg reversion. © 2017 The Japan Society of Hepatology.

  6. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.
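The idea of generating derivatives mechanically, as a computer-calculus compiler does for FORTRAN source, can be sketched with forward-mode automatic differentiation, in which every value carries its derivative through the same arithmetic. The minimal Python sketch below is illustrative only and is not GRESS output; the `Dual` class and `sensitivity` helper are hypothetical names for the example.

```python
class Dual:
    # Minimal forward-mode automatic differentiation: each number carries
    # its value and its derivative through ordinary arithmetic.
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._coerce(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        other = self._coerce(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def sensitivity(f, x):
    # Derivative df/dx at x, exact to machine precision (no step size).
    return f(Dual(x, 1.0)).der
```

For example, `sensitivity(lambda x: 3 * x * x + 2 * x + 1, 2.0)` evaluates the derivative 6x + 2 at x = 2 without any finite-difference truncation error.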

  7. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  8. Highly sensitive BTX detection using surface functionalized QCM sensor

    Energy Technology Data Exchange (ETDEWEB)

    Bozkurt, Asuman Aşıkoğlu; Özdemir, Okan; Altındal, Ahmet, E-mail: altindal@yildiz.edu.tr [Department of Physics, Yildiz Technical University, Davutpasa, 34210 Istanbul (Turkey)

    2016-03-25

    A novel organic compound was designed and successfully synthesized for the fabrication of QCM-based sensors to detect low concentrations of BTX gases in indoor air. The effect of long-range electron orbital delocalization on the BTX vapour sensing properties of azo-bridged Pc-based chemiresistor-type sensors has also been investigated in this work. The sensing behaviour of the film for the online detection of volatile organic solvent vapors was investigated by utilizing an AT-cut quartz crystal resonator. It was observed that the adsorption of the target molecules on the coating surface causes a reversible negative frequency shift of the resonator. Thus, a variety of solvent vapors can be detected by using the phthalocyanine film as a sensitive coating, with sensitivity in the ppm range and response times on the order of several seconds, depending on the molecular structure of the organic solvent.

  9. From analysis to surface

    DEFF Research Database (Denmark)

    Bemman, Brian; Meredith, David

    In recent years, a significant body of research has focused on developing algorithms for computing analyses of musical works automatically from encodings of these works' surfaces [3,4,7,10,11]. The quality of the output of such analysis algorithms is typically evaluated by comparing it with a "ground truth" analysis of the same music produced by a human expert (see, in particular, [5]). In this paper, we explore the problem of generating an encoding of the musical surface of a work automatically from a systematic encoding of an analysis. The ability to do this depends on one having an effective (i.e., computable), correct and complete description of some aspect of the structure of the music. Generating the surface structure of a piece from an analysis in this manner serves as a proof of the analysis' correctness, effectiveness and completeness. We present a reductive analysis...

  10. Uncertainty Quantification and Regional Sensitivity Analysis of Snow-related Parameters in the Canadian LAnd Surface Scheme (CLASS)

    Science.gov (United States)

    Badawy, B.; Fletcher, C. G.

    2017-12-01

    The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values, and hence of their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
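The permutation-importance idea used above is model-agnostic: shuffle one input column fed to an emulator and measure how much its prediction error grows. The sketch below illustrates only the mechanics, with a hypothetical `permutation_importance` helper and a toy surrogate in place of the SVR emulator and the random-forest machinery described in the record.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    # Model-agnostic importance: mean increase in squared error after
    # shuffling one input column at a time.
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    base = mse(X)
    scores = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            shuffled = [row[:j] + [col[i]] + row[j + 1:]
                        for i, row in enumerate(X)]
            deltas.append(mse(shuffled) - base)
        scores.append(sum(deltas) / n_repeats)
    return scores

# Toy emulator: output depends strongly on x0 and not at all on x1.
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(500)]
y = [3.0 * x0 for x0, _ in X]
scores = permutation_importance(lambda r: 3.0 * r[0], X, y)
```

Here `scores[0]` is large while `scores[1]` is zero, reproducing the intuition that shuffling an influential input degrades the emulator while shuffling an irrelevant one does nothing.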

  11. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. 
In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the

  12. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Directory of Open Access Journals (Sweden)

    Georgios Arampatzis

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of

  13. Sensitivity of surface temperature to radiative forcing by contrail cirrus in a radiative-mixing model

    Directory of Open Access Journals (Sweden)

    U. Schumann

    2017-11-01

    Earth's surface temperature sensitivity to radiative forcing (RF) by contrail cirrus and the related RF efficacy relative to CO2 are investigated in a one-dimensional idealized model of the atmosphere. The model includes energy transport by shortwave (SW) and longwave (LW) radiation and by mixing in an otherwise fixed reference atmosphere (no other feedbacks). Mixing includes convective adjustment and turbulent diffusion, where the latter is related to the vertical component of mixing by large-scale eddies. The conceptual study shows that the surface temperature sensitivity to given contrail RF depends strongly on the timescales of energy transport by mixing and radiation. The timescales are derived for steady layered heating (ghost forcing) and for a transient contrail cirrus case. The radiative timescales are shortest at the surface and shorter in the troposphere than in the mid-stratosphere. Without mixing, a large part of the energy induced into the upper troposphere by radiation due to contrails or similar disturbances gets lost to space before it can contribute to surface warming. Because of the different radiative forcing at the surface and at top of atmosphere (TOA) and different radiative heating rate profiles in the troposphere, the local surface temperature sensitivity to stratosphere-adjusted RF is larger for SW than for LW contrail forcing. Without mixing, the surface energy budget is more important for surface warming than the TOA budget. Hence, surface warming by contrails is smaller than suggested by the net RF at TOA. For zero mixing, cooling by contrails cannot be excluded. This may in part explain low efficacy values for contrails found in previous global circulation model studies. Possible implications of this study are discussed. Since the results of this study are model dependent, they should be tested with a comprehensive climate model in the future.

  14. Sensitivity of point scale surface runoff predictions to rainfall resolution

    Directory of Open Access Journals (Sweden)

    A. J. Hearman

    2007-01-01

    averaged rainfall under these soil and rainfall conditions and predictions of larger scale phenomena such as hillslope runoff and runon. It offers insight into how rainfall resolution can affect predicted amounts of water entering the soil and thus soil water storage and drainage, possibly changing our understanding of the ecological functioning of the system or predictions of agri-chemical leaching. The application of this sensitivity analysis to different rainfall regions in Western Australia showed that locations in the tropics with higher intensity rainfalls are more likely to have differences in infiltration excess predictions with different rainfall resolutions and that a general understanding of the prevailing rainfall conditions and the soil's infiltration capacity can help in deciding whether high rainfall resolutions (below 1 h) are required for accurate surface runoff predictions.

  15. The role of sensitivity analysis in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Hirschberg, S.; Knochenhauer, M.

    1987-01-01

    The paper describes several items suitable for close examination by means of application of sensitivity analysis, when performing a level 1 PSA. Sensitivity analyses are performed with respect to: (1) boundary conditions, (2) operator actions, and (3) treatment of common cause failures (CCFs). The items of main interest are identified continuously in the course of performing a PSA, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PSA study (ASEA-ATOM BWR 75). It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (orig./HP)

  16. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
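The bootstrap convergence check described above can be prototyped on a toy function with a known sensitivity ordering. The sketch below is illustrative only (hypothetical helper names, a linear test function standing in for Hymod/HBV/SWAT): it estimates first-order Sobol' indices with a Saltelli-style estimator and bootstraps the sample rows to measure how often the parameter ranking is reproduced.

```python
import random

K = 3  # number of input factors

def model(x):
    # Toy linear function with known ordering of first-order indices:
    # S1 = 16/21, S2 = 4/21, S3 = 1/21 for independent U(0,1) inputs.
    return 4.0 * x[0] + 2.0 * x[1] + 1.0 * x[2]

def first_order_indices(fA, fB, fAB, rows):
    # Saltelli-style estimator: S_i = E[fB * (fAB_i - fA)] / Var(f).
    n = len(rows)
    vals = [fA[r] for r in rows] + [fB[r] for r in rows]
    mean = sum(vals) / (2 * n)
    var = sum((v - mean) ** 2 for v in vals) / (2 * n)
    return [sum(fB[r] * (fAB[i][r] - fA[r]) for r in rows) / (n * var)
            for i in range(K)]

def analyse(n=4096, n_boot=50, seed=1):
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(K)] for _ in range(n)]
    B = [[rng.random() for _ in range(K)] for _ in range(n)]
    fA = [model(r) for r in A]
    fB = [model(r) for r in B]
    # fAB[i]: matrix A with column i taken from B.
    fAB = [[model(A[r][:i] + [B[r][i]] + A[r][i + 1:]) for r in range(n)]
           for i in range(K)]
    S = first_order_indices(fA, fB, fAB, range(n))
    rank = sorted(range(K), key=lambda i: -S[i])
    # Bootstrap the rows and record how often the ranking is reproduced.
    agree = 0
    for _ in range(n_boot):
        rows = [rng.randrange(n) for _ in range(n)]
        Sb = first_order_indices(fA, fB, fAB, rows)
        if sorted(range(K), key=lambda i: -Sb[i]) == rank:
            agree += 1
    return S, rank, agree / n_boot
```

A ranking-agreement fraction near 1.0 across bootstrap replicates indicates ranking convergence even when the index values themselves still carry sampling noise, which is the paper's central observation.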

  17. Sensitivity analysis of Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    Directory of Open Access Journals (Sweden)

    A. P. Jacquin

    2009-01-01

    This paper is concerned with the sensitivity analysis of the model parameters of the Takagi-Sugeno-Kang fuzzy rainfall-runoff models previously developed by the authors. These models are classified in two types of fuzzy models, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis and Sobol's variance decomposition. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of several measures of goodness of fit, assessing the model performance from different points of view. These measures include the Nash-Sutcliffe criteria, volumetric errors and peak errors. The results show that the sensitivity of the model parameters depends on both the catchment type and the measure used to assess the model performance.

  18. Trace drug analysis by surface-enhanced Raman spectroscopy

    Science.gov (United States)

    Farquharson, Stuart; Lee, Vincent Y.

    2000-12-01

    Drug overdose involves more than 10 percent of emergency room (ER) cases, and a method to rapidly identify and quantify the abused drug is critical to the ability of the ER physician to administer the appropriate care. To this end, we have been developing a surface-enhanced Raman (SER) active material capable of detecting target drugs at physiological concentrations in urine. The SER-active material consists of a metal-doped sol-gel that provides not only a millionfold increase in sensitivity but also reproducible measurements. The porous silica network offers a unique environment for stabilizing SER-active metal particles, and the high surface area increases the interaction between the analyte and metal particles. The sol-gel has been coated on the inside walls of glass sample vials, such that urine specimens may simply be introduced for analysis. Here we present the surface-enhanced Raman spectra of a series of barbiturates, actual urine specimens, and a drug-spiked urine specimen. The utility of pH adjustment to suppress dominant biochemicals associated with urine is also presented.

  19. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which uses a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies the sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  20. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-12-01

    Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which uses a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies the sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  1. Cones fabricated by 3D nanoimprint lithography for highly sensitive surface enhanced Raman spectroscopy

    International Nuclear Information System (INIS)

    Wu Wei; Hu Min; Ou Fungsuong; Li Zhiyong; Williams, R Stanley

    2010-01-01

    We demonstrated a cost-effective and deterministic method of patterning 3D cone arrays over a large area by using nanoimprint lithography (NIL). Cones with tip radius of less than 10 nm were successfully duplicated onto the UV-curable imprint resist materials from the silicon cone templates. Such cone structures were shown to be a versatile platform for developing reliable, highly sensitive surface enhanced Raman spectroscopy (SERS) substrates. In contrast to the silicon nanocones, the SERS substrates based on the Au coated cones made by the NIL offered significant improvement of the SERS signal. A further improvement of the SERS signal was observed when the polymer cones were imprinted onto a reflective metallic mirror surface. A sub-zeptomole detection sensitivity for a model molecule, trans-1,2-bis(4-pyridyl)-ethylene (BPE), on the Au coated NIL cone surfaces was achieved.

  2. Sensitivity Analysis of the Agricultural Policy/Environmental eXtender (APEX) for Phosphorus Loads in Tile-Drained Landscapes.

    Science.gov (United States)

    Ford, W; King, K; Williams, M; Williams, J; Fausey, N

    2015-07-01

    Numerical modeling is an economical and feasible approach for quantifying the effects of best management practices on dissolved reactive phosphorus (DRP) loadings from agricultural fields. However, tools that simulate both surface and subsurface DRP pathways are limited and have not been robustly evaluated in tile-drained landscapes. The objectives of this study were to test the ability of the Agricultural Policy/Environmental eXtender (APEX), a widely used field-scale model, to simulate surface and tile P loadings over management, hydrologic, biologic, tile, and soil gradients and to better understand the behavior of P delivery at the edge-of-field in tile-drained midwestern landscapes. To do this, a global, variance-based sensitivity analysis was performed, and model outputs were compared with measured P loads obtained from 14 surface and subsurface edge-of-field sites across central and northwestern Ohio. Results of the sensitivity analysis showed that response variables for DRP were highly sensitive to coupled interactions between presumed important parameters, suggesting nonlinearity of DRP delivery at the edge-of-field. Comparison of model results to edge-of-field data showcased the ability of APEX to simulate surface and subsurface runoff and the associated DRP loading at monthly to annual timescales; however, some high DRP concentrations and fluxes were not reflected in the model, suggesting the presence of preferential flow. Results from this study provide new insights into baseline tile DRP loadings that exceed thresholds for algal proliferation. Further, negative feedbacks between surface and subsurface DRP delivery suggest caution is needed when implementing DRP-based best management practices designed for a specific flow pathway. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  3. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  4. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    Science.gov (United States)

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol’, to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
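
The Sobol’ decomposition described above can be estimated with plain Monte Carlo. The sketch below is a generic Saltelli-style estimator of first-order indices, not the VarroaPop analysis itself; the additive test function and sample size are illustrative assumptions.

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol' indices
    for a model of d independent U(0, 1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]        # column i taken from B, the rest from A
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Additive test function: the exact first-order indices are 0.8 and 0.2
S = sobol_first_order(lambda x: 2.0 * x[:, 0] + x[:, 1], d=2)
```

Second-order indices can be obtained the same way from additional mixed matrices; the records above used the full Sobol’ machinery for that.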

  5. Position-sensitive radiation monitoring (surface contamination monitor). Innovative technology summary report

    International Nuclear Information System (INIS)

    1999-06-01

    The Shonka Research Associates, Inc. Position-Sensitive Radiation Monitor both detects surface radiation and automatically prepares an electronic survey map/report of the surveyed area. The electronically recorded map can be downloaded to a personal computer for review, and a map/report can be generated for inclusion in work packages. Switching from beta-gamma detection to alpha detection is relatively simple and entails moving a switch position to alpha and adjusting the voltage level to an alpha detection level. No field calibration is required when switching from beta-gamma to alpha detection. The system can be used for free-release surveys because it meets the federal detection level sensitivity limits required for surface survey instrumentation. This technology is superior to the traditionally used floor contamination monitor (FCM) and hand-held survey instrumentation because it can precisely register locations of radioactivity and accurately correlate contamination levels to specific locations. Additionally, it can collect and store continuous radiological data in database format, which can be used to produce real-time imagery as well as automated graphics of survey data. Its flexible design can accommodate a variety of detectors. The cost of the innovative technology is 13% to 57% lower than traditional methods. This technology is suited for radiological surveys of flat surfaces at US Department of Energy (DOE) nuclear facility decontamination and decommissioning (D and D) sites or similar public or commercial sites.

  6. Position-sensitive radiation monitoring (surface contamination monitor). Innovative technology summary report

    Energy Technology Data Exchange (ETDEWEB)

    1999-06-01

    The Shonka Research Associates, Inc. Position-Sensitive Radiation Monitor both detects surface radiation and automatically prepares an electronic survey map/report of the surveyed area. The electronically recorded map can be downloaded to a personal computer for review, and a map/report can be generated for inclusion in work packages. Switching from beta-gamma detection to alpha detection is relatively simple and entails moving a switch position to alpha and adjusting the voltage level to an alpha detection level. No field calibration is required when switching from beta-gamma to alpha detection. The system can be used for free-release surveys because it meets the federal detection level sensitivity limits required for surface survey instrumentation. This technology is superior to the traditionally used floor contamination monitor (FCM) and hand-held survey instrumentation because it can precisely register locations of radioactivity and accurately correlate contamination levels to specific locations. Additionally, it can collect and store continuous radiological data in database format, which can be used to produce real-time imagery as well as automated graphics of survey data. Its flexible design can accommodate a variety of detectors. The cost of the innovative technology is 13% to 57% lower than traditional methods. This technology is suited for radiological surveys of flat surfaces at US Department of Energy (DOE) nuclear facility decontamination and decommissioning (D and D) sites or similar public or commercial sites.

  7. Sensitivity analysis using two-dimensional models of the Whiteshell geosphere

    Energy Technology Data Exchange (ETDEWEB)

    Scheier, N. W.; Chan, T.; Stanchell, F. W.

    1992-12-01

    As part of the assessment of the environmental impact of disposing of immobilized nuclear fuel waste in a vault deep within plutonic rock, detailed modelling of groundwater flow, heat transport and contaminant transport through the geosphere is being performed using the MOTIF finite-element computer code. The first geosphere model is being developed using data from the Whiteshell Research Area, with a hypothetical disposal vault at a depth of 500 m. This report briefly describes the conceptual model and then describes in detail the two-dimensional simulations used to help initially define an adequate three-dimensional representation, select a suitable form for the simplified model to be used in the overall systems assessment with the SYVAC computer code, and perform some sensitivity analysis. The sensitivity analysis considers variations in the rock layer properties, variations in fracture zone configurations, the impact of grouting a vault/fracture zone intersection, and variations in boundary conditions. This study shows that the configuration of major fracture zones can have a major influence on groundwater flow patterns. The flows in the major fracture zones can have high velocities and large volumes. The proximity of the radionuclide source to a major fracture zone may strongly influence the time it takes for a radionuclide to be transported to the surface. (auth)

  8. Surface analysis the principal techniques

    CERN Document Server

    Vickerman, John C

    2009-01-01

    This completely updated and revised second edition of Surface Analysis: The Principal Techniques, deals with the characterisation and understanding of the outer layers of substrates, how they react, look and function which are all of interest to surface scientists. Within this comprehensive text, experts in each analysis area introduce the theory and practice of the principal techniques that have shown themselves to be effective in both basic research and in applied surface analysis. Examples of analysis are provided to facilitate the understanding of this topic and to show readers how they c

  9. Sensitivity analysis of the RESRAD, a dose assessment code

    International Nuclear Information System (INIS)

    Yu, C.; Cheng, J.J.; Zielen, A.J.

    1991-01-01

    The RESRAD code is a pathway analysis code that is designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters such as soil properties and food ingestion rates in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) a built-in graphic package that shows parameter sensitivities while the RESRAD code is operational.
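
Direct perturbation of input parameters, the second method listed above, can be sketched generically: bump each input by a small relative amount and report the relative output change per relative input change. The dose function below is a hypothetical stand-in, not the RESRAD pathway equations.

```python
import numpy as np

def perturbation_sensitivity(model, params, delta=0.01):
    """One-at-a-time sensitivity ratios: relative output change divided by
    the relative input change, as in a direct-perturbation analysis."""
    base = model(params)
    ratios = {}
    for name, value in params.items():
        bumped = dict(params, **{name: value * (1.0 + delta)})
        ratios[name] = (model(bumped) - base) / base / delta
    return ratios

# Hypothetical stand-in for a dose calculation (not the RESRAD equations):
# dose scales linearly with soil concentration and ingestion rate and
# decays exponentially with cover depth.
def toy_dose(p):
    return p["soil_conc"] * p["ingestion_rate"] * np.exp(-0.5 * p["cover_depth"])

ratios = perturbation_sensitivity(
    toy_dose, {"soil_conc": 100.0, "ingestion_rate": 0.2, "cover_depth": 1.0})
```

For the two linear inputs the ratio is 1 by construction; the exponential cover-depth term yields a ratio near its local elasticity of about -0.5.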

  10. Sensitivity analysis in a structural reliability context

    International Nuclear Information System (INIS)

    Lemaitre, Paul

    2014-01-01

    This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that reproduces a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output might be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic but not in the reliability scope. This thesis' aim is to test existing sensitivity analysis methods, and to propose more efficient original methods. A bibliographical review on sensitivity analysis on the one hand and on the estimation of small failure probabilities on the other hand is first proposed. This step raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one proposes to make use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input original density and the density given the subset reached. A more general and original methodology reflecting the impact of the input density modification on the failure probability is then explored. The proposed methods are then applied on the CWNR case, which motivates this thesis. (author)

  11. Skin Sensitive Difference of Human Body Sections under Clothing--Multiple Analysis of Skin Surface Temperature Changes

    Institute of Scientific and Technical Information of China (English)

    李俊; 吴海燕; 张渭源

    2003-01-01

    A new research method on clothing comfort perception is developed. With it, the skin surface temperature changes and the subjective psychological perceptions of human body sections stimulated by the same cold stimulation are studied. With the multiple comparison analysis method, the changing laws of the skin temperature of the main human body sections are obtained.

  12. Simulation-Based Stochastic Sensitivity Analysis of a Mach 4.5 Mixed-Compression Intake Performance

    Science.gov (United States)

    Kato, H.; Ito, K.

    2009-01-01

    A sensitivity analysis of a supersonic mixed-compression intake of a variable-cycle turbine-based combined cycle (TBCC) engine is presented. The TBCC engine is designed to power a long-range Mach 4.5 transport capable of antipodal missions studied in the framework of an EU FP6 project, LAPCAT. The nominal intake geometry was designed using DLR abpi cycle analysis program by taking into account various operating requirements of a typical mission profile. The intake consists of two movable external compression ramps followed by an isolator section with bleed channel. The compressed air is then diffused through a rectangular-to-circular subsonic diffuser. A multi-block Reynolds-averaged Navier-Stokes (RANS) solver with Srinivasan-Tannehill equilibrium air model was used to compute the total pressure recovery and mass capture fraction. While RANS simulation of the nominal intake configuration provides more realistic performance characteristics of the intake than the cycle analysis program, the intake design must also take into account in-flight uncertainties for robust intake performance. In this study, we focus on the effects of the geometric uncertainties on pressure recovery and mass capture fraction, and propose a practical approach to simulation-based sensitivity analysis. The method begins by constructing a light-weight analytical model, a radial-basis function (RBF) network, trained via adaptively sampled RANS simulation results. Using the RBF network as the response surface approximation, stochastic sensitivity analysis is performed using analysis of variance (ANOVA) technique by Sobol. This approach makes it possible to perform a generalized multi-input multi-output sensitivity analysis based on high-fidelity RANS simulation. The resulting Sobol's influence indices allow the engineer to identify dominant parameters as well as the degree of interaction among multiple parameters, which can then be fed back into the design cycle.
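
The surrogate-based workflow described above (train an RBF response surface on a limited budget of expensive simulations, then run the variance-based analysis cheaply on the surrogate) can be sketched in a few lines. The two-input test function stands in for the RANS solver; the kernel width, training design, and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the expensive RANS solver: any smooth two-input function
def expensive(x):
    return np.sin(np.pi * x[:, 0]) + 0.3 * x[:, 1] ** 2

# 1) Train a Gaussian RBF network on a small design of experiments
g = np.linspace(0.0, 1.0, 8)
X = np.array([[u, v] for u in g for v in g])        # 64 training "runs"
y = expensive(X)

def phi(a, b, eps=10.0):                            # Gaussian kernel matrix
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

w = np.linalg.solve(phi(X, X) + 1e-6 * np.eye(len(X)), y)
surrogate = lambda x: phi(x, X) @ w

# 2) ANOVA-style main effects, evaluated cheaply on the surrogate:
# S_i = Var_{x_i}( E[f | x_i] ) / Var(f)
total_var = np.var(surrogate(rng.random((4096, 2))))
S = []
for i in range(2):
    cond_means = []
    for v in np.linspace(0.0, 1.0, 64):
        pts = rng.random((256, 2))
        pts[:, i] = v                               # condition on x_i = v
        cond_means.append(surrogate(pts).mean())
    S.append(np.var(cond_means) / total_var)
```

Once the network is trained, the many thousands of evaluations needed by the variance decomposition cost essentially nothing, which is the point of the approach in the record above.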

  13. Sensitivity analysis using probability bounding

    International Nuclear Information System (INIS)

    Ferson, Scott; Troy Tucker, W.

    2006-01-01

    Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
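
The "pinching" idea described above has a simple Monte Carlo analogue: replace one uncertain input with a precise value and measure how much of the output spread disappears. The model and distributions below are illustrative assumptions, and a full PBA would propagate probability boxes rather than single distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def model(a, b):
    return a * np.exp(b)

# Two uncertain inputs of an illustrative risk model (not from the paper)
a = rng.uniform(1.0, 3.0, n)
b = rng.normal(0.0, 0.3, n)

base_spread = np.std(model(a, b))
# "Pinch" one input at a time to a precise value; the larger the reduction
# in output spread, the more that input's uncertainty drives the result
reduction_a = 1.0 - np.std(model(2.0, b)) / base_spread
reduction_b = 1.0 - np.std(model(a, 0.0)) / base_spread
```

In a genuine PBA the same comparison is made between the breadth of the output p-box before and after pinching an input to a precise distribution or real value.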

  14. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate with computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol's indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on large-CPU-time computer codes which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' is used to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' is used to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates the nuclear fuel irradiation.

  15. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  16. Sensitivity analysis in remote sensing

    CERN Document Server

    Ustinov, Eugene A

    2015-01-01

    This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves the same purpose for quantitative models of physical objects as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing SA provides computer-efficient means to compute the jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The jacobians are used to solve corresponding inver...

  17. Growth and trends in Auger-electron spectroscopy and x-ray photoelectron spectroscopy for surface analysis

    International Nuclear Information System (INIS)

    Powell, C.J.

    2003-01-01

    A perspective is given of the development and use of surface analysis, primarily by Auger-electron spectroscopy (AES) and x-ray photoelectron spectroscopy (XPS), for solving scientific and technological problems. Information is presented on growth and trends in instrumental capabilities, instrumental measurements with reduced uncertainties, knowledge of surface sensitivity, and knowledge and effects of sample morphology. Available analytical resources are described for AES, XPS, and secondary-ion mass spectrometry. Finally, the role of the American Vacuum Society in stimulating improved surface analyses is discussed

  18. Global sensitivity analysis in the identification of cohesive models using full-field kinematic data

    KAUST Repository

    Alfano, Marco; Lubineau, Gilles; Paulino, Gláucio Hermogenes

    2015-01-01

    Failure of adhesively bonded structures often occurs concurrently with the formation of a non-negligible fracture process zone in front of a macroscopic crack. For this reason, the analysis of damage and fracture is effectively carried out using the cohesive zone model (CZM). The crucial aspect of the CZM approach is the precise determination of the traction-separation relation. Yet it is usually determined empirically, by using calibration procedures combining experimental data, such as load-displacement or crack length data, with finite element simulation of fracture. Thanks to the recent progress in image processing, and the availability of low-cost CCD cameras, it is nowadays relatively easy to access surface displacements across the fracture process zone using for instance Digital Image Correlation (DIC). The rich information provided by correlation techniques prompted the development of versatile inverse parameter identification procedures combining finite element (FE) simulations and full field kinematic data. The focus of the present paper is to assess the effectiveness of these methods in the identification of cohesive zone models. In particular, the analysis is developed in the framework of the variance based global sensitivity analysis. The sensitivity of kinematic data to the sought cohesive properties is explored through the computation of the so-called Sobol sensitivity indexes. The results show that the global sensitivity analysis can help to ascertain the most influential cohesive parameters which need to be incorporated in the identification process. In addition, it is shown that suitable displacement sampling in time and space can lead to optimized measurements for identification purposes.

  20. Bloch surface wave structures for high sensitivity detection and compact waveguiding

    Science.gov (United States)

    Khan, Muhammad Umar; Corbett, Brian

    2016-01-01

    Resonant propagating waves created on the surface of a dielectric multilayer stack, called Bloch surface waves (BSW), can be designed for high sensitivity monitoring of the adjacent refractive index as an alternative platform to the metal-based surface plasmon resonance (SPR) sensing. The resonant wavelength and polarization can be designed by engineering of the dielectric layers unlike the fixed resonance of SPR, while the wide bandwidth low loss of dielectrics permits sharper resonances, longer propagation lengths and thus their use in waveguiding devices. The transparency of the dielectrics allows the excitation and monitoring of surface-bound fluorescent molecules. We review the recent developments in this technology. We show the advantages that can be obtained by using high index contrast layered structures. Operating at 1550 nm wavelengths will allow the BSW sensors to be implemented in the silicon photonics platform where active waveguiding can be used in the realization of compact planar integrated circuits for multi-parameter sensing.

  1. An analysis of sensitivity and uncertainty associated with the use of the HSPF model for EIA applications

    Energy Technology Data Exchange (ETDEWEB)

    Biftu, G.F.; Beersing, A.; Wu, S.; Ade, F. [Golder Associates, Calgary, AB (Canada)

    2005-07-01

    An outline of a new approach to assessing the sensitivity and uncertainty associated with surface water modelling results using the Hydrological Simulation Program-Fortran (HSPF) was presented, as well as the results of a sensitivity and uncertainty analysis. The HSPF model is often used to characterize the hydrological processes in watersheds within the oil sands region. Typical applications of HSPF included calibration of the model parameters using data from gauged watersheds, as well as validation of calibrated models with data sets. Additionally, simulations are often conducted to make flow predictions to support the environmental impact assessment (EIA) process. However, a key aspect of the modelling components of the EIA process is the sensitivity and uncertainty of the modelling results with respect to the model parameters. Many of the variations in the HSPF model's outputs are caused by a small number of model parameters. A sensitivity analysis was performed to identify and focus on the key parameters and assumptions that have the most influence on the model's outputs. The analysis entailed varying each parameter in turn, within a range, and examining the resulting relative changes in the model outputs. The uncertainty analysis consisted of the selection of probability distributions to characterize the uncertainty in the model's key sensitive parameters, as well as the use of Monte Carlo and HSPF simulation to determine the uncertainty in model outputs. tabs, figs.

  2. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters mostly affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on the mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and the mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
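
A partial rank correlation coefficient, the first of the two measures above, can be computed by rank-transforming the inputs and the output, removing the rank-linear effect of the other parameters by regression, and correlating the residuals. The sketch below uses a toy three-parameter model rather than SWMM.

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    rank = lambda a: np.argsort(np.argsort(a, axis=0), axis=0).astype(float)
    Xr, yr = rank(X), rank(y.reshape(-1, 1)).ravel()
    out = []
    for i in range(X.shape[1]):
        # Regress out the (rank-)linear effect of all other parameters
        others = np.column_stack([np.ones(len(Xr)), np.delete(Xr, i, axis=1)])
        res_x = Xr[:, i] - others @ np.linalg.lstsq(others, Xr[:, i], rcond=None)[0]
        res_y = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

rng = np.random.default_rng(0)
X = rng.random((2000, 3))              # e.g. an LHS or random design
y = 5.0 * X[:, 0] - np.exp(X[:, 1])    # the third parameter has no effect
r = prcc(X, y)
```

As expected for this toy model, the PRCC is strongly positive for the first parameter, strongly negative for the second, and near zero for the inert third parameter.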

  3. Sprayed zinc oxide films: Ultra-violet light-induced reversible surface wettability and platinum-sensitization-assisted improved liquefied petroleum gas response.

    Science.gov (United States)

    Nakate, Umesh T; Patil, Pramila; Bulakhe, R N; Lokhande, C D; Kale, Sangeeta N; Naushad, Mu; Mane, Rajaram S

    2016-10-15

    We report the rapid (superhydrophobic to superhydrophilic) transition property and the improvement in the liquefied petroleum gas (LPG) sensing response of zinc oxide (ZnO) nanorods (NRs) on UV irradiation and platinum (Pt) surface sensitization, respectively. The morphological evolution of the ZnO NRs is evidenced from field emission scanning electron microscope and atomic force microscope images, and the X-ray diffraction pattern is used for structural elucidation. Elemental survey mapping is obtained from the energy dispersive X-ray analysis spectrum. The optical properties have been studied by UV-Visible and photoluminescence spectroscopy measurements. The rapid (120 s) conversion of the superhydrophobic (154°) ZnO NRs film to superhydrophilic (7°) is obtained under UV light illumination, and the superhydrophobicity is regained by storing the sample in the dark. The mechanism for the switching wettability behavior of the ZnO NRs has been thoroughly discussed. In the second phase, the Pt-sensitized ZnO NRs film has demonstrated considerable gas sensitivity at a 260 ppm concentration of LPG. At an operating temperature of 623 K, a maximum LPG response of 58% and a response time of 49 s for a 1040 ppm LPG concentration are obtained for the Pt-sensitized ZnO NRs film. This higher LPG response of the Pt-sensitized ZnO NRs film over the pristine film is primarily due to electronic and catalytic (spill-over) effects caused by the addition of Pt on the ZnO NRs film surface. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Transition between bulk and surface refractive index sensitivity of micro-cavity in-line Mach-Zehnder interferometer induced by thin film deposition.

    Science.gov (United States)

    Śmietana, Mateusz; Janik, Monika; Koba, Marcin; Bock, Wojtek J

    2017-10-16

    In this work we discuss the refractive index (RI) sensitivity of a micro-cavity in-line Mach-Zehnder interferometer in the form of a cylindrical hole (40-50 μm in diameter) fabricated in a standard single-mode optical fiber using a femtosecond laser. The surface of the micro-cavity was coated with up to 400 nm aluminum oxide thin film using the atomic layer deposition method. Next, the film was progressively chemically etched and the influence on changes in the RI of liquid in the micro-cavity was determined at different stages of the experiment, i.e., at different thicknesses of the film. An effect of transition between sensitivity to the film thickness (surface) and the RI of liquid in the cavity (bulk) is demonstrated for the first time. We have found that depending on the interferometer working conditions determined by thin film properties, the device can be used for investigation of phenomena taking place at the surface, such as in case of specific label-free biosensing applications, or for small-volume RI analysis as required in analytical chemistry.

  5. A general first-order global sensitivity analysis method

    International Nuclear Information System (INIS)

    Xu Chonggang; Gertner, George Zdzislaw

    2008-01-01

    Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters
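
Classic FAST, as described above, drives every input along a periodic search curve at its own integer characteristic frequency and reads each first-order contribution off the Fourier spectrum of the output. A minimal single-curve version (without the random-balance or correlated-parameter extensions the record discusses) might look like this; the frequencies are chosen so their low harmonics do not collide.

```python
import numpy as np

def fast_first_order(model, freqs, n=4097, harmonics=4):
    """Classic FAST estimate of first-order sensitivity indices.

    Each U(0, 1) input is driven along a triangle-wave search curve at its
    own integer frequency; the output variance found at that frequency
    (and its first few harmonics) is that input's first-order share.
    """
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    X = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi   # in (0, 1)
    y = model(X)
    V = np.var(y)
    S = []
    for w in freqs:
        Vi = 0.0
        for p in range(1, harmonics + 1):
            a = 2.0 * np.mean(y * np.cos(p * w * s))
            b = 2.0 * np.mean(y * np.sin(p * w * s))
            Vi += (a * a + b * b) / 2.0
        S.append(Vi / V)
    return np.array(S)

# Additive test function: the exact first-order indices are 0.8 and 0.2
S = fast_first_order(lambda x: 2.0 * x[:, 0] + x[:, 1], freqs=[11, 35])
```

With integer frequencies, harmonics of different inputs can alias onto each other, which is exactly the first limitation the record above sets out to remove.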

  6. Enhanced sensitivity of surface plasmon resonance phase-interrogation biosensor by using oblique deposited silver nanorods.

    Science.gov (United States)

    Chung, Hung-Yi; Chen, Chih-Chia; Wu, Pin Chieh; Tseng, Ming Lun; Lin, Wen-Chi; Chen, Chih-Wei; Chiang, Hai-Pang

    2014-01-01

    Sensitivity of surface plasmon resonance phase-interrogation biosensor is demonstrated to be enhanced by oblique deposited silver nanorods. Silver nanorods are thermally deposited on silver nanothin film by oblique angle deposition (OAD). The length of the nanorods can be tuned by controlling the deposition parameters of thermal deposition. By measuring the phase difference between the p and s waves of surface plasmon resonance heterodyne interferometer with different wavelength of incident light, we have demonstrated that maximum sensitivity of glucose detection down to 7.1 × 10(-8) refractive index units could be achieved with optimal deposition parameters of silver nanorods.

  7. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

    Reliability sensitivity analysis aims at identifying the sources of structure/mechanism failure and quantifying the effects of each random source, or of their distribution parameters, on the failure probability or reliability. In this paper, time-dependent parametric reliability sensitivity (PRS) analysis as well as global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change in each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method, combined with a first-order approximation of the motion error function, is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods, as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices, is demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.

  8. Combining inkjet printing and sol-gel chemistry for making pH-sensitive surfaces.

    Science.gov (United States)

    Orsi, Gianni; De Maria, Carmelo; Montemurro, Francesca; Chauhan, Veeren M; Aylott, Jonathan W; Vozzi, Giovanni

    2015-01-01

    Biomedical science increasingly relies on imaging biological parameters with luminescence methods. Studying 2D pH distributions in this way builds knowledge of complex cellular processes. Stable, biocompatible 2D sensors can readily be made by immobilizing pH-sensitive nanoparticles inside hydrogel matrices at densities that guarantee a proper signal-to-noise ratio. Inkjet printing is well known as a tool for printing images onto porous surfaces; recently it has been used as a free-form fabrication method for building three-dimensional parts, and it is now being explored as a way of printing electrical and optical devices, as well as a rapid prototyping method for custom biosensors. The sol-gel method pairs naturally with inkjet printing because the picoliter-sized ink droplets evaporate quickly, allowing rapid sol-gel transitions on the printed surface. This work shows how to merge these technologies to make a nanoparticle-doped printable hydrogel, which could be used to make 2D/3D smart scaffolds able to monitor cell activities. An automated image analysis system was developed to obtain pH measurements rapidly from nanosensor fluorescence images.

  9. Recent advances in surface plasmon resonance imaging: detection speed, sensitivity, and portability

    Directory of Open Access Journals (Sweden)

    Zeng Youjun

    2017-06-01

    Full Text Available Surface plasmon resonance (SPR) biosensors are powerful tools for studying the kinetics of biomolecular interactions because they offer unique real-time and label-free measurement capabilities with high detection sensitivity. In the past two decades, SPR technology has been successfully commercialized, and its performance has been continuously improved through substantial engineering effort. In this review, we describe recent advances in SPR technologies. Developments in SPR technology focusing on detection speed, sensitivity, and portability are discussed in detail. The incorporation of imaging techniques into SPR sensing is emphasized. In addition, our SPR imaging biosensors based on wavelength scanning with a solid-state tunable wavelength filter are highlighted. Finally, significant advances in nanotechnology-associated SPR sensing for sensitivity enhancement are also reviewed. It is hoped that this review will provide some insights for researchers who are interested in SPR sensing and help them develop SPR sensors with better sensitivity and higher throughput.

  10. Recent advances in surface plasmon resonance imaging: detection speed, sensitivity, and portability

    Science.gov (United States)

    Zeng, Youjun; Hu, Rui; Wang, Lei; Gu, Dayong; He, Jianan; Wu, Shu-Yuen; Ho, Ho-Pui; Li, Xuejin; Qu, Junle; Gao, Bruce Zhi; Shao, Yonghong

    2017-06-01

    Surface plasmon resonance (SPR) biosensors are powerful tools for studying the kinetics of biomolecular interactions because they offer unique real-time and label-free measurement capabilities with high detection sensitivity. In the past two decades, SPR technology has been successfully commercialized, and its performance has been continuously improved through substantial engineering effort. In this review, we describe recent advances in SPR technologies. Developments in SPR technology focusing on detection speed, sensitivity, and portability are discussed in detail. The incorporation of imaging techniques into SPR sensing is emphasized. In addition, our SPR imaging biosensors based on wavelength scanning with a solid-state tunable wavelength filter are highlighted. Finally, significant advances in nanotechnology-associated SPR sensing for sensitivity enhancement are also reviewed. It is hoped that this review will provide some insights for researchers who are interested in SPR sensing and help them develop SPR sensors with better sensitivity and higher throughput.

  11. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    Energy Technology Data Exchange (ETDEWEB)

    Ekstroem, P.A.; Broed, R. [Facilia AB, Stockholm (Sweden)]

    2006-05-15

    Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations that depend on several variables, so the predictive capability of a model is limited by the uncertainty in the values of those variables. Sensitivity analysis is used to apportion the output variation among the uncertain input parameters according to their relative importance, and it is therefore an essential tool in simulation modelling and in performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation, such as correlation or linear regression coefficients, are often used. These methods work well for linear models, but for non-linear models their sensitivity estimates are not accurate, and models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods that can cope with linear, non-linear, and non-monotone problems have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: the Pearson product moment correlation coefficient (CC), the Spearman rank correlation coefficient (RCC), partial (rank) correlation coefficients (PCC), standardized (rank) regression coefficients (SRC), the Sobol' method, Jansen's alternative, the extended Fourier amplitude sensitivity test (EFAST) as well as the classical FAST method, and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several

  12. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    International Nuclear Information System (INIS)

    Ekstroem, P.A.; Broed, R.

    2006-05-01

    Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations that depend on several variables, so the predictive capability of a model is limited by the uncertainty in the values of those variables. Sensitivity analysis is used to apportion the output variation among the uncertain input parameters according to their relative importance, and it is therefore an essential tool in simulation modelling and in performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation, such as correlation or linear regression coefficients, are often used. These methods work well for linear models, but for non-linear models their sensitivity estimates are not accurate, and models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods that can cope with linear, non-linear, and non-monotone problems have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: the Pearson product moment correlation coefficient (CC), the Spearman rank correlation coefficient (RCC), partial (rank) correlation coefficients (PCC), standardized (rank) regression coefficients (SRC), the Sobol' method, Jansen's alternative, the extended Fourier amplitude sensitivity test (EFAST) as well as the classical FAST method, and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked
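
The simplest of the measures listed above — the Pearson CC, the Spearman rank CC, and standardized regression coefficients — can be sketched in a few lines. This is an illustrative sketch, not EIKOS code (Python rather than Matlab), on a hypothetical monotone but nonlinear test model:

```python
import numpy as np

def cc(x, y):
    """Pearson product-moment correlation coefficient."""
    return np.corrcoef(x, y)[0, 1]

def rank(v):
    """Ranks (1..n) of the entries of v, for rank-transformed measures."""
    return np.argsort(np.argsort(v)) + 1.0

def src(X, y):
    """Standardized regression coefficients from a linear least-squares fit."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(np.c_[np.ones(len(ys)), Xs], ys, rcond=None)
    return beta[1:]   # drop the intercept

rng = np.random.default_rng(1)
X = rng.random((5000, 2))
y = np.exp(3 * X[:, 0]) + X[:, 1]   # monotone but strongly non-linear in X0

# The rank-transformed CC recovers the monotone X0-y link better than the plain CC
print(cc(X[:, 0], y), cc(rank(X[:, 0]), rank(y)), src(X, y))
```

This illustrates the point made in the abstract: linear measures (CC, SRC) remain usable for monotone non-linear models only after a rank transformation.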

  13. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Science.gov (United States)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  14. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Directory of Open Access Journals (Sweden)

    Jinchao Feng

    2018-03-01

    Full Text Available We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  15. Comparison of the sensitivity of surface downward longwave radiation to changes in water vapor at two high elevation sites

    International Nuclear Information System (INIS)

    Chen, Yonghua; Naud, Catherine M; Rangwala, Imtiaz; Landry, Christopher C; Miller, James R

    2014-01-01

    Among the potential reasons for enhanced warming rates in many high elevation regions is the nonlinear relationship between surface downward longwave radiation (DLR) and specific humidity (q). In this study we use ground-based observations at two neighboring high elevation sites in Southwestern Colorado that have different local topography and are 1.3 km apart horizontally and 348 m vertically. We examine the spatial consistency of the sensitivities (partial derivatives) of DLR with respect to changes in q, and the sensitivities are obtained from the Jacobian matrix of a neural network analysis. Although the relationship between DLR and q is the same at both sites, the sensitivities are higher when q is smaller, which occurs more frequently at the higher elevation site. There is a distinct hourly distribution in the sensitivities at both sites especially for high sensitivity cases, although the range is greater at the lower elevation site. The hourly distribution of the sensitivities relates to that of q. Under clear skies during daytime, q is similar between the two sites, however under cloudy skies or at night, it is not. This means that the DLR–q sensitivities are similar at the two sites during daytime but not at night, and care must be exercised when using data from one site to infer the impact of water vapor feedbacks at another site, particularly at night. Our analysis suggests that care should be exercised when using the lapse rate adjustment to infill high frequency data in a complex topographical region, particularly when one of the stations is subject to cold air pooling as found here. (letter)

  16. Comparison of the Sensitivity of Surface Downward Longwave Radiation to Changes in Water Vapor at Two High Elevation Sites

    Science.gov (United States)

    Chen, Yonghua; Naud, Catherine M.; Rangwala, Imtiaz; Landry, Christopher C.; Miller, James R.

    2014-01-01

    Among the potential reasons for enhanced warming rates in many high elevation regions is the nonlinear relationship between surface downward longwave radiation (DLR) and specific humidity (q). In this study we use ground-based observations at two neighboring high elevation sites in Southwestern Colorado that have different local topography and are 1.3 kilometers apart horizontally and 348 meters vertically. We examine the spatial consistency of the sensitivities (partial derivatives) of DLR with respect to changes in q, and the sensitivities are obtained from the Jacobian matrix of a neural network analysis. Although the relationship between DLR and q is the same at both sites, the sensitivities are higher when q is smaller, which occurs more frequently at the higher elevation site. There is a distinct hourly distribution in the sensitivities at both sites especially for high sensitivity cases, although the range is greater at the lower elevation site. The hourly distribution of the sensitivities relates to that of q. Under clear skies during daytime, q is similar between the two sites, however under cloudy skies or at night, it is not. This means that the DLR-q sensitivities are similar at the two sites during daytime but not at night, and care must be exercised when using data from one site to infer the impact of water vapor feedbacks at another site, particularly at night. Our analysis suggests that care should be exercised when using the lapse rate adjustment to infill high frequency data in a complex topographical region, particularly when one of the stations is subject to cold air pooling as found here.
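
The nonlinearity driving these sensitivities can be illustrated with a standard clear-sky parameterization, Brutsaert's (1975) emissivity ε = 1.24(e/T)^(1/7), using vapor pressure e in place of the specific humidity q used in the paper. This is an illustrative assumption: the paper derives its sensitivities from a neural-network Jacobian, not from this formula.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

def dlr_clear_sky(e_hpa, t_k):
    """Clear-sky downward longwave radiation with Brutsaert's (1975)
    emissivity eps = 1.24 * (e/T)**(1/7), e in hPa, T in K."""
    eps = 1.24 * (e_hpa / t_k) ** (1.0 / 7.0)
    return eps * SIGMA * t_k ** 4

def sensitivity(e_hpa, t_k, de=1e-3):
    """Central-difference dDLR/de, in W m-2 per hPa of vapor pressure."""
    return (dlr_clear_sky(e_hpa + de, t_k) - dlr_clear_sky(e_hpa - de, t_k)) / (2 * de)

# The sensitivity is several times larger in dry air (e = 2 hPa) than in
# moist air (e = 10 hPa), matching the paper's finding for small q
print(sensitivity(2.0, 270.0), sensitivity(10.0, 270.0))
```

Because dε/de scales as e^(-6/7), the same humidity increment warms the surface far more when the air is dry, which is why high, dry sites are most exposed to this feedback.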

  17. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition.

  18. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
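
Among the techniques surveyed, partial correlation measures the linear influence of one input after the linear effects of all other inputs have been removed from both that input and the output. A minimal sketch (illustrative, not from the survey; the test model is hypothetical):

```python
import numpy as np

def pcc(X, y, j):
    """Partial correlation of input column j with output y: correlate the
    residuals left after regressing out all other inputs (by linear least
    squares) from both X[:, j] and y."""
    others = np.c_[np.ones(len(y)), np.delete(X, j, axis=1)]
    rx = X[:, j] - others @ np.linalg.lstsq(others, X[:, j], rcond=None)[0]
    ry = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
    return rx @ ry / np.sqrt((rx @ rx) * (ry @ ry))

rng = np.random.default_rng(2)
X = rng.random((2000, 3))
y = 5 * X[:, 0] + X[:, 1] + 0.1 * X[:, 2]

# For a noise-free linear model the PCC of every influential input is near 1,
# while the ordinary correlation of the weak input X2 with y is diluted by
# the variance contributed by X0
print(pcc(X, y, 2), np.corrcoef(X[:, 2], y)[0, 1])
```

This is why PCC is preferred over plain correlation when inputs of very different importance act simultaneously: it isolates each input's own linear contribution.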

  19. Systemization of burnup sensitivity analysis code

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2004-02-01

    For the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, both for improving plant efficiency with rationally high-performance cores and for improving reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished by developing an adjusted nuclear library using the cross-section adjustment method, which reflects the results of critical experiments such as JUPITER. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Burnup sensitivity analysis is needed to make effective use of burnup characteristics data from actual cores within the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because of the heavy burden placed on the user by the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functionality in the existing large system. Simply unifying the computational components is not sufficient, since the computational sequence may change with the item being analyzed or with the purpose, such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or assembled as the occasion demands.
For this

  20. To Fill or Not to Fill: Sensitivity Analysis of the Influence of Resolution and Hole Filling on Point Cloud Surface Modeling and Individual Rockfall Event Detection

    Directory of Open Access Journals (Sweden)

    Michael J. Olsen

    2015-09-01

    Full Text Available Monitoring unstable slopes with terrestrial laser scanning (TLS) has been proven effective. However, end users still struggle immensely with the efficient processing, analysis, and interpretation of the massive and complex TLS datasets. Two recent advances described in this paper now improve the ability to work with TLS data acquired on steep slopes. The first is improved processing of TLS data to model complex topography and fill holes. This processing step results in a continuous topographic surface model that seamlessly characterizes the rock and soil surface. The second is an advance in the automated interpretation of the surface model in such a way that a magnitude-frequency relationship of rockfall events can be quantified, which can be used to assess maintenance strategies and forecast costs. The approach is applied to unstable highway slopes in the state of Alaska, U.S.A. to evaluate its effectiveness. Further, the influence of the selected model resolution and degree of hole filling on the derived slope metrics was analyzed. In general, model resolution plays a pivotal role in the ability to detect smaller rockfall events when developing magnitude-frequency relationships. The total volume estimates are also influenced by model resolution, but were comparatively less sensitive. In contrast, hole filling had a noticeable effect on magnitude-frequency relationships, but to a lesser extent than modeling resolution; it yielded a modest increase in overall volumetric quantity estimates. Optimal analysis results occur when appropriately balancing high modeling resolution with an appropriate level of hole filling.

  1. Fabrication of an SPR Sensor Surface with Antifouling Properties for Highly Sensitive Detection of 2,4,6-Trinitrotoluene Using Surface-Initiated Atom Transfer Polymerization

    Directory of Open Access Journals (Sweden)

    Kiyoshi Toko

    2013-07-01

    Full Text Available In this study, we modified a surface plasmon resonance immunosensor chip with a polymer using surface-initiated atom transfer polymerization (SI-ATRP) for the highly sensitive detection of 2,4,6-trinitrotoluene (TNT). To immobilize a TNT analogue on the polymer, mono-2-(methacryloyloxy)ethyl succinate (MES), which has a carboxyl group, was used in this study. However, the anti-TNT antibody may adsorb non-specifically on the polymer surface by an electrostatic interaction because MES is negatively charged. Therefore, a mixed monomer of MES and diethylaminoethyl methacrylate (DEAEM), which has a tertiary amino group and is positively charged, was prepared to obtain electroneutrality for suppressing the nonspecific adsorption. The detection of TNT was performed by inhibition assay using the polymer surface. To ensure high sensitivity to TNT, the affinity between the surface and the antibody was optimized by controlling the density of the initiator for ATRP by mixing two types of self-assembled monolayer reagents. As a result, a limit of detection of 5.7 pg/mL (ppt) for TNT was achieved using the optimized surface.

  2. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
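
The first-order global sensitivity index that such expansions compute analytically, S_i = Var(E[Y|X_i]) / Var(Y), can always be checked against brute-force double-loop Monte Carlo. The sketch below is an illustrative baseline, not the PDD method itself, and assumes independent inputs uniform on [0, 1]:

```python
import numpy as np

def first_order_sobol(model, d, i, n_outer=2000, n_inner=200, rng=None):
    """Brute-force first-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y)
    by double-loop Monte Carlo (inputs i.i.d. uniform on [0, 1])."""
    rng = np.random.default_rng(rng)
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        x = rng.random((n_inner, d))
        x[:, i] = rng.random()          # freeze X_i, average over the rest
        cond_means[k] = model(x).mean()
    total = model(rng.random((n_outer * n_inner, d)))
    return cond_means.var() / total.var()

# Additive test model y = x0 + 2*x1: analytic indices S0 = 1/5, S1 = 4/5
model = lambda x: x[:, 0] + 2.0 * x[:, 1]
print(first_order_sobol(model, d=2, i=1, rng=0))
```

The double loop costs n_outer × n_inner model evaluations per index, which is exactly the expense that spectral and decomposition methods such as PDD are designed to avoid.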

  3. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of the model output, via analysis of variance (ANOVA) decomposition. Until recently, only the continuous kernel approach had been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. Discrete kernel estimation, however, is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented, with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderately or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  4. Below-surface analysis of inclusions with PIXE and PIGE

    International Nuclear Information System (INIS)

    MacArthur, J.D.; Ma, X.P.; Palmer, G.R.; Anderson, A.J.; Clark, A.H.

    1990-01-01

    The composition of fluid inclusions in host minerals holds much information about the chemical environment of mineral formation. When solid inclusions are exposed through polishing, their content can readily be investigated with an electron or proton probe. However, with an electron probe, only the daughter minerals or the residue left when a fluid inclusion is opened can be analyzed, since electrons with energies of tens of keV cannot penetrate to an unexposed inclusion. On the other hand, proton beams of a few MeV can penetrate a few tens of μm of material and still excite characteristic radiation. This phenomenon has been exploited for the analysis of subsurface inclusions. Ideally, standard petrographic sections are polished so that inclusions targeted for analysis are brought to within 10 μm of the surface. The overlying matrix reduces the sensitivity of PIXE for elements of low Z such as Na and Al because of the attenuation of their X-rays. However, these elements, as well as elements of even lower Z that cannot be analyzed with the electron probe, can readily be detected with PIGE at good sensitivity. (orig.)

  5. Beyond sensitivity analysis

    DEFF Research Database (Denmark)

    Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad

    2018-01-01

    of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it on the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel

  6. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    Science.gov (United States)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-01

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic-level design of catalytic systems, it is very desirable to be able to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice-based models. This allows for an efficient evaluation even in critical regions near a second-order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
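
For deterministic mean-field microkinetics — unlike the stochastic lattice models treated above, where such derivatives are expensive to sample — the degree of rate control can be evaluated by a plain central finite difference. A toy sketch on a hypothetical two-step cycle (not the paper's RuO2(110) model):

```python
def tof(k1, k2):
    """Steady-state turnover frequency of a toy two-step catalytic cycle,
    A + * -> A* (rate constant k1), A* -> B + * (k2), in mean-field
    approximation: TOF = k1*k2 / (k1 + k2)."""
    return k1 * k2 / (k1 + k2)

def degree_of_rate_control(rate_fn, k, i, eps=1e-6):
    """Central finite-difference degree of rate control,
    X_i = (k_i / TOF) * dTOF/dk_i = d ln TOF / d ln k_i."""
    kp, km = list(k), list(k)
    kp[i] *= 1 + eps
    km[i] *= 1 - eps
    return (rate_fn(*kp) - rate_fn(*km)) / (2 * eps * rate_fn(*k))

k = (1.0, 10.0)                           # slow adsorption, fast reaction
X1 = degree_of_rate_control(tof, k, 0)    # analytic value: k2/(k1+k2) = 10/11
X2 = degree_of_rate_control(tof, k, 1)    # analytic value: k1/(k1+k2) = 1/11
```

The slow step dominates (X1 near 1), and the two coefficients sum to one, as expected for this cycle; the paper's contribution is recovering such derivatives reliably when the rate itself is a noisy stochastic estimate.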

  7. UV sensitivity of planktonic net community production in ocean surface waters

    Science.gov (United States)

    Regaudie-de-Gioux, Aurore; Agustí, Susana; Duarte, Carlos M.

    2014-05-01

The net plankton community metabolism of oceanic surface waters is particularly important because it most directly affects the partial pressure of CO2 in surface waters and thus the air-sea fluxes of CO2. Plankton communities in surface waters are exposed to high irradiance that includes significant ultraviolet B (UVB, 280-315 nm) radiation. UVB radiation affects both photosynthetic and respiration rates, increases plankton mortality rates, and alters other metabolic and chemical processes. Here we test the sensitivity to UVB of net community production (NCP) of planktonic communities in surface waters across contrasting regions of the ocean. We observed that UVB radiation affects net plankton community production at the ocean surface, imposing a shift in NCP of, on average, 50% relative to the values measured when UVB was partly excluded. Our results show that under full solar radiation the metabolic balance shifts toward the prevalence of net heterotrophic community production. The demonstration of an important effect of UVB radiation on NCP in surface waters is of particular relevance in relation to the increased UVB radiation derived from the erosion of the stratospheric ozone layer. Our results encourage the design of future research to further our understanding of UVB effects on the metabolic balance of plankton communities.

8. Risk characterization: description of associated uncertainties and sensitivity analysis

    International Nuclear Information System (INIS)

    Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.

    2013-01-01

The PowerPoint presentation covers risks at the estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, the formulation of guidelines for carcinogenic and genotoxic compounds, and risks to subpopulations

9. Modeling, design, packaging and experimental analysis of liquid-phase shear-horizontal surface acoustic wave sensors

    Science.gov (United States)

    Pollard, Thomas B

Recent advances in microbiology, computational capabilities, and microelectromechanical-system fabrication techniques permit the modeling, design, and fabrication of low-cost, miniature, sensitive and selective liquid-phase sensors and lab-on-a-chip systems. Such devices are expected to replace expensive, time-consuming, and bulky laboratory-based testing equipment. Potential applications include: fluid characterization for material science and industry; chemical analysis in medicine and pharmacology; study of biological processes; food analysis; chemical kinetics analysis; and environmental monitoring. When combined with liquid-phase packaging, sensors based on surface-acoustic-wave (SAW) technology are considered strong candidates. For this reason, such devices are the focus of this work, with emphasis placed on device modeling and packaging for liquid-phase operation. Regarding modeling, topics considered include mode excitation efficiency of transducers; mode sensitivity based on guiding-structure materials and geometries; and the use of new piezoelectric materials. Regarding packaging, topics considered include package interfacing with SAW devices, and minimization of packaging effects on device performance. Novel numerical models are theoretically developed and implemented to study the propagation and transduction characteristics of sensor designs using wave/constitutive equations, Green's functions, and boundary/finite element methods. Using the developed simulation tools, which account for the finite thickness of all device electrodes, transduction efficiency for SAW transducers with neighboring uniform or periodic guiding electrodes is reported for the first time. Results indicate that finite electrode thickness strongly affects efficiency. Using dense electrodes, efficiency is shown to approach 92% and 100% for uniform and periodic electrode guiding, respectively, yielding improved sensor detection limits. A numerical sensitivity analysis is presented targeting viscosity

  10. Overview of methods for uncertainty analysis and sensitivity analysis in probabilistic risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.; Helton, J.C.

    1985-01-01

Probabilistic Risk Assessment (PRA) is playing an increasingly important role in the nuclear reactor regulatory process. The assessment of uncertainties associated with PRA results is widely recognized as an important part of the analysis process. One of the major criticisms of the Reactor Safety Study was that its representation of uncertainty was inadequate. The desire for the capability to treat uncertainties with the MELCOR risk code being developed at Sandia National Laboratories is indicative of the current interest in this topic. However, as yet, uncertainty analysis and sensitivity analysis in the context of PRA are relatively immature fields. In this paper, available methods for uncertainty analysis and sensitivity analysis in a PRA are reviewed. This review first treats methods for use with individual components of a PRA and then considers how these methods could be combined in the performance of a complete PRA. In the context of this paper, the goal of uncertainty analysis is to measure the imprecision in PRA outcomes of interest, and the goal of sensitivity analysis is to identify the major contributors to this imprecision. There are a number of areas that must be considered in uncertainty analysis and sensitivity analysis for a PRA: (1) information, (2) systems analysis, (3) thermal-hydraulic phenomena/fission product behavior, (4) health and economic consequences, and (5) display of results. Each of these areas and the synthesis of them into a complete PRA are discussed
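A workhorse combination in this line of PRA work is Latin hypercube sampling for uncertainty propagation plus rank correlations for identifying the major contributors. The sketch below (a hypothetical three-input risk model, all coefficients invented) implements both from scratch with NumPy only.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d, rng):
    """One uniform sample per stratum in each dimension, columns independently permuted."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

def rank_correlation(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical risk model: output dominated by x1, weakly driven by x2, x3 negligible.
X = latin_hypercube(1000, 3, rng)
y = X[:, 0] ** 2 + 0.1 * X[:, 1] + 0.01 * X[:, 2]

# Uncertainty analysis: spread of the outcome; sensitivity analysis: ranked contributors.
rcc = np.array([rank_correlation(X[:, j], y) for j in range(3)])
ranking = np.argsort(-np.abs(rcc))   # most influential input first
```

Rank (rather than raw) correlations are the usual choice here because PRA models are often monotone but strongly nonlinear.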

  11. Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...

  12. Sensitivity enhancement for nitrophenols using cationic surfactant-modified activated carbon for solid-phase extraction surface-assisted laser desorption/ionization mass spectrometry.

    Science.gov (United States)

    Chen, Y C; Tsai, M F

    2000-01-01

Previous work has demonstrated that a combination of solid-phase extraction with surface-assisted laser desorption/ionization (SPE-SALDI) mass spectrometry can be applied to the determination of trace nitrophenols in water. An improved method to lower the detection limit of this hyphenated technique is described in the present study. Activated carbon powder is used as both the SPE adsorbent and the SALDI solid in the analysis by SPE-SALDI. The surface of the activated carbon is modified by passing an aqueous solution of a cationic surfactant through the SPE cartridge. The results demonstrate that the sensitivity for nitrophenols in the analysis by SPE-SALDI can be improved by using cationic surfactants to modify the surface of the activated carbon. The detection limit for nitrophenols is about 25 ppt based on a signal-to-noise ratio of 3 by sampling from 100 mL of solution. Copyright 2000 John Wiley & Sons, Ltd.
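The S/N = 3 convention used for the detection limit above can be made concrete in a couple of lines: the limit is the concentration whose mean signal equals three times the blank-noise standard deviation. Slope and noise level below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration: signal = slope * concentration, plus baseline noise.
slope = 40.0                          # counts per ppt, invented for illustration
noise = rng.normal(0.0, 25.0, 500)   # simulated blank-signal noise, counts

# S/N = 3 detection limit: concentration giving a mean signal of 3 * sigma_noise.
lod_ppt = 3 * noise.std(ddof=1) / slope
```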

  13. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many civil, mechanical, and aerospace engineering structures owing to its accurate estimation of forces. It is now being extended to the probabilistic domain. To assess the effect of uncertainty in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A stochastic sensitivity analysis formulation of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to existing programs, since the models of stochastic finite element and stochastic design sensitivity are almost identical.
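The perturbation-versus-Monte-Carlo verification strategy described above can be sketched on a deliberately trivial one-member model (all numbers invented): first-order perturbation propagates the input variance through the response derivative, and a direct Monte Carlo run substantiates the result.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy force-method-style model: a single member of stiffness k carries load f,
# so the response is u = f / k.  All numbers are invented for illustration.
f = 10.0
k_mean, k_std = 200.0, 10.0

# First-order perturbation (stochastic sensitivity) estimates:
#   u ≈ u(k_mean) + (du/dk) (k - k_mean),  with du/dk = -f / k_mean**2
u_mean_pert = f / k_mean
u_std_pert = abs(-f / k_mean**2) * k_std

# Direct Monte Carlo check, mirroring the paper's substantiation step.
k_samples = rng.normal(k_mean, k_std, 200_000)
u_samples = f / k_samples
```

The agreement degrades as the coefficient of variation of k grows, which is the usual limitation of first-order perturbation methods.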

  14. Carbon dioxide capture processes: Simulation, design and sensitivity analysis

    DEFF Research Database (Denmark)

    Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul

    2012-01-01

Carbon dioxide is the main greenhouse gas, and its major source is the combustion of fossil fuels for power generation. The objective of this study is to carry out a steady-state sensitivity analysis for chemical absorption of carbon dioxide from flue gas using monoethanolamine solvent. First, equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to changes in the design variables, including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis of the steady-state performance of the process to the L/G ratio to the absorber, CO2 lean solvent loading, and stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made.

  15. Seismic sensitivity to sub-surface solar activity from 18 yr of GOLF/SoHO observations

    Science.gov (United States)

    Salabert, D.; García, R. A.; Turck-Chièze, S.

    2015-06-01

Solar activity has changed significantly over the last two Schwabe cycles. After a long and deep minimum at the end of Cycle 23, the weaker activity of Cycle 24 contrasts with the previous cycles. In this work, the response of the solar acoustic oscillations to solar activity is used to provide insights into the structural and magnetic changes in the sub-surface layers of the Sun during this ongoing, unusual period of low activity. We analyze 18 yr of continuous observations of the solar acoustic oscillations collected by the Sun-as-a-star GOLF instrument on board the SoHO spacecraft. From the fitted mode frequencies, the temporal variability of the frequency shifts of the radial, dipolar, and quadrupolar modes is studied for different frequency ranges that are sensitive to different layers of the solar sub-surface interior. The low-frequency modes show nearly unchanged frequency shifts between Cycles 23 and 24, with a time-evolving signature of the quasi-biennial oscillation, which is particularly visible for the quadrupole component, revealing the presence of a complex magnetic structure. The modes at higher frequencies show frequency shifts that are 30% smaller during Cycle 24, in agreement with the decrease observed in the surface activity between Cycles 23 and 24. The analysis of 18 yr of GOLF oscillations indicates that the structural and magnetic changes responsible for the frequency shifts remained comparable between Cycle 23 and Cycle 24 in the deeper sub-surface layers below 1400 km, as revealed by the low-frequency modes. The frequency shifts of the higher-frequency modes, sensitive to shallower regions, show that Cycle 24 is magnetically weaker in the upper layers of the Sun. Appendices are available in electronic form at http://www.aanda.org. The 68 GOLF frequency tables and Table A.1 are also available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5).

  16. A surface acoustic wave humidity sensor with high sensitivity based on electrospun MWCNT/Nafion nanofiber films

    International Nuclear Information System (INIS)

    Lei Sheng; Chen Dajing; Chen Yuquan

    2011-01-01

Humidity detection is widely used in a variety of fields. A humidity sensor with high sensitivity is reported in this paper. A surface acoustic wave resonator (SAWR) with a high resonance frequency was fabricated as the basic sensing component. Several nanotechnologies were used to improve the sensor's performance. A multi-walled carbon nanotube/Nafion (MWCNT/Nafion) composite was prepared as the humidity-sensitive film and deposited on the surface of the SAWR by electrospinning. The electrospun MWCNT/Nafion nanofiber films showed a three-dimensional (3D) porous structure, which is beneficial to sensor performance. The nano-water-channel model of Nafion was also applied to the humidity sensing process. Compared to other work, the present sensor showed excellent sensitivity (above 400 kHz/% relative humidity (RH) over the range 10% RH to 80% RH), good linearity (R² > 0.98) and a short response time (∼3 s to reach 63% of the full response).

  17. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments and computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be applied recursively to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step are given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
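Step (2) of the methodology, parameter screening, is commonly done with one-at-a-time elementary effects. The sketch below is a simplified screening in the spirit of Morris's method (radial perturbations from random base points rather than full trajectories); the test model and all coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def screen(f, d, r=50, delta=0.1, rng=rng):
    """One-at-a-time elementary effects from r random base points in [0,1]^d,
    in the spirit of Morris screening: report the mean |effect| per parameter."""
    effects = np.zeros((r, d))
    for t in range(r):
        x = rng.random(d) * (1 - delta)   # keep x + delta inside [0, 1]
        fx = f(x)
        for i in range(d):
            xp = x.copy()
            xp[i] += delta
            effects[t, i] = (f(xp) - fx) / delta
    return np.abs(effects).mean(axis=0)

# Hypothetical response: one strong, one weak, one inert parameter.
def model(x):
    return 5.0 * x[0] ** 2 + 0.5 * x[1] + 0.0 * x[2]

mu_star = screen(model, 3)
keep = np.argsort(-mu_star)   # parameters ranked for the quantitative stage (step 3)
```

Parameters with negligible mean |effect| would be frozen at nominal values before the more expensive quantitative analysis.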

  18. The role of surface chemical analysis in a study to select replacement processes for TCA vapor degreasing

    Science.gov (United States)

    Lesley, Michael W.; Davis, Lawrence E.; Moulder, John F.; Carlson, Brad A.

    1995-01-01

The role of surface-sensitive chemical analysis (ESCA, AES, and SIMS) in a study to select a process to replace 1,1,1-trichloroethane (TCA) vapor degreasing as a steel and aluminum bonding surface preparation method is described. The effort was primarily concerned with spray-in-air cleaning processes involving aqueous alkaline and semi-aqueous cleaners and a contamination-sensitive epoxy-to-metal bondline. While all five cleaners tested produced bonding strength results equal to or better than those produced by vapor degreasing, the aqueous alkaline cleaners yielded results which were superior to those produced by the semi-aqueous cleaners. The main reason for the enhanced performance appears to be a silicate layer left behind by the aqueous alkaline cleaners. The silicate layer increases the polarity of the surface and enhances epoxy-to-metal bonding. On the other hand, one of the semi-aqueous cleaners left a nonpolar carbonaceous residue which appeared to have a negative effect on epoxy-to-metal bonding. Differences in cleaning efficiency between cleaners/processes were also identified. These differences in surface chemistry, which were sufficient to affect bonding, were not detected by conventional chemical analysis techniques.

  19. The clear-sky greenhouse effect sensitivity to a sea surface temperature change

    Science.gov (United States)

    Duvel, J. PH.; Breon, F. M.

    1991-01-01

The clear-sky greenhouse effect response to a sea surface temperature (SST or Ts) change is studied using outgoing clear-sky longwave radiation measurements from the Earth Radiation Budget Experiment. Considering geographical distributions for July 1987, the relation between the SST, the greenhouse effect (defined as the outgoing infrared flux trapped by atmospheric gases), and the precipitable water vapor content (W), estimated by the Special Sensor Microwave Imager, is analyzed first. A fairly linear relation between W and the normalized greenhouse effect g is found. In contrast, the SST dependence of both W and g exhibits nonlinearities, with, in particular, a large increase for SST above 25 C. This enhanced sensitivity of g and W can be attributed in part to a corresponding large increase of atmospheric water vapor content related to the transition from subtropical dry regions to equatorial moist regions. Using two years of data (1985 and 1986), the normalized greenhouse effect sensitivity to the sea surface temperature is computed from the interannual variation of monthly mean values.

  20. Identifying drought response of semi-arid aeolian systems using near-surface luminescence profiles and changepoint analysis, Nebraska Sandhills.

    Science.gov (United States)

    Buckland, Catherine; Bailey, Richard; Thomas, David

    2017-04-01

    Two billion people living in drylands are affected by land degradation. Sediment erosion by wind and water removes fertile soil and destabilises landscapes. Vegetation disturbance is a key driver of dryland erosion caused by both natural and human forcings: drought, fire, land use, grazing pressure. A quantified understanding of vegetation cover sensitivities and resultant surface change to forcing factors is needed if the vegetation and landscape response to future climate change and human pressure are to be better predicted. Using quartz luminescence dating and statistical changepoint analysis (Killick & Eckley, 2014) this study demonstrates the ability to identify step-changes in depositional age of near-surface sediments. Lx/Tx luminescence profiles coupled with statistical analysis show the use of near-surface sediments in providing a high-resolution record of recent system response and aeolian system thresholds. This research determines how the environment has recorded and retained sedimentary evidence of drought response and land use disturbances over the last two hundred years across both individual landforms and the wider Nebraska Sandhills. Identifying surface deposition and comparing with records of climate, fire and land use changes allows us to assess the sensitivity and stability of the surface sediment to a range of forcing factors. Killick, R and Eckley, IA. (2014) "changepoint: An R Package for Changepoint Analysis." Journal of Statistical Software, (58) 1-19.
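The study above used the R changepoint package; the simplest case it covers, a single step change in mean, reduces to choosing the split that minimizes the total within-segment sum of squares. A minimal Python analogue on a synthetic luminescence-like profile (signal levels, noise, and the true change index are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

def single_changepoint(y):
    """At-most-one-change estimate of a mean shift: choose the split that
    minimises the total within-segment sum of squared errors."""
    n = len(y)
    best_tau, best_cost = None, np.inf
    for tau in range(2, n - 1):   # require at least 2 points per segment
        left, right = y[:tau], y[tau:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

# Synthetic Lx/Tx-style depth profile with a step change in mean at index 60.
y = np.concatenate([rng.normal(1.0, 0.1, 60), rng.normal(0.4, 0.1, 40)])
tau_hat = single_changepoint(y)
```

Methods such as PELT generalize this to an unknown number of changepoints with a per-changepoint penalty.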

  1. Sensitivity of Distributions of Climate System Properties to Surface Temperature Datasets

    Science.gov (United States)

    Libardoni, A. G.; Forest, C. E.

    2011-12-01

Predictions of climate change from models depend strongly on the representation of climate system properties emerging from the processes and feedbacks in the models. The quality of any model prediction can be evaluated by determining how well its output reproduces the observed climate system. With this evaluation, the reliability of climate projections derived from the model and provided for policy makers is assessed and quantified. In this study, surface temperature, upper-air temperature, and ocean heat content data are used to constrain the distributions of the parameters that define three climate system properties in the MIT Integrated Global Systems Model: climate sensitivity, the rate of ocean heat uptake into the deep ocean, and net anthropogenic aerosol forcing. In particular, we explore the sensitivity of the distributions to the surface temperature dataset used to estimate the likelihood of model output given the observed climate records. In total, five different reconstructions of past surface temperatures are used, and the resulting parameter distribution functions differ from each other. Differences between datasets in the estimated mode and mean of climate sensitivity are as great as 1 K, with an overall range of 1.2 to 5.3 K using the 5-95% confidence intervals. Ocean effective diffusivity is poorly constrained regardless of which dataset is used: all distributions are broad, and only three show signs of a mode, which, when present, tends to occur at low diffusivity values. Distributions for the net aerosol forcing show similar shapes and cluster into two groups that are shifted by approximately 0.1 watts per square meter. However, the overall spread of forcing values from the 5-95% confidence interval, -0.19 to -0.83 watts per square meter, is small compared to other uncertainties in climate forcings. Transient climate response estimates derived from these distributions range between 0.87 and 2.41 K. Similar to the

  2. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    Science.gov (United States)

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes how the first statistical moments of model predictions vary with the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  3. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    KAUST Repository

    Navarro, María

    2016-12-26

Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes how the first statistical moments of model predictions vary with the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
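The Sobol decomposition underlying both records above can be illustrated with the standard pick-freeze estimator of first-order indices. The additive test model below is invented so that the exact indices are known (S1 = 1/5, S2 = 4/5); it does not reproduce the papers' treatment of inherent stochasticity.

```python
import numpy as np

rng = np.random.default_rng(5)

def first_order_sobol(f, d, n=200_000, rng=rng):
    """Pick-freeze estimator of first-order Sobol indices:
    S_i = Cov(f(A), f(AB_i)) / Var(f(A)), where AB_i equals B with
    column i copied from A."""
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    yA = f(A)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]
        S[i] = np.cov(yA, f(ABi))[0, 1] / var
    return S

# Additive test model with known indices: Var = 1 + 4, so S1 = 0.2, S2 = 0.8.
f = lambda X: X[:, 0] + 2.0 * X[:, 1]
S = first_order_sobol(f, 2)
```

For a stochastic simulator, the reaction channels would enter the decomposition as additional "inputs", which is the key extension proposed in these papers.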

  4. Spatial resolution in depth-controlled surface sensitive x-ray techniques

    International Nuclear Information System (INIS)

    Yun, W.B.; Viccaro, P.J.

    1992-01-01

    The spatial resolution along the surface normal and the total depth probed are two important parameters in depth-controlled surface sensitive X-ray techniques employing grazing incidence geometry. The two parameters are analyzed in terms of optical properties (refractive indices) of the media involved and parameters of the incident X-ray beam: beam divergence, X-ray energy, and spectral bandwidth. We derive analytical expressions of the required beam divergence and spectral bandwidth of the incident beam as a function of the two parameters. Sample calculations are made for X-ray energies between 0.1 and 100 keV and for solid Be, Cu, and Au, representing material matrices consisting of low, medium, and high atomic number elements. A brief discussion on obtaining the required beam divergence and spectral bandwidth from present X-ray sources and optics is given
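The depth control discussed above rests on the standard grazing-incidence result that the 1/e intensity penetration depth follows from the complex refraction angle: Lambda = lambda / (4*pi*Im[sqrt(theta^2 - 2*delta - 2i*beta)]), with n = 1 - delta - i*beta. A short numerical sketch (the optical constants below are illustrative round numbers, not tabulated values for any of the materials in the abstract):

```python
import numpy as np

def penetration_depth(theta, wavelength, delta, beta):
    """1/e intensity penetration depth at grazing incidence on a medium with
    refractive index n = 1 - delta - i*beta (standard X-ray optics result).
    theta in radians; the result has the units of `wavelength`."""
    q = np.sqrt(np.asarray(theta, complex) ** 2 - 2 * delta - 2j * beta)
    return wavelength / (4 * np.pi * np.abs(q.imag))

wavelength = 1.54e-10            # m, Cu K-alpha
delta, beta = 2.4e-5, 5.0e-7     # hypothetical optical constants for the example
theta_c = np.sqrt(2 * delta)     # critical angle for total external reflection

below = penetration_depth(0.5 * theta_c, wavelength, delta, beta)   # tens of angstroms
above = penetration_depth(5.0 * theta_c, wavelength, delta, beta)   # sub-micron
```

The steep jump across the critical angle is exactly why the beam divergence and spectral bandwidth requirements derived in the paper matter: both smear the effective incidence angle and hence the probed depth.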

  5. Effect of photoanode surface coverage by a sensitizer on the photovoltaic performance of titania based CdS quantum dot sensitized solar cells.

    Science.gov (United States)

    Prasad, Rajendra M B; Pathan, Habib M

    2016-04-08

In spite of their promising design and architecture, quantum dot sensitized solar cells (QDSSCs) have a long way to go before they attain the projected photoconversion efficiencies. This inferior performance is primarily due to unwanted recombination losses of charge carriers at the various interfaces of the cell. Electron recombination through back electron transfer at the photoanode/electrolyte interface is an important loss that needs to be addressed to improve the efficiency of these third-generation nanostructured solar cells. The present work highlights the importance of conformal coverage of CdS quantum dots (QDs) on the surface of the nanocrystalline titania photoanode in suppressing such recombination, leading to improved cell performance. Using the successive ionic layer adsorption and reaction (SILAR) process, photoanodes are subjected to different amounts of CdS QD sensitization by varying the number of deposition cycles. The sensitized electrodes are characterized using UV-visible spectroscopy, cyclic voltammetry and transmission electron microscopy to evaluate the extent of surface coverage of the titania electrodes by QDs. Sandwich solar cells are then fabricated using these electrodes and characterized by electrochemical impedance spectroscopy and J-V measurements. It is observed that maximum solar cell efficiency is obtained for photoanodes with a conformal coating of QDs; any further deposition of sensitizer leads to QD aggregation and reduces the performance of the solar cells.

  6. Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.

    Science.gov (United States)

    Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun

    2017-12-01

Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate allergen sensitization characteristics according to sex, we used the multiple allergen simultaneous test (MAST), which is widely employed as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters, each with characteristic features. Compared with the female group, the male group tended to be sensitized more frequently to all tested allergens except the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, reflecting allergen similarity or co-exposure. Only the fungus-cluster allergens tended to sensitize the female group more frequently than the male group.
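Clustering co-sensitization patterns of the kind described above typically amounts to hierarchical clustering of allergens by the similarity of their positivity profiles across patients. A minimal sketch on simulated MAST-like data (the panel, group structure, and prevalences are all invented; the real study used 39 allergens and 4,360 cases):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(6)

# Simulated panel: 6 hypothetical allergens, 300 patients.
# Allergens 0-2 co-sensitize (shared latent exposure), as do allergens 3-5.
latent = rng.random((2, 300)) < 0.3
groups = [0, 0, 0, 1, 1, 1]
sens = np.zeros((6, 300))
for a in range(6):
    noise = rng.random(300) < 0.05        # sporadic unrelated positives
    sens[a] = latent[groups[a]] | noise

# Cluster allergens by correlation distance between their positivity profiles.
dist = 1 - np.corrcoef(sens)
Z = linkage(dist[np.triu_indices(6, k=1)], method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Cutting the dendrogram at a chosen number of clusters (here 2) recovers the co-exposure groups; sex-stratified comparisons would then be run per cluster.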

  7. Preparation and surface modification of hierarchical nanosheets-based ZnO microstructures for dye-sensitized solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Yongming; Lin, Yu, E-mail: linyuyrr@163.com; Lin, Yibing; Yang, Jiyuan

    2014-02-15

This paper reports a simple one-step hydrothermal route for the preparation of hierarchical nanosheet-based ZnO microstructures and their application to dye-sensitized solar cells. The morphologies of the products were controlled by the dosage of the reactants. Their physical characteristics were examined by X-ray diffraction, field-emission scanning electron microscopy and a surface analyzer. The ZnO microspheres, which have a larger surface area and stronger light-trapping capacity owing to their entirely spherical structure, exhibit better photoelectrochemical properties than mixtures of ZnO microspheres and ZnO microflowers. A dye-sensitized solar cell assembled with the ZnO microspheres as photoanode shows an energy conversion efficiency of 2.94% after surface modification with tetrabutyl titanate solution at 90 °C. This is over 1.6 times higher than that of the non-modified cell fabricated from the ZnO microspheres, owing to the external improvement and the stability enhancement of the dye-sensitized ZnO photoanode. - Graphical abstract: Influences on the energy conversion efficiency of dye-sensitized solar cells assembled by decorating hierarchical nanosheet-based ZnO microstructures with tetrabutyl titanate solution at different temperatures. - Highlights: • Hierarchical nanosheet-based ZnO microstructures were controllably synthesized. • The ZnO microspheres show good optical and electrochemical properties. • The ZnO microspheres were modified with C16H36O4Ti solution. • A remarkable increase in conversion efficiency is observed after surface modification.

  8. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the not missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  9. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
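
    As a concrete illustration of the variance-based indices discussed in this record, the following sketch estimates first-order Sobol' indices with a Saltelli-style pick-freeze estimator on a toy linear model. The model, sample size, and seed are invented for this example, not taken from the article.

```python
import numpy as np

# Toy model with known first-order Sobol' indices: Y = 2*X1 + X2,
# X1, X2 ~ U(0,1) independent  =>  S1 = 4/5, S2 = 1/5 analytically.
def model(x):
    return 2.0 * x[:, 0] + x[:, 1]

rng = np.random.default_rng(0)
n, d = 200_000, 2
A = rng.random((n, d))          # two independent sample matrices
B = rng.random((n, d))

fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]         # "pick-freeze": column i taken from B
    # Saltelli-style estimator of the first-order index S_i
    S.append(np.mean(fB * (model(ABi) - fA)) / var_y)

print([round(s, 2) for s in S])
```

    For this linear model the indices simply reproduce the variance shares of the two inputs, which is the point made in the record: sensitivity analysis adds insight mainly when the model is non-linear in the inputs.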

  10. Atmospheric sensitivity to land surface changes: comparing the impact of albedo, roughness, and evaporative resistance on near-surface air temperature using an idealized land model.

    Science.gov (United States)

    Lague, M. M.; Swann, A. L. S.; Bonan, G. B.

    2017-12-01

    Past studies have demonstrated how changes in vegetation can impact the atmosphere; however, it is often difficult to identify the exact physical pathway through which vegetation changes drive an atmospheric response. Surface properties (such as vegetation color, or height) control surface energy fluxes, which feed back on the atmosphere on both local and global scales by modifying temperatures, cloud cover, and energy gradients. Understanding how land surface properties influence energy fluxes is crucial for improving our understanding of how vegetation change - past, present, and future - impacts the atmosphere, global climate, and people. We explore the sensitivity of the atmosphere to perturbations of three land surface properties - albedo, roughness, and evaporative resistance - using an idealized land model coupled to an Earth System Model. We derive a relationship telling us how large a change in each surface property is required to drive a local 0.1 K change in 2m air temperature. Using this idealized framework, we are able to separate the influence on the atmosphere of each individual surface property. We demonstrate that the impact of each surface property on the atmosphere is spatially variable - that is, a similar change in vegetation can have different climate impacts if made in different locations. This analysis not only improves our understanding of how the land system can influence climate, but also provides us with a set of theoretical limits on the potential climate impact of arbitrary vegetation change (natural or anthropogenic).

  11. A monolayer of hierarchical silver hemi-mesoparticles with tunable surface topographies for highly sensitive surface-enhanced Raman spectroscopy

    Science.gov (United States)

    Zhu, Shuangmei; Fan, Chunzhen; Mao, Yanchao; Wang, Junqiao; He, Jinna; Liang, Erjun; Chao, Mingju

    2016-02-01

    We propose a facile, green synthesis route for a large-scale monolayer of Ag hemi-mesoparticles on Cu foil. The Ag hemi-mesoparticles exhibit different surface morphologies, including ridge-like, meatball-like, and fluffy shapes. In the reaction, silver nitrate was reduced by copper at room temperature in dimethyl sulfoxide via the galvanic displacement reaction. The surface morphology of the Ag hemi-mesoparticles was adjusted by changing the reaction time; at longer reaction times the hemi-mesoparticle surface formed fluffy spherical nanoprotrusions. We also explored the growth mechanism of the silver hemi-mesoparticles with different surface morphologies. With 4-mercaptobenzoic acid as the Raman probe molecule, the fluffy silver hemi-mesoparticle monolayer shows the best surface-enhanced Raman scattering (SERS) activity: the enhancement factor is up to 7.33 × 10⁷ and the detection limit reaches 10⁻¹⁰ M. SERS measurements demonstrate that these Ag hemi-mesoparticles can serve as sensitive SERS substrates. In addition, the distribution of the localized electromagnetic field near the particle surface was simulated using the finite element method to verify the enhancement mechanism. This study helps us to understand the relationship between the morphology of Ag hemi-mesoparticles and their SERS properties.
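
    The enhancement factor quoted in the record is conventionally computed as the SERS signal per molecule relative to the normal Raman signal per molecule. A minimal sketch of the analytical form of that calculation follows; the intensities and concentrations are hypothetical placeholders, not the paper's data.

```python
# Analytical SERS enhancement factor: EF = (I_SERS / c_SERS) / (I_ref / c_ref).
# The intensities and concentrations below are hypothetical placeholders,
# not values from the paper.
I_sers, c_sers = 500.0, 1e-6   # SERS peak counts at probe concentration (mol/L)
I_ref,  c_ref  = 50.0,  1e-1   # normal Raman counts at reference concentration

EF = (I_sers / c_sers) / (I_ref / c_ref)
print(f"EF = {EF:.2e}")        # per-molecule enhancement
```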

  12. Surface computing and collaborative analysis work

    CERN Document Server

    Brown, Judith; Gossage, Stevenson; Hack, Chris

    2013-01-01

    The application of large surface computing devices (wall-mounted or tabletop) with touch interfaces to collaborative data analysis, an increasingly important and prevalent activity, is the primary topic of this book. Our goals are to outline the fundamentals of surface computing (a still maturing technology), review relevant work on collaborative data analysis, describe frameworks for understanding collaborative processes, and provide a better understanding of the opportunities for research and development. We describe surfaces as display technologies with which people can interact directly, and emphasize how interaction design changes when designing for large surfaces. We review efforts to use large displays, surfaces or mixed display environments to enable collaborative analytic activity. Collaborative analysis is important in many domains, but to provide concrete examples and a specific focus, we frequently consider analysis work in the security domain, and in particular the challenges security personnel...

  13. Rethinking Sensitivity Analysis of Nuclear Simulations with Topology

    Energy Technology Data Exchange (ETDEWEB)

    Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci

    2016-01-01

    In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.

  14. A microwave resonator for limiting depth sensitivity for electron paramagnetic resonance spectroscopy of surfaces.

    Science.gov (United States)

    Sidabras, Jason W; Varanasi, Shiv K; Mett, Richard R; Swarts, Steven G; Swartz, Harold M; Hyde, James S

    2014-10-01

    A microwave Surface Resonator Array (SRA) structure is described for use in Electron Paramagnetic Resonance (EPR) spectroscopy. The SRA has a series of anti-parallel transmission line modes that provides a region of sensitivity equal to the cross-sectional area times its depth sensitivity, which is approximately half the distance between the transmission line centers. It is shown that the quarter-wave twin-lead transmission line can be a useful element for design of microwave resonators at frequencies as high as 10 GHz. The SRA geometry is presented as a novel resonator for use in surface spectroscopy where the region of interest is either surrounded by lossy material, or the spectroscopist wishes to minimize signal from surrounding materials. One such application is in vivo spectroscopy of human finger-nails at X-band (9.5 GHz) to measure ionizing radiation dosages. In order to reduce losses associated with tissues beneath the nail that yield no EPR signal, the SRA structure is designed to limit depth sensitivity to the thickness of the fingernail. Another application, due to the resonator geometry and limited depth penetration, is surface spectroscopy in coating or material science. To test this application, a spectrum of 1.44 μM of Mg(2+) doped polystyrene 1.1 mm thick on an aluminum surface is obtained. Modeling, design, and simulations were performed using Wolfram Mathematica (Champaign, IL; v. 9.0) and Ansys High Frequency Structure Simulator (HFSS; Canonsburg, PA; v. 15.0). A micro-strip coupling circuit is designed to suppress unwanted modes and provide a balanced impedance transformation to a 50 Ω coaxial input. Agreement between simulated and experimental results is shown.

  15. A microwave resonator for limiting depth sensitivity for electron paramagnetic resonance spectroscopy of surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Sidabras, Jason W.; Varanasi, Shiv K.; Hyde, James S. [Department of Biophysics, Medical College of Wisconsin, Milwaukee, Wisconsin 53211 (United States); Mett, Richard R. [Department of Biophysics, Medical College of Wisconsin, Milwaukee, Wisconsin 53211 (United States); Department of Physics and Chemistry, Milwaukee School of Engineering, Milwaukee, Wisconsin 53202 (United States); Swarts, Steven G. [Department of Radiation Oncology, University of Florida, Gainesville, Florida, 32610 (United States); Swartz, Harold M. [Department of Radiology, Geisel Medical School at Dartmouth, Hanover, New Hampshire 03755 (United States)

    2014-10-15

    A microwave Surface Resonator Array (SRA) structure is described for use in Electron Paramagnetic Resonance (EPR) spectroscopy. The SRA has a series of anti-parallel transmission line modes that provides a region of sensitivity equal to the cross-sectional area times its depth sensitivity, which is approximately half the distance between the transmission line centers. It is shown that the quarter-wave twin-lead transmission line can be a useful element for design of microwave resonators at frequencies as high as 10 GHz. The SRA geometry is presented as a novel resonator for use in surface spectroscopy where the region of interest is either surrounded by lossy material, or the spectroscopist wishes to minimize signal from surrounding materials. One such application is in vivo spectroscopy of human finger-nails at X-band (9.5 GHz) to measure ionizing radiation dosages. In order to reduce losses associated with tissues beneath the nail that yield no EPR signal, the SRA structure is designed to limit depth sensitivity to the thickness of the fingernail. Another application, due to the resonator geometry and limited depth penetration, is surface spectroscopy in coating or material science. To test this application, a spectrum of 1.44 μM of Mg²⁺ doped polystyrene 1.1 mm thick on an aluminum surface is obtained. Modeling, design, and simulations were performed using Wolfram Mathematica (Champaign, IL; v. 9.0) and Ansys High Frequency Structure Simulator (HFSS; Canonsburg, PA; v. 15.0). A micro-strip coupling circuit is designed to suppress unwanted modes and provide a balanced impedance transformation to a 50 Ω coaxial input. Agreement between simulated and experimental results is shown.

  16. Systemization of burnup sensitivity analysis code. 2

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2005-02-01

    Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, both for plant efficiency with rationally high-performance cores and for reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example, reaction rate distribution and control rod worth, but also burnup characteristics, for example, burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics based on the cross-section adjustment method is needed to make effective use of burnup data from actual cores. A burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because the complexity of burnup sensitivity theory and the limitations of the system place a large burden on users. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. Unifying each computational component is not sufficient, because the computational sequence may change for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or assembled as the occasion demands.

  17. Influence of skew rays on the sensitivity and signal-to-noise ratio of a fiber-optic surface-plasmon-resonance sensor: a theoretical study

    International Nuclear Information System (INIS)

    Dwivedi, Yogendra S.; Sharma, Anuj K.; Gupta, Banshi D.

    2007-01-01

    We have theoretically analyzed the influence of skew rays on the performance of a fiber-optic sensor based on surface plasmon resonance. The performance of the sensor has been evaluated in terms of its sensitivity and signal-to-noise ratio (SNR). The theoretical model for the skewness dependence includes material dispersion in the fiber core and metal layers, and the simultaneous excitation of skew rays and meridional rays in the fiber core, with all guided rays launched from a collimated light source. The effect of skew rays on the SNR and the sensitivity of the sensor has been compared for two different metals. The same comparison is carried out for different values of design parameters such as numerical aperture, fiber core diameter, and the length of the surface-plasmon-resonance (SPR) active sensing region. This detailed analysis of the effect of skewness on the SNR and the sensitivity of the sensor allows the best possible performance to be achieved from a fiber-optic SPR sensor despite skewness in the optical fiber.

  18. Research Note: The sensitivity of surface seismic P-wave data in transversely isotropic media to reflector depth

    KAUST Repository

    Alkhalifah, Tariq Ali

    2016-12-17

    The leading component of the high-frequency asymptotic description of the wavefield, given by the traveltime, is governed by the eikonal equation. In anisotropic media, traveltime measurements from seismic experiments conducted along one surface cannot constrain the long-wavelength attribute of the medium along the orthogonal-to-the-surface direction, as anisotropy introduces an independent parameter controlling wave propagation in the orthogonal direction. Since traveltimes measured on the Earth's surface in transversely isotropic media with a vertical symmetry axis are mainly insensitive to the absolute value of the anisotropic parameter δ responsible for relating these observations to depth, the traveltime was perturbed laterally to investigate its sensitivity to lateral variations in δ. This formulation can be used to develop inversion strategies for lateral variations in δ in acoustic transversely isotropic media, as the surface-recorded data are sensitive to them even if the model is described by the normal-moveout velocity and horizontal velocity, or the anellipticity parameter η. Numerical tests demonstrate the enhanced sensitivity of the data when the model is parameterised with a lateral change in δ.
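
    For reference, the isotropic form of the eikonal equation and the standard textbook relations defining the parameters δ and η mentioned in this record can be written as follows (these are general VTI relations, not reproduced from this note):

```latex
% Eikonal equation for traveltime \tau in an isotropic medium with velocity v:
\[ \left| \nabla \tau(\mathbf{x}) \right|^{2} = \frac{1}{v^{2}(\mathbf{x})} \]
% Standard VTI parameterization (Thomsen parameters \delta, \varepsilon;
% anellipticity \eta), with v_0 the vertical velocity:
\[ v_{\mathrm{nmo}} = v_{0}\sqrt{1 + 2\delta}, \qquad
   v_{h} = v_{0}\sqrt{1 + 2\varepsilon}, \qquad
   \eta = \frac{\varepsilon - \delta}{1 + 2\delta} \]
```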

  19. Improving the Design of Capacitive Micromachined Ultrasonic Transducers Aided with Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    A Martowicz

    2016-09-01

    Full Text Available The paper presents the results of an analysis performed to search for feasible design improvements for a capacitive micromachined ultrasonic transducer. The search was aided by sensitivity analysis and the application of the Response Surface Method. A multiphysics approach was taken in the elaborated finite element model of one cell of the transducer in order to include the significant physical phenomena present in the modelled microdevice. The set of twelve uncertain input and design parameters consists of geometric, material and control properties. The amplitude of the dynamic membrane deformation of the transducer was chosen as the studied output. The objective of the study was to find robust design configurations of the transducer, i.e. configurations characterized by a maximal deformation amplitude with minimal variation.

  20. Surface analysis and techniques in biology

    CERN Document Server

    Smentkowski, Vincent S

    2014-01-01

    This book highlights state-of-the-art surface analytical instrumentation, advanced data analysis tools, and the use of complementary surface analytical instrumentation to perform a complete analysis of biological systems.

  1. Dynamic Resonance Sensitivity Analysis in Wind Farms

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei

    2017-01-01

    Participation factors (PFs) are calculated by critical-eigenvalue sensitivity analysis with respect to the entries of the MIMO matrix. The PF analysis locates the bus that most excites the resonances, which can be the best location to install passive or active filters to reduce harmonic resonance problems. Time...

  2. A performance test of a new high-surface-quality and high-sensitivity CR-39 plastic nuclear track detector – TechnoTrak

    Energy Technology Data Exchange (ETDEWEB)

    Kodaira, S., E-mail: kodaira.satoshi@qst.go.jp [Radiation Measurement Research Team, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba (Japan); Morishige, K. [Research Institute for Science and Engineering, Waseda University, Tokyo (Japan); Kawashima, H.; Kitamura, H.; Kurano, M. [Radiation Measurement Research Team, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba (Japan); Hasebe, N. [Research Institute for Science and Engineering, Waseda University, Tokyo (Japan); Koguchi, Y.; Shinozaki, W. [Oarai Research Center, Chiyoda Technol Corporation, Ibaraki (Japan); Ogura, K. [College of Industrial Technology, Nihon University, Chiba (Japan)

    2016-09-15

    We have studied the performance of a newly-commercialized CR-39 plastic nuclear track detector (PNTD), “TechnoTrak”, in energetic heavy ion measurements. The advantages of TechnoTrak are derived from its use of a purified CR-39 monomer to improve surface quality combined with an antioxidant to improve sensitivity to low-linear-energy-transfer (LET) particles. We irradiated these detectors with various heavy ions (from protons to krypton) with various energies (30–500 MeV/u) at the heavy ion accelerator facilities in the National Institute of Radiological Sciences (NIRS). The surface roughness after chemical etching was improved to be 59% of that of the conventional high-sensitivity CR-39 detector (HARZLAS/TD-1). The detectable dynamic range of LET was found to be 3.5–600 keV/μm. The LET and charge resolutions for three ions tested ranged from 5.1% to 1.5% and 0.14 to 0.22 c.u. (charge unit), respectively, in the LET range of 17–230 keV/μm, which represents an improvement over conventional products (HARZLAS/TD-1 and BARYOTRAK). A correction factor for the angular dependence was determined for correcting the LET spectrum in an isotropic radiation field. We have demonstrated the potential of TechnoTrak, with its two key features of high surface quality and high sensitivity to low-LET particles, to improve automatic analysis protocols in radiation dosimetry and various other radiological applications.

  3. The EVEREST project: sensitivity analysis of geological disposal systems

    International Nuclear Information System (INIS)

    Marivoet, Jan; Wemaere, Isabelle; Escalier des Orres, Pierre; Baudoin, Patrick; Certes, Catherine; Levassor, Andre; Prij, Jan; Martens, Karl-Heinz; Roehlig, Klaus

    1997-01-01

    The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models and uncertainty in the considered scenarios. Deterministic as well as stochastic calculational approaches have been applied for the sensitivity analyses. For the analysis of the sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied
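
    The reference stochastic technique described in this record, a Monte Carlo simulation followed by linear regression, can be sketched as follows. The toy response function and its coefficients are assumptions for illustration; the standardized regression coefficients (SRCs) serve as the sensitivity measure.

```python
import numpy as np

# Monte Carlo sampling of a toy performance model (coefficients assumed),
# followed by linear regression: the standardized regression coefficients
# (SRCs) rank the input parameters by their contribution to output spread.
rng = np.random.default_rng(1)
n = 50_000
X = rng.random((n, 2))                         # sampled input parameters
y = 3.0 * X[:, 0] + X[:, 1] + rng.normal(0.0, 0.1, n)

A = np.column_stack([np.ones(n), X])           # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
src = coef[1:] * X.std(axis=0) / y.std()       # SRC_i = b_i * sd(x_i) / sd(y)

print(src)                                     # first input dominates
```

    For independent inputs, the squared SRCs approximately partition the output variance, which is what makes this simple post-processing step a useful screening tool.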

  4. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
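
    To illustrate why the nonparametric smoothing methods named in this record can outperform linear regression for sensitivity analysis, the sketch below uses a crude binned-conditional-mean smoother as a stand-in for LOESS or additive models; the toy model and bin count are assumptions for this example.

```python
import numpy as np

# Toy model where the first input acts non-monotonically, so linear
# regression misses it while a smoother recovers it.
rng = np.random.default_rng(2)
n = 20_000
X = rng.random((n, 2))
y = np.cos(2 * np.pi * X[:, 0]) + 0.2 * X[:, 1]

def smooth_r2(x, y, bins=50):
    """R^2 of the binned conditional mean E[y|x] (crude smoother)."""
    idx = np.digitize(x, np.linspace(0.0, 1.0, bins + 1)[1:-1])
    pred = np.empty_like(y)
    for b in range(bins):
        mask = idx == b
        pred[mask] = y[mask].mean()
    return 1.0 - np.var(y - pred) / np.var(y)

def linear_r2(x, y):
    r = np.corrcoef(x, y)[0, 1]
    return r * r

lin = linear_r2(X[:, 0], y)    # near zero: no linear association
smo = smooth_r2(X[:, 0], y)    # near one: strong nonlinear association
print(lin, smo)
```

    The cosine dependence has zero linear correlation with the input, so a rank- or linear-regression-based measure would call the first input unimportant; the smoother correctly attributes most of the output variance to it.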

  5. Elemental analysis by surface-enhanced Laser-Induced Breakdown Spectroscopy combined with liquid–liquid microextraction

    International Nuclear Information System (INIS)

    Aguirre, M.A.; Legnaioli, S.; Almodóvar, F.; Hidalgo, M.; Palleschi, V.; Canals, A.

    2013-01-01

    In this work, the possibility of using Laser-Induced Breakdown Spectrometry (LIBS) combined with liquid–liquid microextraction techniques is evaluated as a simple and fast method for trace elemental analysis. Two different strategies for LIBS analysis of manganese contained in microdroplets of extraction solvent (Triton X-114) are studied: (i) analysis by direct laser irradiation of microdroplets; and (ii) analysis by laser irradiation of microdroplets dried on metallic substrates (surface-enhanced LIBS — SENLIBS). Experiments were carried out using synthetic samples with different concentrations of manganese in a 10% w/w Triton X-114 matrix. Analysis by direct laser irradiation of microdroplets showed low precision and sensitivity, and poor linearity across the concentration range evaluated (R² …) … L⁻¹ of Mn. - Highlights: ► LIBS combined with microextraction procedures for trace analysis is proposed. ► The proposed combination depends on LIBS ability to analyze sample microvolumes. ► A surface-enhanced LIBS methodology for microdroplet analysis was evaluated. ► Results indicate this combination to be promising for trace analysis in liquids

  6. Application of SWAT99.2 to sensitivity analysis of water balance components in unique plots in a hilly region

    Directory of Open Access Journals (Sweden)

    Jun-feng Dai

    2017-07-01

    Full Text Available Although many sensitivity analyses using the soil and water assessment tool (SWAT) in complex watersheds have been conducted, little attention has been paid to the application potential of the model in unique plots. In addition, sensitivity analysis of percolation and evapotranspiration with SWAT has seldom been undertaken. In this study, SWAT99.2 was calibrated to simulate water balance components for unique plots in Southern China from 2000 to 2001, including surface runoff, percolation, and evapotranspiration. Twenty-one parameters classified into four categories, including meteorological conditions, topographical characteristics, soil properties, and vegetation attributes, were used for sensitivity analysis through one-at-a-time (OAT) sampling to identify the factor that contributed most to the variance in water balance components. The results differed between plots, with parameter sensitivity indices and ranks varying for different water balance components. Water balance components in the broad-leaved forest and natural grass plots were most sensitive to meteorological conditions, less sensitive to vegetation attributes and soil properties, and least sensitive to topographical characteristics. Compared to those in the natural grass plot, water balance components in the broad-leaved forest plot demonstrated higher sensitivity to the maximum stomatal conductance (GSI) and maximum leaf area index (BLAI).
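
    The one-at-a-time (OAT) sampling used in this record can be sketched as follows. The response function, parameter names, and baseline values are hypothetical stand-ins, not the actual SWAT99.2 equations.

```python
# Hypothetical water-balance response standing in for the SWAT plot model;
# the functional form and coefficients are invented for illustration only.
def et_model(gsi, blai, slope):
    # annual evapotranspiration (mm) vs. max stomatal conductance (GSI),
    # max leaf area index (BLAI) and terrain slope (%)
    return 400.0 * (gsi / 0.005) ** 0.6 * (blai / 4.0) ** 0.3 * (1.0 - 0.02 * slope)

base = {"gsi": 0.005, "blai": 4.0, "slope": 2.0}
y0 = et_model(**base)

# One-at-a-time (OAT) screening: perturb each parameter by +10% and form the
# dimensionless sensitivity index S = (dy / y0) / (dx / x0).
S = {}
for name in base:
    p = dict(base)
    p[name] *= 1.10
    S[name] = (et_model(**p) - y0) / y0 / 0.10

print({k: round(v, 3) for k, v in S.items()})
```

    OAT indices are cheap (one extra model run per parameter) but, unlike variance-based indices, they only probe the neighbourhood of the baseline and miss interactions, which is why they are typically used for screening.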

  7. A sensitive electrochemiluminescence cytosensor for quantitative evaluation of epidermal growth factor receptor expressed on cell surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Yanjuan; Zhang, Shaolian; Wen, Qingqing; Huang, Hongxing; Yang, Peihui, E-mail: typh@jnu.edu.cn

    2015-06-30

    Highlights: • EGF-cytosensor was used for evaluating EGFR expression level on cell surfaces. • CdS QDs and EGF were coated on magnetic beads (MBs) for the ECL probe. • Good sensitivity was achieved due to the signal amplification of the ECL probe. - Abstract: A sensitive electrochemiluminescence (ECL) strategy for evaluating the epidermal growth factor receptor (EGFR) expression level on cell surfaces was designed by integrating the specific recognition of EGFR expressed on MCF-7 cell surfaces with an epidermal growth factor (EGF)-functionalized, CdS quantum dot (CdS QD)-capped magnetic bead (MB) probe. The highly sensitive EGF-functionalized, CdS QD-capped MB probe was used in a competitive recognition scheme between EGFR expressed on cell surfaces and recombinant EGFR protein. The changes in ECL intensity depended on both the cell number and the expression level of the EGFR receptor on cell surfaces. A wide linear response to cells ranging from 80 to 4 × 10⁶ cells mL⁻¹ with a detection limit of 40 cells mL⁻¹ was obtained. The EGF-cytosensor was used to evaluate EGFR expression levels on MCF-7 cells; the average number of EGFR receptors on a single MCF-7 cell was 1.35 × 10⁵ with a relative standard deviation of 4.3%. This strategy was further used for in-situ, real-time evaluation of the EGFR receptor expressed on cell surfaces in response to drug stimulation at different concentrations and incubation times. The proposed method has potential applications in the detection of receptors on cancer cells and in anticancer drug screening.

  8. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    Energy Technology Data Exchange (ETDEWEB)

    Blonigan, Patrick J., E-mail: blonigan@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    2014-08-01

    Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.

  9. Analysis of painted arts by energy sensitive radiographic techniques with the Pixel Detector Timepix

    International Nuclear Information System (INIS)

    Zemlicka, J; Jakubek, J; Kroupa, M; Hradil, D; Hradilova, J; Mislerova, H

    2011-01-01

    Non-invasive techniques utilizing X-ray radiation offer a significant advantage in scientific investigations of painted arts and other cultural artefacts such as painted artworks or statues. In addition, there is also great demand for a mobile analytical and real-time imaging device given the fact that many fine artworks cannot be transported. The highly sensitive hybrid semiconductor pixel detector, Timepix, is capable of detecting and resolving subtle and low-contrast differences in the inner composition of a wide variety of objects. Moreover, it is able to map the surface distribution of the contained elements. Several transmission and emission techniques are presented which have been proposed and tested for the analysis of painted artworks. This study focuses on the novel techniques of X-ray transmission radiography (conventional and energy sensitive) and X-ray induced fluorescence imaging (XRF) which can be realised at the table-top scale with the state-of-the-art pixel detector Timepix. Transmission radiography analyses the changes in the X-ray beam intensity caused by specific attenuation of different components in the sample. The conventional approach uses all energies from the source spectrum for the creation of the image while the energy sensitive alternative creates images in given energy intervals which enable identification and separation of materials. The XRF setup is based on the detection of characteristic radiation induced by X-ray photons through a pinhole geometry collimator. The XRF method is extremely sensitive to the material composition but it creates only surface maps of the elemental distribution. For the purpose of the analysis several sets of painted layers have been prepared in a restoration laboratory. The composition of these layers corresponds to those of real historical paintings from the 19th century. 
An overview of the current status of our methods will be given with respect to the instrumentation and the application in the field of

  10. Analysis of painted arts by energy sensitive radiographic techniques with the Pixel Detector Timepix

    Energy Technology Data Exchange (ETDEWEB)

Zemlicka, J; Jakubek, J; Kroupa, M [Institute of Experimental and Applied Physics, Czech Technical University Prague, Horska 3a/22, 128 00 Prague 2 (Czech Republic); Hradil, D [Institute of Inorganic Chemistry, AS CR, v.v.i., ALMA, 50 68 Husinec-Řež (Czech Republic); Hradilova, J; Mislerova, H, E-mail: jan.zemlicka@utef.cvut.cz [Academy of Fine Arts in Prague, ALMA, U Akademie 4, 170 2, Prague 7 (Czech Republic)

    2011-01-15

Non-invasive techniques utilizing X-ray radiation offer a significant advantage in scientific investigations of painted arts and other cultural artefacts such as painted artworks or statues. In addition, there is also great demand for a mobile analytical and real-time imaging device given the fact that many fine arts cannot be transported. The highly sensitive hybrid semiconductor pixel detector, Timepix, is capable of detecting and resolving subtle and low-contrast differences in the inner composition of a wide variety of objects. Moreover, it is able to map the surface distribution of the contained elements. Several transmission and emission techniques are presented which have been proposed and tested for the analysis of painted artworks. This study focuses on the novel techniques of X-ray transmission radiography (conventional and energy sensitive) and X-ray induced fluorescence imaging (XRF) which can be realised at the table-top scale with the state-of-the-art pixel detector Timepix. Transmission radiography analyses the changes in the X-ray beam intensity caused by specific attenuation of different components in the sample. The conventional approach uses all energies from the source spectrum for the creation of the image while the energy sensitive alternative creates images in given energy intervals which enable identification and separation of materials. The XRF setup is based on the detection of characteristic radiation induced by X-ray photons through a pinhole geometry collimator. The XRF method is extremely sensitive to the material composition but it creates only surface maps of the elemental distribution. For the purpose of the analysis several sets of painted layers have been prepared in a restoration laboratory. The composition of these layers corresponds to those of real historical paintings from the 19th century.
An overview of the current status of our methods will be given with respect to the instrumentation and the application in the field

  11. O2 Plasma Etching and Antistatic Gun Surface Modifications for CNT Yarn Microelectrode Improve Sensitivity and Antifouling Properties.

    Science.gov (United States)

    Yang, Cheng; Wang, Ying; Jacobs, Christopher B; Ivanov, Ilia N; Venton, B Jill

    2017-05-16

Carbon nanotube (CNT) based microelectrodes exhibit rapid and selective detection of neurotransmitters. While different fabrication strategies and geometries of CNT microelectrodes have been characterized, relatively little research has investigated ways to selectively enhance their electrochemical properties. In this work, we introduce two simple, reproducible, low-cost, and efficient surface modification methods for carbon nanotube yarn microelectrodes (CNTYMEs): O2 plasma etching and antistatic gun treatment. O2 plasma etching was performed by a microwave plasma system with oxygen gas flow, and the optimized treatment time was 1 min. The antistatic gun treatment flows ions past the electrode surface; two triggers of the antistatic gun were found to be optimal for the CNTYME surface. Current for dopamine at CNTYMEs increased 3-fold after O2 plasma etching and 4-fold after antistatic gun treatment. When the two treatments were combined, the current increased 12-fold, showing the two effects are due to independent mechanisms that tune the surface properties. O2 plasma etching increased the sensitivity due to increased surface oxygen content but did not affect surface roughness, while the antistatic gun treatment increased surface roughness but not oxygen content. The effect of tissue fouling on CNT yarns was studied for the first time, and the relatively hydrophilic surface after O2 plasma etching provided better resistance to fouling than unmodified or antistatic gun treated CNTYMEs. Overall, O2 plasma etching and antistatic gun treatment improve the sensitivity of CNTYMEs by different mechanisms, providing the possibility to tune the CNTYME surface and enhance sensitivity.

  12. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    Science.gov (United States)

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists with a new tool for the analysis of their SBML models of biochemical and cellular processes.
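One of the global methods named above, the partial rank correlation coefficient (PRCC), is easy to sketch from scratch. The following is a generic toy illustration, not SBML-SAT code: the three-parameter linear model, sample size, and noise level are all invented for the example.

```python
import random

def ranks(xs):
    # Rank-transform a sample (1-based ranks, no tie handling needed here).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = float(pos + 1)
    return r

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for the small normal equations.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def residuals(y, X):
    # Residuals of y regressed (with intercept) on the columns of X.
    Xi = [[1.0] + row for row in X]
    k = len(Xi[0])
    A = [[sum(r[i] * r[j] for r in Xi) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yy for r, yy in zip(Xi, y)) for i in range(k)]
    beta = solve(A, b)
    return [yy - sum(bb * xx for bb, xx in zip(beta, r)) for yy, r in zip(y, Xi)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((z - mb) ** 2 for z in b)
    return cov / (va * vb) ** 0.5

def prcc(params, y):
    # PRCC of each parameter: correlate the rank residuals of that parameter
    # and of the output after regressing out the ranks of the other parameters.
    R = [ranks(p) for p in params]
    Ry = ranks(y)
    out = []
    for j in range(len(R)):
        others = [[R[m][i] for m in range(len(R)) if m != j] for i in range(len(y))]
        out.append(pearson(residuals(R[j], others), residuals(Ry, others)))
    return out

random.seed(0)
N = 500
p1 = [random.uniform(0, 1) for _ in range(N)]
p2 = [random.uniform(0, 1) for _ in range(N)]
p3 = [random.uniform(0, 1) for _ in range(N)]
# Invented toy "model": p1 dominates, p3 is nearly inert.
y = [5 * a + 0.5 * b + 0.01 * c + random.gauss(0, 0.2)
     for a, b, c in zip(p1, p2, p3)]
coeffs = prcc([p1, p2, p3], y)
print(coeffs)
```

The dominant parameter scores near 1 because PRCC correlates rank-transformed residuals after regressing out the other parameters, while the nearly inert parameter scores near 0.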

  13. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and to affect the outcome only through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the association between the putative IV and the unmeasured confounders, and the direct effect of the IV on the outcome, are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.
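The Anderson-Rubin idea underlying the approach can be illustrated in a few lines: under the hypothesis beta = beta0, and for a valid IV, the residual y - beta0*d should be unrelated to the instrument. The simulation below is hypothetical (invented effect sizes, and without the sensitivity-parameter extension the abstract describes); it only sketches the basic test.

```python
import random, math

def ar_test(y, d, z, beta0):
    # Anderson-Rubin style check: under H0 (beta = beta0) and a valid IV,
    # the residual y - beta0*d is uncorrelated with the instrument z.
    # Returns the t-statistic of the slope of the residual on z.
    e = [yy - beta0 * dd for yy, dd in zip(y, d)]
    n = len(e)
    mz, me = sum(z) / n, sum(e) / n
    szz = sum((zz - mz) ** 2 for zz in z)
    b = sum((zz - mz) * (ee - me) for zz, ee in zip(z, e)) / szz
    resid = [ee - me - b * (zz - mz) for zz, ee in zip(z, e)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se = math.sqrt(s2 / szz)
    return b / se  # approximately N(0, 1) under H0

random.seed(1)
n = 2000
z = [random.gauss(0, 1) for _ in range(n)]          # instrument
u = [random.gauss(0, 1) for _ in range(n)]          # unmeasured confounder
d = [0.8 * zz + uu + random.gauss(0, 1) for zz, uu in zip(z, u)]
beta_true = 2.0
y = [beta_true * dd + uu + random.gauss(0, 1) for dd, uu in zip(d, u)]

t_at_truth = ar_test(y, d, z, beta_true)   # small: H0 holds
t_at_wrong = ar_test(y, d, z, 0.0)         # large: H0 rejected
print(t_at_truth, t_at_wrong)
```

Inverting this test over a grid of beta0 values gives a confidence set, which is what makes the approach valid even when the instrument is weak.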

  14. A self-amplified transistor immunosensor under dual gate operation: highly sensitive detection of hepatitis B surface antigen

    Science.gov (United States)

    Lee, I.-K.; Jeun, M.; Jang, H.-J.; Cho, W.-J.; Lee, K. H.

    2015-10-01

    Ion-sensitive field-effect transistors (ISFETs), although they have attracted considerable attention as effective immunosensors, have still not been adopted for practical applications owing to several problems: (1) the poor sensitivity caused by the short Debye screening length in media with high ion concentration, (2) time-consuming preconditioning processes for achieving the highly-diluted media, and (3) the low durability caused by undesirable ions such as sodium chloride in the media. Here, we propose a highly sensitive immunosensor based on a self-amplified transistor under dual gate operation (immuno-DG ISFET) for the detection of hepatitis B surface antigen. To address the challenges in current ISFET-based immunosensors, we have enhanced the sensitivity of an immunosensor by precisely tailoring the nanostructure of the transistor. In the pH sensing test, the immuno-DG ISFET showed superior sensitivity (2085.53 mV per pH) to both standard ISFET under single gate operation (58.88 mV per pH) and DG ISFET with a non-tailored transistor (381.14 mV per pH). Moreover, concerning the detection of hepatitis B surface antigens (HBsAg) using the immuno-DG ISFET, we have successfully detected trace amounts of HBsAg (22.5 fg mL-1) in a non-diluted 1× PBS medium with a high sensitivity of 690 mV. Our results demonstrate that the proposed immuno-DG ISFET can be a biosensor platform for practical use in the diagnosis of various diseases.Ion-sensitive field-effect transistors (ISFETs), although they have attracted considerable attention as effective immunosensors, have still not been adopted for practical applications owing to several problems: (1) the poor sensitivity caused by the short Debye screening length in media with high ion concentration, (2) time-consuming preconditioning processes for achieving the highly-diluted media, and (3) the low durability caused by undesirable ions such as sodium chloride in the media. Here, we propose a highly sensitive immunosensor

  15. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
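The multivariate idea can be sketched on a deliberately tiny MDP: sample the uncertain parameter, re-solve for the optimal policy, and record how often the base-case optimal policy remains optimal. That frequency is one point of what the authors call a policy acceptability curve. The two-state disease model, rewards, discount factor and Beta uncertainty below are all invented for the sketch.

```python
import random

GAMMA = 0.95
STATES = [0, 1]
ACTIONS = [0, 1]

def value_iteration(P, R, tol=1e-8):
    # P[a][s][s'] transition probabilities, R[a][s] rewards.
    # Returns the optimal value function and the greedy policy.
    V = [0.0, 0.0]
    while True:
        Q = [[R[a][s] + GAMMA * sum(P[a][s][t] * V[t] for t in STATES)
              for a in ACTIONS] for s in STATES]
        Vn = [max(Q[s]) for s in STATES]
        if max(abs(a - b) for a, b in zip(V, Vn)) < tol:
            policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in STATES]
            return Vn, policy
        V = Vn

def model(p_cure):
    # Hypothetical 2-state chronic-disease model: state 0 = well, 1 = ill.
    # Action 0 = watchful waiting, action 1 = treat (costly but may cure).
    P = [
        [[0.9, 0.1], [0.0, 1.0]],                 # wait
        [[0.95, 0.05], [p_cure, 1.0 - p_cure]],   # treat
    ]
    R = [
        [1.0, 0.0],    # rewards under waiting
        [0.7, -0.3],   # treatment cost lowers immediate reward
    ]
    return P, R

# Base case: cure probability 0.6.
_, base_policy = value_iteration(*model(0.6))

# Probabilistic sensitivity analysis: sample p_cure from its uncertainty
# distribution and count how often the base-case policy stays optimal.
random.seed(0)
agree = 0
n_sims = 200
for _ in range(n_sims):
    p = random.betavariate(6, 4)   # uncertainty centred near 0.6
    _, pol = value_iteration(*model(p))
    agree += (pol == base_policy)
confidence = agree / n_sims
print(base_policy, confidence)
```

Sweeping a stakeholder's acceptance threshold against this confidence value (and repeating under different priors) traces out the acceptability curve; the real method additionally handles joint uncertainty in many parameters.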

  16. Sensitivity analysis of the reactor safety study. Final report

    International Nuclear Information System (INIS)

    Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.

    1979-01-01

The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light water nuclear reactors. In order to give further insights into this study, a sensitivity analysis has been performed to determine the significant contributors to risk for both the PWR and BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were release category probabilities, core melt probability, and the risk parameters of early fatalities, latent cancers and total property damage. The latter three are adequate for describing all public risks identified in the RSS. The results indicate reductions of public risk by less than a factor of two for factor reductions in system or generic failure probabilities as high as one hundred. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates.

  17. Sensitivity analysis for contagion effects in social networks

    Science.gov (United States)

    VanderWeele, Tyler J.

    2014-01-01

Analyses of social network data have suggested that obesity, smoking, happiness and loneliness all travel through social networks. Individuals exert “contagion effects” on one another through social ties and association. These analyses have come under critique because of the possibility that homophily from unmeasured factors may explain these statistical associations and because similar findings can be obtained when the same methodology is applied to height, acne and headaches, for which the conclusion of contagion effects seems somewhat less plausible. We use sensitivity analysis techniques to assess the extent to which supposed contagion effects for obesity, smoking, happiness and loneliness might be explained away by homophily or confounding and the extent to which the critique using analysis of data on height, acne and headaches is relevant. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so. Supposed effects for height, acne and headaches are all easily explained away by latent homophily and confounding. The methodology that has been employed in past studies for contagion effects in social networks, when used in conjunction with sensitivity analysis, may prove useful in establishing social influence for various behaviors and states. The sensitivity analysis approach can be used to address the critique of latent homophily as a possible explanation of associations interpreted as contagion effects. PMID:25580037

  18. Sensitivity of Climate Simulations to Land-Surface and Atmospheric Boundary-Layer Treatments-A Review.

    Science.gov (United States)

    Garratt, J. R.

    1993-03-01

Aspects of the land-surface and boundary-layer treatments in some 20 or so atmospheric general circulation models (GCMs) are summarized. In only a small fraction of these have significant sensitivity studies been carried out and published. Predominantly, the sensitivity studies focus upon the parameterization of land-surface processes and specification of land-surface properties; the most important of these include albedo, roughness length, soil moisture status, and vegetation density. The impacts of surface albedo and soil moisture upon the climate simulated in GCMs with bare-soil land surfaces are well known. Continental evaporation and precipitation tend to decrease with increased albedo and decreased soil moisture availability. For example, results from numerous studies give an average decrease in continental precipitation of 1 mm day⁻¹ in response to an average albedo increase of 0.13. Few conclusive studies have been carried out on the impact of a gross roughness-length change; the primary study included an important statistical assessment of the impact upon the mean July climate around the globe of a decreased continental roughness (by three orders of magnitude). For example, such a decrease reduced the precipitation over Amazonia by 1 to 2 mm day⁻¹. The inclusion of a canopy scheme in a GCM ensures the combined impacts of roughness (canopies tend to be rougher than bare soil), albedo (canopies tend to be less reflective than bare soil), and soil-moisture availability (canopies prevent the near-surface soil region from drying out and can access the deep soil moisture) upon the simulated climate. The most revealing studies to date involve the regional impact of Amazonian deforestation. The results of four such studies show that replacing tropical forest with a degraded pasture results in decreased evaporation (∼1 mm day⁻¹) and precipitation (1-2 mm day⁻¹), and increased near-surface air temperatures (∼2 K). Sensitivity studies as a whole suggest the need for a

  19. Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model

    International Nuclear Information System (INIS)

    Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.

    2016-01-01

    Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.

  20. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  1. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
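A minimal sketch of the variance-based global sensitivity analysis advocated here, using a standard Saltelli-type Monte Carlo estimator of first-order Sobol indices. The additive toy function stands in for the (far more expensive) cellular Potts model output; its coefficients are invented so that parameter 0 dominates.

```python
import random

def sobol_first_order(f, k, n=2000, seed=0):
    # Saltelli-style Monte Carlo estimator of first-order Sobol indices
    # for a model f taking a list of k parameters in [0, 1].
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S = []
    for i in range(k):
        # AB_i: rows of A with column i taken from B.
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        fABi = [f(x) for x in ABi]
        # Estimator of V_i = Var(E[y | x_i]).
        Vi = sum(yb * (yi - ya) for yb, yi, ya in zip(fB, fABi, fA)) / n
        S.append(Vi / var)
    return S

def toy_output(x):
    # Invented stand-in for a morphogenesis output measure (e.g. sprout length):
    # parameter 0 dominates, parameter 2 is nearly inert.
    return 4 * x[0] + x[1] + 0.1 * x[2]

S = sobol_first_order(toy_output, 3)
print(S)
```

A first-order index near 1 means the output variance is almost entirely attributable to that parameter alone; the gap between first-order and total-order indices (not computed here) is what exposes the parameter interactions discussed in the abstract.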

  2. Probabilistic and sensitivity analysis of Botlek Bridge structures

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2017-01-01

This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge in the world. The bridge system consists of six reinforced concrete pylons and two steel decks, each weighing 4000 tons, connected through ropes with counterweights. The paper focuses on the probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results were of high importance for the practical application and design of the bridge. The model and resistance uncertainties were taken into account using the LHS simulation method.

  3. Enhanced photovoltaic performance of Sb2S3-sensitized solar cells through surface treatments

    Science.gov (United States)

    Ye, Qing; Xu, Yafeng; Chen, Wenyong; Yang, Shangfeng; Zhu, Jun; Weng, Jian

    2018-05-01

Efficient antimony sulfide (Sb2S3)-sensitized solar cells were obtained by a sequential treatment with thioacetamide (TA) and 1-decylphosphonic acid (DPA). Compared with the untreated Sb2S3-sensitized solar cells, the power conversion efficiency of the treated Sb2S3 solar cells improved from 1.80% to 3.23%. The TA treatment improved the Sb2S3 films by reducing impurities and decreasing the film's surface defects, which inhibited the emergence of recombination centers. The DPA treatment reduced the recombination between hole transport materials (HTMs) and the Sb2S3. Therefore, we have presented an efficient strategy to improve the performance of Sb2S3-sensitized solar cells.

  4. Measurements of skin friction in water using surface stress sensitive films

    International Nuclear Information System (INIS)

    Crafton, J W; Fonov, S D; Jones, E G; Goss, L P; Forlines, R A; Fontaine, A

    2008-01-01

The measurement of skin friction on hydrodynamic surfaces is of significant value for the design of advanced naval technology, particularly at high Reynolds numbers. Here we report on the development of a new sensor for measurement of skin friction and pressure that operates in both air and water. This sensor is based on an elastic polymer film that deforms under the action of applied normal and tangential loads. Skin friction and pressure gradients are determined by monitoring these deformations and then solving an inverse problem using a finite element model of the elastic film. This technique is known as surface stress sensitive films. In this paper, we describe the development of a sensor package specifically designed for two-dimensional skin friction measurements at a single point. The package has been developed with the goal of making two-dimensional measurements of skin friction in water. Quantitative measurements of skin friction are performed on a high Reynolds number turbulent boundary layer in the 12 inch water tunnel at Penn State University. These skin friction measurements are verified by comparing them to measurements obtained with a drag plate as well as by performing two-dimensional velocity measurements above the sensor using a laser Doppler velocimetry system. The results indicate that the sensor skin friction measurements are accurate to better than 5% and repeatable to better than 2%. The directional sensitivity of the sensor is demonstrated by positioning the sensor at several orientations to the flow. A final interesting feature of this sensor is that it is sensitive to pressure gradients, not to static pressure changes. This feature should prove useful for monitoring the skin friction on a seafaring vessel as the operating depth is changed.

  5. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as

  6. Understanding dynamics using sensitivity analysis: caveat and solution

    Science.gov (United States)

    2011-01-01

    Background Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology, in which the sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe dynamical behaviour of biological systems, the PSA has subsequently been used to elucidate important cellular processes that regulate this dynamics. However, in this paper, we show that the PSA coefficients are not suitable in inferring the mechanisms by which dynamical behaviour arises and in fact it can even lead to incorrect conclusions. Results A careful interpretation of parametric perturbations used in the PSA is presented here to explain the issue of using this analysis in inferring dynamics. In short, the PSA coefficients quantify the integrated change in the system behaviour due to persistent parametric perturbations, and thus the dynamical information of when a parameter perturbation matters is lost. To get around this issue, we present a new sensitivity analysis based on impulse perturbations on system parameters, which is named impulse parametric sensitivity analysis (iPSA). The inability of PSA and the efficacy of iPSA in revealing mechanistic information of a dynamical system are illustrated using two examples involving switch activation. Conclusions The interpretation of the PSA coefficients of dynamical systems should take into account the persistent nature of parametric perturbations involved in the derivation of this analysis. The application of PSA to identify the controlling mechanism of dynamical behaviour can be misleading. By using impulse perturbations, introduced at different times, the iPSA provides the necessary information to understand how dynamics is achieved, i.e. which parameters are essential and when they become important. PMID:21406095
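The distinction the authors draw can be reproduced on a one-variable linear ODE, dx/dt = k1 - k2*x (a hypothetical stand-in, not the paper's switch-activation examples). A persistent perturbation of k1 integrates over the whole horizon, which is what a classical PSA coefficient measures, while an impulse perturbation reveals when the parameter matters: here a late impulse affects x(T) far more than an early one because the state decays.

```python
def simulate(k1, k2, T=10.0, dt=0.001, bump=None):
    # Forward-Euler integration of dx/dt = k1 - k2*x with x(0) = 0.
    # bump = (t0, width, dk1) applies an impulse perturbation to k1.
    x, t = 0.0, 0.0
    while t < T:
        k = k1
        if bump and bump[0] <= t < bump[0] + bump[1]:
            k = k1 + bump[2]
        x += (k - k2 * x) * dt
        t += dt
    return x

k1, k2 = 1.0, 0.5
base = simulate(k1, k2)

# Persistent perturbation: finite-difference analogue of a PSA coefficient
# for the output x(T); analytically (1/k2)*(1 - exp(-k2*T)) ~ 1.99.
psa = (simulate(k1 + 0.01, k2) - base) / 0.01

# Impulse perturbations of k1 at different times (the iPSA idea):
early = (simulate(k1, k2, bump=(1.0, 0.1, 0.01)) - base) / 0.01
late = (simulate(k1, k2, bump=(9.0, 0.1, 0.01)) - base) / 0.01
print(psa, early, late)
```

The single PSA number hides the timing information; the impulse sensitivities show that k1 matters for the final state mainly near the end of the horizon, which is exactly the kind of mechanistic information the iPSA is designed to recover.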

  7. Pressurized thermal shock probabilistic fracture mechanics sensitivity analysis for Yankee Rowe reactor pressure vessel

    International Nuclear Information System (INIS)

    Dickson, T.L.; Cheverton, R.D.; Bryson, J.W.; Bass, B.R.; Shum, D.K.M.; Keeney, J.A.

    1993-08-01

The Nuclear Regulatory Commission (NRC) requested Oak Ridge National Laboratory (ORNL) to perform a pressurized-thermal-shock (PTS) probabilistic fracture mechanics (PFM) sensitivity analysis for the Yankee Rowe reactor pressure vessel, for the fluences corresponding to the end of operating cycle 22, using a specific small-break loss-of-coolant transient as the loading condition. Regions of the vessel with distinguishing features were to be treated individually -- upper axial weld, lower axial weld, circumferential weld, upper plate spot welds, upper plate regions between the spot welds, lower plate spot welds, and the lower plate regions between the spot welds. The fracture analysis methods used in the analysis of through-clad surface flaws were those contained in the established OCA-P computer code, which was developed during the Integrated Pressurized Thermal Shock (IPTS) Program. The NRC request specified that the OCA-P code be enhanced for this study to also calculate the conditional probabilities of failure for subclad flaws and embedded flaws. The results of this sensitivity analysis provide the NRC with (1) data that could be used to assess the relative influence of a number of key input parameters in the Yankee Rowe PTS analysis and (2) data that can be used for readily determining the probability of vessel failure once a more accurate indication of vessel embrittlement becomes available. This report is designated as HSST report No. 117.

  8. Near-surface compressional and shear wave speeds constrained by body-wave polarization analysis

    Science.gov (United States)

    Park, Sunyoung; Ishii, Miaki

    2018-06-01

A new technique to constrain near-surface seismic structure that relates body-wave polarization direction to the wave speed immediately beneath a seismic station is presented. The P-wave polarization direction is sensitive only to shear wave speed, not to compressional wave speed, while the S-wave polarization direction is sensitive to both wave speeds. The technique is applied to data from the High-Sensitivity Seismograph Network in Japan, and the results show that the wave speed estimates obtained from polarization analysis are compatible with those from borehole measurements. The lateral variations in wave speeds correlate with geological and physical features such as topography and volcanoes. The technique requires minimal computation resources, and can be used on any number of three-component teleseismic recordings, opening opportunities for non-invasive and inexpensive study of the shallowest (~100 m) crustal structures.

  9. The hidden radiation chemistry in plasma modification and XPS analysis of polymer surfaces

    International Nuclear Information System (INIS)

    George, G.A.; Le, T.T.; Elms, F.M.; Wood, B.J.

    1996-01-01

    Full text: The surface modification of polymers using plasma treatments is widely researched as a means of changing surface energetics, and hence wetting and reactivity, for a range of applications. These include (i) adhesion for polymer bonding and composite material fabrication and (ii) biocompatibility of polymers used as orthopedic implants, catheters and prosthetics. A low-pressure rf plasma produces a variety of species from the introduced gas which may react with the surface of a hydrocarbon polymer such as polyethylene. In the case of O₂ and H₂O, these species include oxygen atoms, singlet molecular oxygen and hydroxyl radicals, all of which may oxidise and, depending on their energy, ablate the polymer surface. In order to better understand the reactive species formed both in and downstream from a plasma, and the relative contributions of oxidation and ablation, self-assembled monolayers of n-alkane thiols on gold are being used as well-characterised substrates for quantitative X-ray photoelectron spectroscopy (XPS). The identification and quantification of oxidised carbon species on plasma-treated polymers from broad, asymmetric XPS signals is difficult, so derivatisation is often used to enhance sensitivity and specificity. For example, trifluoroacetic anhydride (TFAA) selectively labels hydroxyl functionality. The surface analysis of a modified polymer surface may be confounded by high-energy radiation chemistry occurring during the XPS analysis itself. Examples include scission of carbon-halogen bonds (as in TFAA adducts), decarboxylation and main-chain polyene formation. The extent of free-radical chemistry occurring in polyethylene while undergoing XPS analysis may be seen by both ESR and FT-IR analysis.

  10. An adaptive Mantel-Haenszel test for sensitivity analysis in observational studies.

    Science.gov (United States)

    Rosenbaum, Paul R; Small, Dylan S

    2017-06-01

    In a sensitivity analysis in an observational study with a binary outcome, is it better to use all of the data or to focus on subgroups that are expected to experience the largest treatment effects? The answer depends on features of the data that may be difficult to anticipate, a trade-off between unknown effect-sizes and known sample sizes. We propose a sensitivity analysis for an adaptive test similar to the Mantel-Haenszel test. The adaptive test performs two highly correlated analyses, one focused analysis using a subgroup, one combined analysis using all of the data, correcting for multiple testing using the joint distribution of the two test statistics. Because the two component tests are highly correlated, this correction for multiple testing is small compared with, for instance, the Bonferroni inequality. The test has the maximum design sensitivity of two component tests. A simulation evaluates the power of a sensitivity analysis using the adaptive test. Two examples are presented. An R package, sensitivity2x2xk, implements the procedure. © 2016, The International Biometric Society.
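
    The multiplicity correction described above can be sketched numerically: under the null, the focused and combined statistics are approximately bivariate normal with known correlation, so the p-value for their maximum comes from the bivariate normal CDF rather than a Bonferroni bound. A minimal scipy illustration (this is the correction idea only, not the sensitivity2x2xk implementation; the correlation value is assumed known):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def adaptive_pvalue(z_focused, z_combined, rho):
    """P(max(Z1, Z2) >= observed max) for correlated standard normals."""
    m = max(z_focused, z_combined)
    joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    return 1.0 - joint.cdf(np.array([m, m]))

p_joint = adaptive_pvalue(2.0, 1.5, rho=0.9)     # joint-distribution correction
p_bonf = 2.0 * (1.0 - norm.cdf(2.0))             # Bonferroni, for comparison
```

Because the two component analyses share most of their data, rho is high and the corrected p-value sits only slightly above the single-test p-value, well below the Bonferroni bound.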

  11. Sensitivity analysis for improving nanomechanical photonic transducers biosensors

    International Nuclear Information System (INIS)

    Fariña, D; Álvarez, M; Márquez, S; Lechuga, L M; Dominguez, C

    2015-01-01

    The achievement of high sensitivity and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report the analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependency on device design and dimensions. The selected layout (geometry) based on two butt-coupled misaligned waveguides displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness range between 150 nm and 400 nm would increase both microcantilever bending during the biorecognition process and increase optical sensitivity to 4.8   ×   10 −2  nm −1 , an order of magnitude higher than other similar opto-mechanical devices. Moreover, the analysis shows that a single mode behaviour of the propagating radiation is required to avoid modal interference that could misinterpret the readout signal. (paper)

  12. Sensitivity Analysis of Weather Variables on Offsite Consequence Analysis Tools in South Korea and the United States

    Directory of Open Access Journals (Sweden)

    Min-Uk Kim

    2018-05-01

    Full Text Available We studied sensitive weather variables for consequence analysis, in the case of chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used the OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH₃), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO₃). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using the Statistical Package for the Social Sciences (SPSS) program for dummy regression analysis. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to cause any differences in impact distance due to input errors of weather variables, with the most sensitive one being atmospheric stability.

  13. Sensitivity Analysis of Weather Variables on Offsite Consequence Analysis Tools in South Korea and the United States.

    Science.gov (United States)

    Kim, Min-Uk; Moon, Kyong Whan; Sohn, Jong-Ryeul; Byeon, Sang-Hoon

    2018-05-18

    We studied sensitive weather variables for consequence analysis, in the case of chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH₃), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO₃). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using the Statistical Package for the Social Sciences (SPSS) program for dummy regression analysis. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to cause any differences in impact distance due to input errors of weather variables, with the most sensitive one being atmospheric stability.
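
    The dummy regression underlying the ranking above can be sketched with ordinary least squares: continuous weather variables enter directly, while the categorical atmospheric stability enters as dummy (indicator) columns. A self-contained numpy illustration on synthetic data (the coefficients, scales, and variable coding are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
temp = rng.normal(20.0, 5.0, n)              # air temperature, deg C
wind = rng.normal(3.0, 1.0, n)               # wind speed, m/s
stab = rng.integers(0, 3, n)                 # stability class, coded 0/1/2
# Dummy-code the categorical variable (class 0 is the reference level).
d1 = (stab == 1).astype(float)
d2 = (stab == 2).astype(float)

# Synthetic "impact distance": stability dominates, then temperature.
dist = 500 + 30*temp + 10*wind + 400*d1 + 900*d2 + rng.normal(0.0, 20.0, n)

# Ordinary least squares with an intercept column (the dummy regression).
X = np.column_stack([np.ones(n), temp, wind, d1, d2])
beta, *_ = np.linalg.lstsq(X, dist, rcond=None)
```

The fitted coefficients recover the planted effects, and the dummy coefficients dwarf the continuous ones, mirroring the study's finding that stability class drives impact distance.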

  14. Discrimination of surface wear on obsidian tools using LSCM and RelA: pilot study results (area-scale analysis of obsidian tool surfaces).

    Science.gov (United States)

    Stemp, W James; Chung, Steven

    2011-01-01

    This pilot study tests the reliability of laser scanning confocal microscopy (LSCM) to quantitatively measure wear on experimental obsidian tools. To our knowledge, this is the first use of confocal microscopy to study wear on stone flakes made from an amorphous silicate like obsidian. Three-dimensional surface roughness or texture area scans on three obsidian flakes used on different contact materials (hide, shell, wood) were documented using the LSCM to determine whether the worn surfaces could be discriminated using area-scale analysis, specifically relative area (RelA). When coupled with the F-test, this scale-sensitive fractal analysis could not only discriminate the used from unused surfaces on individual tools, but was also capable of discriminating the wear histories of tools used on different contact materials. Results indicate that such discriminations occur at different scales. Confidence levels for the discriminations at different scales were established using the F-test (mean square ratios or MSRs). In instances where discrimination of surface roughness or texture was not possible above the established confidence level based on MSRs, photomicrographs and RelA assisted in hypothesizing why this was so. Copyright © 2011 Wiley Periodicals, Inc.
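
    Relative area itself is straightforward to compute from a height map: the surface is tiled with triangles whose lateral size sets the measurement scale, and the summed 3-D facet area is divided by the projected area. A hedged numpy sketch (a simplified, grid-aligned version of area-scale analysis; real implementations also handle scales not aligned to the sampling grid):

```python
import numpy as np

def relative_area(z, step, dx=1.0):
    """RelA at one scale: triangulated 3-D area over projected area.

    The height map is subsampled every `step` grid nodes (so the tile
    edge h = step*dx sets the scale), each cell is split into two
    triangles, and their summed area is divided by the nominal area.
    """
    zs = z[::step, ::step]
    h = step * dx
    a, b = zs[:-1, :-1], zs[:-1, 1:]
    c, d = zs[1:, :-1], zs[1:, 1:]

    def tri_area(p, q, r):
        # Edge vectors (h, 0, q-p) and (0, h, r-p); the mirrored second
        # triangle of each cell yields the same area magnitude.
        u = np.stack([np.full_like(p, h), np.zeros_like(p), q - p], axis=-1)
        v = np.stack([np.zeros_like(p), np.full_like(p, h), r - p], axis=-1)
        return 0.5 * np.linalg.norm(np.cross(u, v), axis=-1)

    measured = tri_area(a, b, c).sum() + tri_area(d, c, b).sum()
    projected = h * h * (zs.shape[0] - 1) * (zs.shape[1] - 1)
    return measured / projected
```

RelA equals 1 for a flat surface and grows as the scale shrinks on a rough one; comparing RelA-versus-scale curves between worn and unworn regions is what the F-test on mean square ratios then formalizes.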

  15. Rainfall-induced fecal indicator organisms transport from manured fields: model sensitivity analysis.

    Science.gov (United States)

    Martinez, Gonzalo; Pachepsky, Yakov A; Whelan, Gene; Yakirevich, Alexander M; Guber, Andrey; Gish, Timothy J

    2014-02-01

    Microbial quality of surface waters attracts attention due to food- and waterborne disease outbreaks. Fecal indicator organisms (FIOs) are commonly used to evaluate the level of microbial pollution. Models predicting the fate and transport of FIOs are required to design and evaluate best management practices that reduce microbial pollution in ecosystems and water sources, and thus help to predict the risk of food- and waterborne diseases. In this study we performed a sensitivity analysis for the KINEROS/STWIR model, developed to predict FIO transport out of manured fields to other fields and water bodies, in order to identify the input variables that control transport uncertainty. The distributions of model input parameters were set to encompass values found in three-year experiments at the USDA-ARS OPE3 experimental site in Beltsville and in publicly available information. Sobol' indices and complementary regression trees were used to perform the global sensitivity analysis of the model and to explore the interactions between model input parameters affecting the proportion of FIOs removed from fields. Regression trees provided a useful visualization of the differences in sensitivity of the model output in different parts of the input variable domain. Environmental controls such as soil saturation, rainfall duration and rainfall intensity had the largest influence on model behavior, whereas soil and manure properties ranked lower. The field length had only a moderate effect on the sensitivity of the model output to the model inputs. Among the manure-related properties, the parameter determining the shape of the FIO release kinetic curve had the largest influence on the removal of FIOs from the fields. That underscored the need to better characterize FIO release kinetics. Since the most sensitive model inputs are available in soil and weather databases or can be obtained using soil water models, results indicate the opportunity of obtaining large-scale estimates of FIO
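
    First-order Sobol' indices of the kind used here can be estimated with the classic two-matrix Monte Carlo scheme. A compact numpy sketch under the usual assumptions (independent inputs rescaled to the unit cube; the estimator is Saltelli's, not necessarily the exact implementation used in the study):

```python
import numpy as np

def sobol_first_order(f, d, n, rng):
    """First-order Sobol' indices on [0,1]^d via Saltelli's two-matrix
    Monte Carlo scheme (independent inputs assumed)."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    s = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                # A with column i taken from B
        s[i] = np.mean(fB * (f(ABi) - fA)) / var
    return s

# Additive test function with known indices S_i = a_i^2 / sum(a^2).
f = lambda x: 4.0*x[:, 0] + 2.0*x[:, 1] + 1.0*x[:, 2]
S = sobol_first_order(f, d=3, n=20000, rng=np.random.default_rng(2))
```

For an additive model the indices sum to one; a shortfall of the summed first-order indices is what flags the input interactions that the regression trees then visualize.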

  16. Dye-Sensitized Solar Cells Based on High Surface Area Nanocrystalline Zinc Oxide Spheres

    Directory of Open Access Journals (Sweden)

    Pavuluri Srinivasu

    2011-01-01

    Full Text Available High surface area nanocrystalline zinc oxide material is fabricated using mesoporous nanostructured carbon as a sacrificial template through a combustion process. The resulting material is characterized by XRD, N₂ adsorption, HR-SEM, and HR-TEM. The nitrogen adsorption measurement indicates that the material possesses a BET specific surface area of ca. 30 m²/g. Electron microscopy images show that the zinc oxide spheres have particle sizes in the range 0.12 μm–0.17 μm. The nanocrystalline zinc oxide spheres show 1.0% energy conversion efficiency in dye-sensitized solar cells.

  17. Variability in surface infrared reflectance of thirteen nitrile rubber gloves at key wavelengths for analysis of captan.

    Science.gov (United States)

    Phalen, R N; Que Hee, Shane S

    2007-02-01

    The aim of this study was to investigate the surface variability of 13 powder-free, unlined, and unsupported nitrile rubber gloves using attenuated total reflection Fourier transform infrared (ATR-FT-IR) spectrophotometry at key wavelengths for analysis of captan contamination. The within-glove, within-lot, and between-lot variability was measured at 740, 1124, 1252, and 1735 cm⁻¹, the characteristic captan reflectance minima. Three glove brands were assessed after conditioning overnight at relative humidity (RH) values ranging from 2 ± 1 to 87 ± 4% and temperatures ranging from -8.6 ± 0.7 to 59.2 ± 0.9 °C. For all gloves, 1735 cm⁻¹ provided the lowest background absorbance and greatest potential sensitivity for captan analysis on the outer glove surface: absorbances ranged from 0.0074 ± 0.0005 (Microflex) to 0.0195 ± 0.0024 (SafeSkin); average within-glove coefficients of variation (CV) ranged from 2.7% (Best, range 0.9–5.3%) to 10% (SafeSkin, 1.2–17%); within-glove CVs greater than 10% occurred for only one brand (SafeSkin); within-lot CVs ranged from 2.8% (Best N-Dex) to 28% (SafeSkin Blue); and between-lot variation was statistically significant (p ≤ 0.05) for all but two SafeSkin lots. The RH had variable effects dependent on wavelength, being minimal at 1735, 1252, and 1124 cm⁻¹ and highest at 3430 cm⁻¹ (O-H stretch region). There was no significant effect of temperature conditioning. Substantial within-glove, within-lot, and between-lot variability was observed; thus, surface analysis using ATR-FT-IR must treat glove brands and lots as different. ATR-FT-IR proved to be a useful real-time analytical tool for measuring glove variability, detecting surface humidity effects, and choosing selective and sensitive wavelengths for analysis of nonvolatile surface contaminants.
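
    The within- and between-level coefficients of variation reported above are simple ratio statistics. A small numpy sketch with invented absorbance replicates (the numbers are illustrative, not the study's data):

```python
import numpy as np

def cv(x):
    """Coefficient of variation in percent (sample sd over mean)."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Hypothetical absorbance replicates at 1735 cm-1: three gloves from one
# lot, three measurement spots per glove (all values invented).
lot = np.array([[0.0074, 0.0076, 0.0073],
                [0.0081, 0.0079, 0.0080],
                [0.0070, 0.0072, 0.0071]])

within_glove = [cv(g) for g in lot]      # spot-to-spot variability per glove
within_lot = cv(lot.mean(axis=1))        # glove-to-glove variability in the lot
```

As in the study, glove-to-glove variation can exceed spot-to-spot variation, which is why brands and lots must be treated separately.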

  18. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    Full Text Available This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yields more robust sensitivity rankings.

  19. Three-dimensional optimization and sensitivity analysis of dental implant thread parameters using finite element analysis.

    Science.gov (United States)

    Geramizadeh, Maryam; Katoozian, Hamidreza; Amid, Reza; Kadkhodazadeh, Mahdi

    2018-04-01

    This study aimed to optimize the thread depth and pitch of a recently designed dental implant to provide uniform stress distribution by means of a response surface optimization method available in finite element (FE) software. The sensitivity of simulation to different mechanical parameters was also evaluated. A three-dimensional model of a tapered dental implant with micro-threads in the upper area and V-shaped threads in the rest of the body was modeled and analyzed using finite element analysis (FEA). An axial load of 100 N was applied to the top of the implants. The model was optimized for thread depth and pitch to determine the optimal stress distribution. In this analysis, micro-threads had 0.25 to 0.3 mm depth and 0.27 to 0.33 mm pitch, and V-shaped threads had 0.405 to 0.495 mm depth and 0.66 to 0.8 mm pitch. The optimized depth and pitch were 0.307 and 0.286 mm for micro-threads and 0.405 and 0.808 mm for V-shaped threads, respectively. In this design, the most effective parameters on stress distribution were the depth and pitch of the micro-threads based on sensitivity analysis results. Based on the results of this study, the optimal implant design has micro-threads with 0.307 and 0.286 mm depth and pitch, respectively, in the upper area and V-shaped threads with 0.405 and 0.808 mm depth and pitch in the rest of the body. These results indicate that micro-thread parameters have a greater effect on stress and strain values.
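
    Response surface optimization of the kind offered by FE packages amounts to fitting a low-order polynomial to sampled design points and locating its stationary point. A numpy sketch with an invented stress function standing in for the FE model (all coefficients and the optimum location are illustrative, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented "maximum stress" response standing in for the FE output,
# with its minimum at depth = 0.30 mm, pitch = 0.29 mm (illustrative).
def stress(depth, pitch):
    return 120.0 + 800.0*(depth - 0.30)**2 + 600.0*(pitch - 0.29)**2

# Sample the design space, as a response-surface method would.
d = rng.uniform(0.25, 0.35, 30)
p = rng.uniform(0.25, 0.33, 30)
z = stress(d, p)

# Least-squares fit of a full quadratic surface (centred for conditioning).
u, v = d - d.mean(), p - p.mean()
X = np.column_stack([np.ones_like(u), u, v, u**2, v**2, u*v])
c, *_ = np.linalg.lstsq(X, z, rcond=None)

# Stationary point of the fitted quadratic: solve grad = 0.
H = np.array([[2*c[3], c[5]], [c[5], 2*c[4]]])
opt = np.linalg.solve(H, -c[1:3]) + np.array([d.mean(), p.mean()])
```

Checking that the Hessian of the fitted surface is positive definite confirms the stationary point is a minimum rather than a saddle, a step worth keeping in any response-surface workflow.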

  20. Assessment of the contamination of drinking water supply wells by pesticides from surface water resources using a finite element reactive transport model and global sensitivity analysis techniques

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Albrechtsen, Hans-Jørgen; Binning, Philip John

    2013-01-01

    A reactive transport model is employed to evaluate the potential for contamination of drinking water wells by surface water pollution. The model considers various geologic settings, includes sorption and degradation processes, and is tested by comparison with data from a tracer experiment where fluorescein dye injected in a river is monitored at nearby drinking water wells. Three compounds were considered: an older pesticide, MCPP (Mecoprop), which is mobile and relatively persistent; glyphosate (Roundup), a newer biodegradable and strongly sorbing pesticide; and its degradation product AMPA. Global sensitivity analysis using the Morris method is employed to identify the dominant model parameters. Results show that the characteristics of clay aquitards (degree of fracturing and thickness), pollutant properties and well depths are crucial factors when evaluating the risk of drinking water well...
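
    The Morris method used here ranks inputs by the mean absolute elementary effect: the averaged magnitude of one-at-a-time finite differences taken from random base points. A minimal numpy sketch (a simplified radial variant on the unit cube; the function and parameter names are illustrative):

```python
import numpy as np

def morris_mu_star(f, d, r, delta=0.25, rng=None):
    """Morris screening: mean absolute elementary effect (mu*) per input."""
    if rng is None:
        rng = np.random.default_rng()
    ee = np.zeros((r, d))
    for k in range(r):
        x = rng.random(d) * (1.0 - delta)      # keep x + delta inside [0, 1]
        fx = f(x)
        for i in range(d):
            xi = x.copy()
            xi[i] += delta                     # one-at-a-time perturbation
            ee[k, i] = abs(f(xi) - fx) / delta
    return ee.mean(axis=0)

# Screening a toy model: x0 dominant, x2 nonlinear, x1 nearly inert.
g = lambda x: 5.0*x[0] + 0.5*x[1] + x[2]**2
mu_star = morris_mu_star(g, d=3, r=50, rng=np.random.default_rng(5))
```

At r trajectories the cost is only r*(d+1) model runs, which is why Morris screening suits expensive transport models where full variance-based analysis would be prohibitive.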

  1. Comparison of global sensitivity analysis methods – Application to fuel behavior modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, Timo, E-mail: timo.ikonen@vtt.fi

    2016-02-15

    Highlights: • Several global sensitivity analysis methods are compared. • The methods’ applicability to nuclear fuel performance simulations is assessed. • The implications of large input uncertainties and complex models are discussed. • Alternative strategies to perform sensitivity analyses are proposed. - Abstract: Fuel performance codes have two characteristics that make their sensitivity analysis challenging: large uncertainties in input parameters and complex, non-linear and non-additive structure of the models. The complex structure of the code leads to interactions between inputs that show as cross terms in the sensitivity analysis. Due to the large uncertainties of the inputs these interactions are significant, sometimes even dominating the sensitivity analysis. For the same reason, standard linearization techniques do not usually perform well in the analysis of fuel performance codes. More sophisticated methods are typically needed in the analysis. To this end, we compare the performance of several sensitivity analysis methods in the analysis of a steady state FRAPCON simulation. The comparison of importance rankings obtained with the various methods shows that even the simplest methods can be sufficient for the analysis of fuel maximum temperature. However, the analysis of the gap conductance requires more powerful methods that take into account the interactions of the inputs. In some cases, moment-independent methods are needed. We also investigate the computational cost of the various methods and present recommendations as to which methods to use in the analysis.

  2. PHOTOGRAMMETRIC TECHNIQUES FOR ROAD SURFACE ANALYSIS

    Directory of Open Access Journals (Sweden)

    V. A. Knyaz

    2016-06-01

    Full Text Available The quality and condition of a road surface are of great importance for convenience and safety of driving. Investigations of the behaviour of road materials under laboratory conditions, and monitoring of existing roads, are therefore widely performed to control geometric parameters and detect defects in the road surface. Photogrammetry, as an accurate non-contact measuring method, provides powerful means for solving different tasks in road surface reconstruction and analysis. The range of dimensions involved in road surface analysis varies greatly, from tenths of a millimetre to hundreds of metres and more, so a set of techniques is needed to meet all requirements of road parameter estimation. Two photogrammetric techniques for road surface analysis are presented: one for accurate measurement of road pavement, and one for road surface reconstruction based on imagery obtained from an unmanned aerial vehicle. The first technique uses a photogrammetric system based on structured light for fast and accurate 3D surface reconstruction, allowing analysis of road texture characteristics and monitoring of pavement behaviour. The second technique provides a dense 3D road model suitable for estimating road macro parameters.

  3. Probabilistic sensitivity analysis of system availability using Gaussian processes

    International Nuclear Information System (INIS)

    Daneshkhah, Alireza; Bedford, Tim

    2013-01-01

    The availability of a system under a given failure/repair process is a function of time which can be determined through a set of integral equations and is usually calculated numerically. We focus here on carrying out sensitivity analysis of availability to determine the influence of the input parameters. The main purpose is to study the sensitivity of the system availability with respect to changes in the main parameters. In the simplest case, where the failure/repair process is (continuous-time, discrete-state) Markovian, explicit formulae are well known. Unfortunately, in more general cases availability is often a complicated function of the parameters without a closed-form solution, so the computation of sensitivity measures would be time-consuming or even infeasible. In this paper, we show how Sobol and other related sensitivity measures can be cheaply computed to measure how changes in the model inputs (failure/repair times) influence the outputs (availability measure). We use a Bayesian framework, called Bayesian analysis of computer code output (BACCO), which is based on using a Gaussian process as an emulator (i.e., an approximation) of complex models/functions. This approach allows effective sensitivity analysis to be achieved using far smaller numbers of model runs than other methods require. The emulator-based sensitivity measure is used to examine the influence of the failure and repair densities' parameters on the system availability. We discuss how to apply the methods practically in the reliability context, considering in particular the selection of parameters and prior distributions and how we can ensure these may be considered independent, one of the key assumptions of the Sobol approach. The method is illustrated on several examples, and we discuss the further implications of the technique for reliability and maintenance analysis.
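
    The emulator idea can be sketched in a few lines: fit a Gaussian-process posterior mean to a modest number of model runs, then do the (cheap) variance-based computations on the emulator instead of the model. A hedged numpy illustration with a toy stand-in for the availability model (the kernel, length-scale, and availability function are all invented, and this is plain GP interpolation, not the full BACCO machinery):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel matrix between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_fit(X, y, jitter=1e-6):
    """Return the GP posterior-mean predictor (zero prior mean)."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return lambda Xs: rbf(Xs, X) @ alpha

# Toy stand-in for the availability model: dominated by input 0.
avail = lambda X: 0.9 - 0.3*X[:, 0] + 0.05*np.sin(2*np.pi*X[:, 1])

rng = np.random.default_rng(4)
Xtr = rng.random((60, 2))                  # 60 "model runs"
emulator = gp_fit(Xtr, avail(Xtr))

# Main-effect variance of each input, evaluated on the cheap emulator:
# variance over x_i of the emulator mean with the other input averaged out.
base = rng.random((2000, 2))
def main_effect_var(i, levels=40):
    means = []
    for val in np.linspace(0.05, 0.95, levels):
        g = base.copy()
        g[:, i] = val
        means.append(emulator(g).mean())
    return np.var(means)

v0, v1 = main_effect_var(0), main_effect_var(1)
```

The thousands of emulator evaluations needed for the variance decomposition replace thousands of runs of the expensive availability code, which is the entire point of the approach.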

  4. Structure and sensitivity analysis of individual-based predator–prey models

    International Nuclear Information System (INIS)

    Imron, Muhammad Ali; Gergs, Andre; Berger, Uta

    2012-01-01

    The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A relatively cheap computational cost, referred to as the Morris method, was chosen to assess the relative effects of all parameters on the model’s outputs and to gain insights into predator–prey systems. Structure and results of the sensitivity analysis of the Sumatran tiger model – the Panthera Population Persistence (PPP) and the Notonecta foraging model (NFM) – were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model as well as attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both of the models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in tiger and backswimmer model. ► The two individual-based models (IBM) differ in space formulations. ► In both models foraging distance is among the sensitive parameters. ► Morris method is applicable for the sensitivity analysis even of complex IBMs.

  5. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
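
    The advantage of smoothing-based procedures over linear regression shows up whenever the input-output relationship is nonlinear and non-monotone. A small numpy sketch contrasting a crude bin-means smoother (standing in for LOESS) with linear R-squared (the example is constructed, not taken from the referenced performance assessment):

```python
import numpy as np

def linear_r2(x, y):
    """Variance explained by straight-line regression of y on x."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

def smoothed_r2(x, y, bins=20):
    """Variance explained by a bin-means smoother of y on x -- a crude
    stand-in for LOESS or other nonparametric smoothers."""
    order = np.argsort(x)
    ys = y[order]
    fitted = np.empty_like(ys)
    for chunk in np.array_split(np.arange(len(ys)), bins):
        fitted[chunk] = ys[chunk].mean()       # local mean within each bin
    return 1.0 - np.var(ys - fitted) / np.var(ys)

rng = np.random.default_rng(5)
x = rng.uniform(-1.0, 1.0, 2000)
y = x**2 + rng.normal(0.0, 0.05, 2000)         # nonlinear, non-monotone
```

Linear and rank regression both score this input near zero, while the smoother attributes nearly all the output variance to it, illustrating why the nonparametric procedures can be more informative.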

  6. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    Science.gov (United States)

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  7. Application of sensitivity analysis for optimized piping support design

    International Nuclear Information System (INIS)

    Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.

    1993-01-01

    The objective of this study was to see if recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems which use non-linear supports and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM and a sensitivity analysis was carried out. Then optimization for the design of the piping system supports was investigated, selecting the support location and yield load of the non-linear supports (bi-linear model) as main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated sensitivity factors are useful in the planning stage of support design. (author)

  8. Least squares shadowing sensitivity analysis of a modified Kuramoto–Sivashinsky equation

    International Nuclear Information System (INIS)

    Blonigan, Patrick J.; Wang, Qiqi

    2014-01-01

    Highlights: •Modifying the Kuramoto–Sivashinsky equation and changing its boundary conditions make it an ergodic dynamical system. •The modified Kuramoto–Sivashinsky equation exhibits distinct dynamics for three different ranges of system parameters. •Least squares shadowing sensitivity analysis computes accurate gradients for a wide range of system parameters. - Abstract: Computational methods for sensitivity analysis are invaluable tools for scientists and engineers investigating a wide range of physical phenomena. However, many of these methods fail when applied to chaotic systems, such as the Kuramoto–Sivashinsky (K–S) equation, which models a number of different chaotic systems found in nature. The following paper discusses the application of a new sensitivity analysis method developed by the authors to a modified K–S equation. We find that least squares shadowing sensitivity analysis computes accurate gradients for solutions corresponding to a wide range of system parameters

  9. Imaging and chemical surface analysis of biomolecular functionalization of monolithically integrated on silicon Mach-Zehnder interferometric immunosensors

    International Nuclear Information System (INIS)

    Gajos, Katarzyna; Angelopoulou, Michailia; Petrou, Panagiota; Awsiuk, Kamil; Kakabakos, Sotirios; Haasnoot, Willem; Bernasik, Andrzej; Rysz, Jakub; Marzec, Mateusz M.; Misiakos, Konstantinos; Raptis, Ioannis; Budkowski, Andrzej

    2016-01-01

    Highlights: • Optimization of probe immobilization with robotic spotter printing overlapping spots. • In-situ inspection of microstructured surfaces of biosensors integrated on silicon. • Imaging and chemical analysis of immobilization, surface blocking and immunoreaction. • Insight with molecular discrimination into step-by-step sensor surface modifications. • Optimized biofunctionalization improves sensor sensitivity and response repeatability. - Abstract: Time-of-flight secondary ion mass spectrometry (imaging, micro-analysis) has been employed to evaluate biofunctionalization of the sensing arm areas of Mach-Zehnder interferometers monolithically integrated on silicon chips for the immunochemical (competitive) detection of bovine κ-casein in goat milk. Biosensor surfaces are examined after: modification with (3-aminopropyl)triethoxysilane, application of multiple overlapping spots of κ-casein solutions, blocking with 100-times diluted goat milk, and reaction with monoclonal mouse anti-κ-casein antibodies in blocking solution. The areas spotted with κ-casein solutions of different concentrations are examined and optimum concentration providing homogeneous coverage is determined. Coverage of biosensor surfaces with biomolecules after each of the sequential steps employed in immunodetection is also evaluated with TOF-SIMS, supplemented by Atomic force microscopy and X-ray photoelectron spectroscopy. Uniform molecular distributions are observed on the sensing arm areas after spotting with optimum κ-casein concentration, blocking and immunoreaction. The corresponding biomolecular compositions are determined with a Principal Component Analysis that distinguished between protein amino acids and milk glycerides, as well as between amino acids characteristic for Mabs and κ-casein, respectively. Use of the optimum conditions (κ-casein concentration) for functionalization of chips with arrays of ten Mach-Zehnder interferometers provided on-chips assays
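
    The Principal Component Analysis step mentioned above, which separates protein-like from glyceride-like compositions, reduces to an SVD of mean-centred peak-intensity data. A generic numpy sketch with synthetic spectra (the peak patterns, group sizes, and noise level are invented, not TOF-SIMS data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for TOF-SIMS peak-intensity tables: two groups of
# spots with different characteristic peak patterns (all values invented).
n_peaks = 12
protein_like = rng.random(n_peaks)
glyceride_like = rng.random(n_peaks)
spectra = np.vstack([
    protein_like + 0.05 * rng.normal(size=(10, n_peaks)),     # spots, group 1
    glyceride_like + 0.05 * rng.normal(size=(10, n_peaks)),   # spots, group 2
])

# PCA by singular value decomposition of the mean-centred data.
Xc = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]            # projection onto the first principal component
```

The first-component scores cleanly separate the two composition groups; the loadings in `Vt[0]` then indicate which peaks drive the separation, analogous to identifying amino-acid versus glyceride fragments.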

  10. Imaging and chemical surface analysis of biomolecular functionalization of monolithically integrated on silicon Mach-Zehnder interferometric immunosensors

    Energy Technology Data Exchange (ETDEWEB)

    Gajos, Katarzyna, E-mail: kasia.fornal@uj.edu.pl [M. Smoluchowski Institute of Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków (Poland); Angelopoulou, Michailia; Petrou, Panagiota [Institute of Nuclear & Radiological Sciences & Technology, Energy & Safety, NCSR Demokritos, P. Grigoriou & Neapoleos St, Aghia Paraksevi 15310, Athens (Greece); Awsiuk, Kamil [M. Smoluchowski Institute of Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków (Poland); Kakabakos, Sotirios [Institute of Nuclear & Radiological Sciences & Technology, Energy & Safety, NCSR Demokritos, P. Grigoriou & Neapoleos St, Aghia Paraksevi 15310, Athens (Greece); Haasnoot, Willem [RIKILT Wageningen UR, Akkermaalsbos 2, 6708 WB Wageningen (Netherlands); Bernasik, Andrzej [Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, Mickiewicza 30, 30-059 Kraków (Poland); Academic Centre for Materials and Nanotechnology, AGH University of Science and Technology, Mickiewicza 30, 30-059 Kraków (Poland); Rysz, Jakub [M. Smoluchowski Institute of Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków (Poland); Marzec, Mateusz M. [Academic Centre for Materials and Nanotechnology, AGH University of Science and Technology, Mickiewicza 30, 30-059 Kraków (Poland); Misiakos, Konstantinos; Raptis, Ioannis [Department of Microelectronics, Institute of Nanoscience and Nanotechnology, NCSR Demokritos, P. Grigoriou & Neapoleos St, Aghia Paraksevi 15310, Athens (Greece); Budkowski, Andrzej [M. Smoluchowski Institute of Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków (Poland)

    2016-11-01

    Highlights: • Optimization of probe immobilization with robotic spotter printing overlapping spots. • In-situ inspection of microstructured surfaces of biosensors integrated on silicon. • Imaging and chemical analysis of immobilization, surface blocking and immunoreaction. • Insight with molecular discrimination into step-by-step sensor surface modifications. • Optimized biofunctionalization improves sensor sensitivity and response repeatability. - Abstract: Time-of-flight secondary ion mass spectrometry (imaging, micro-analysis) has been employed to evaluate biofunctionalization of the sensing arm areas of Mach-Zehnder interferometers monolithically integrated on silicon chips for the immunochemical (competitive) detection of bovine κ-casein in goat milk. Biosensor surfaces are examined after: modification with (3-aminopropyl)triethoxysilane, application of multiple overlapping spots of κ-casein solutions, blocking with 100-times diluted goat milk, and reaction with monoclonal mouse anti-κ-casein antibodies in blocking solution. The areas spotted with κ-casein solutions of different concentrations are examined and the optimum concentration providing homogeneous coverage is determined. Coverage of biosensor surfaces with biomolecules after each of the sequential steps employed in immunodetection is also evaluated with TOF-SIMS, supplemented by atomic force microscopy and X-ray photoelectron spectroscopy. Uniform molecular distributions are observed on the sensing arm areas after spotting with the optimum κ-casein concentration, blocking and immunoreaction. The corresponding biomolecular compositions are determined with a Principal Component Analysis that distinguished between protein amino acids and milk glycerides, as well as between amino acids characteristic for MAbs and κ-casein, respectively. Use of the optimum conditions (κ-casein concentration) for functionalization of chips with arrays of ten Mach-Zehnder interferometers provided on-chip assays

  11. IASI's sensitivity to near-surface carbon monoxide (CO): Theoretical analyses and retrievals on test cases

    Science.gov (United States)

    Bauduin, Sophie; Clarisse, Lieven; Theunissen, Michael; George, Maya; Hurtmans, Daniel; Clerbaux, Cathy; Coheur, Pierre-François

    2017-03-01

    Separating concentrations of carbon monoxide (CO) in the boundary layer from the rest of the atmosphere with nadir satellite measurements is of particular importance to differentiate emission from transport. Although thermal infrared (TIR) satellite sounders are considered to have limited sensitivity to the composition of the near-surface atmosphere, previous studies show that they can provide information on CO close to the ground in case of high thermal contrast. In this work we investigate the capability of IASI (Infrared Atmospheric Sounding Interferometer) to retrieve near-surface CO concentrations, and we quantitatively assess the influence of thermal contrast on such retrievals. We present a 3-part analysis, which relies on both theoretical forward simulations and retrievals on real data, performed for a large range of negative and positive thermal contrast situations. First, we derive theoretically the IASI detection threshold of CO enhancement in the boundary layer, and we assess its dependence on thermal contrast. Then, using the optimal estimation formalism, we quantify the role of thermal contrast on the error budget and information content of near-surface CO retrievals. We demonstrate that, contrary to what is usually accepted, large negative thermal contrast values (ground cooler than air) lead to a better decorrelation between CO concentrations in the low and the high troposphere than large positive thermal contrast (ground warmer than the air). In the last part of the paper we use Mexico City and Barrow as test cases to contrast our theoretical predictions with real retrievals, and to assess the accuracy of IASI surface CO retrievals through comparisons to ground-based in-situ measurements.

  12. Sensitivity analysis of the nuclear data for MYRRHA reactor modelling

    International Nuclear Information System (INIS)

    Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan

    2014-01-01

    A global sensitivity analysis of the effective neutron multiplication factor k_eff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of the contributions of individual evaluations to the k_eff sensitivity made it possible to establish a priority list of nuclides for which the uncertainties on nuclear data must be improved. Detailed sensitivity analysis has been performed for two nuclides from this list, 56Fe and 238Pu. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on k_eff, the reaction cross-sections and multiplicities in one evaluation have been substituted by the corresponding data from other evaluations. (authors)

  13. Automated Measurement for Sensitivity Analysis of Runoff-Sediment Load at Varying Surface Gradients

    Directory of Open Access Journals (Sweden)

    Imanogor P.A.

    2015-07-01

    Full Text Available Direct measurement of surface runoff is often associated with errors and inaccuracies which result in unreliable hydrological data. An automatic runoff-meter using a tipping-bucket arrangement, calibrated to tip 0.14 litre of runoff water per tip with an accuracy of ±0.001 litre, was used to measure surface runoff from a steel-bounded soil tray of dimensions 1200 mm × 900 mm × 260 mm, filled with sandy loam to a depth of 130 mm and inclined at angles of 0°, 5°, 12° and 15° to the horizontal. The effect of varying angles of inclination on runoff intensity, sediment loss rate and sediment loss is significant at the 5% confidence level, while the effect on surface runoff is not. The highest total sediment losses of 458.2 g and 313.4 g were observed at angles of 15° and 12°, respectively. Total surface runoff of 361.5 mm and 445.8 mm was generated at inclination angles of 0° and 5°, while at angles of 12° and 15°, 564.3 mm and 590.0 mm of surface runoff were generated. In addition, runoff intensity and sediment loss rate were highest at an angle of 15°, while the lowest values of 1.5 mm/min and 5.43 g/min were obtained at an inclination angle of 5°. The results showed that strong relationships existed among the hydrological variables as a result of subjecting the steel-bounded soil tray to different angles of inclination. Such results provide useful data for running physics-based deterministic models of surface runoff and erosion, which are useful for the design of hydrological structures, land use planning and management.
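The tipping-bucket record above implies a simple conversion from tip counts to runoff depth and intensity. A minimal sketch, assuming the stated 0.14-litre tip volume and the 1200 mm × 900 mm tray plan area (the tip count and duration below are illustrative):

```python
# Hedged sketch: convert tipping-bucket counts to runoff depth and intensity.
# Assumed from the record: 0.14 L per tip, a 1200 mm x 900 mm soil tray.
LITRES_PER_TIP = 0.14
TRAY_AREA_M2 = 1.2 * 0.9          # plan area of the soil tray

def runoff_depth_mm(tips: int) -> float:
    """Runoff depth (mm) over the tray: 1 L spread over 1 m^2 equals 1 mm."""
    volume_l = tips * LITRES_PER_TIP
    return volume_l / TRAY_AREA_M2

def runoff_intensity_mm_per_min(tips: int, minutes: float) -> float:
    return runoff_depth_mm(tips) / minutes

# e.g. 100 tips collected over 10 minutes (illustrative numbers)
depth = runoff_depth_mm(100)
rate = runoff_intensity_mm_per_min(100, 10.0)
```

Summing depths per inclination angle in this way yields the total-runoff figures the abstract compares across slopes.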

  14. An introduction to sensitivity analysis for unobserved confounding in nonexperimental prevention research.

    Science.gov (United States)

    Liu, Weiwei; Kuramoto, S Janet; Stuart, Elizabeth A

    2013-12-01

    Despite the fact that randomization is the gold standard for estimating causal relationships, many questions in prevention science are often left to be answered through nonexperimental studies because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most nonexperimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example, we examine the sensitivity of the association between maternal suicide and offspring's risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall, the association between maternal suicide and offspring's hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscores sensitivity analysis techniques as an important tool for nonexperimental studies. The implementation of sensitivity analysis can help increase confidence in results from nonexperimental studies and better inform prevention researchers and policy makers regarding potential intervention targets.

  15. Sensitivity analysis of numerical solutions for environmental fluid problems

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Motoyama, Yasunori

    2003-01-01

    In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. Once a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the exact solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)

  16. A sensitivity analysis on seismic tomography data with respect to CO2 saturation of a CO2 geological sequestration field

    Science.gov (United States)

    Park, Chanho; Nguyen, Phung K. T.; Nam, Myung Jin; Kim, Jongwook

    2013-04-01

    Monitoring CO2 migration and storage in geological formations is important not only for the stability of geological sequestration of CO2 but also for efficient management of CO2 injection. In particular, geophysical methods can make in situ observations of CO2 to assess the potential leakage of CO2 and to improve reservoir description, as well as to monitor the development of geologic discontinuities (i.e., faults, cracks, joints, etc.). Geophysical monitoring can be based on wireline logging or surface surveys for well-scale monitoring (high resolution and narrow area of investigation) or basin-scale monitoring (low resolution and wide area of investigation). In the meantime, crosswell tomography can provide reservoir-scale monitoring to bridge the resolution gap between well logs and surface measurements. This study focuses on reservoir-scale monitoring based on crosswell seismic tomography, aiming to describe details of reservoir structure and to monitor the migration of reservoir fluids (water and CO2). For the monitoring, we first make a sensitivity analysis of crosswell seismic tomography data with respect to CO2 saturation. For the sensitivity analysis, Rock Physics Models (RPMs) are constructed by calculating the values of density and P- and S-wave velocities of a virtual CO2 injection reservoir. Since the seismic velocity of the reservoir changes appreciably with CO2 saturation only when the saturation is less than about 20%, and is insensitive to the change above that level, the sensitivity analysis is mainly made for CO2 saturations below 20%. For precise simulation of seismic tomography responses for the constructed RPMs, we developed a time-domain 2D elastic modeling code based on the finite difference method with a staggered grid, employing a convolutional perfectly matched layer boundary condition. We further compare the sensitivities of seismic tomography and surface measurements for RPMs to analyse resolution

  17. Sensitivity Analysis of Different Infiltration Equations and Their Coefficients under Various Initial Soil Moisture and Ponding Depth

    Directory of Open Access Journals (Sweden)

    ali javadi

    2015-06-01

    Full Text Available Infiltration is a complex process that changes with initial soil moisture and the water head on the soil surface. The main objectives of this study were to estimate the coefficients of the Kostiakov-Lewis, Philip and Horton infiltration equations, and to evaluate the sensitivity of these equations and their coefficients under various initial conditions (initial soil moisture) and boundary conditions (water head on the soil surface). Therefore, one- and two-dimensional infiltration for basin (or border) irrigation was simulated, changing the initial soil moisture and the water head on the soil surface from one irrigation to the next, using a solution of the Richards equation (the HYDRUS model). To determine the coefficients of the infiltration equations, outputs of the HYDRUS model (cumulative infiltration over time) were fitted using the Excel Solver. Comparison of the sensitivity of the infiltration equations and their coefficients in one- and two-dimensional infiltration showed that the equations and their coefficients behaved similarly, but quantitatively, in most cases, the two-dimensional equations and their coefficients were more sensitive than the one-dimensional ones. In both dimensions, the sorptivity coefficient of the Philip equation was identified as the most sensitive coefficient and the Horton equation as the most sensitive equation under various initial soil moistures and water heads on the soil surface.
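The coefficient-fitting step above (done in the record with the Excel Solver) can be reproduced in closed form for the two-term Philip equation I(t) = S·√t + A·t, which is linear in S and A. A minimal sketch on synthetic data with assumed coefficient values:

```python
import math

# Hedged sketch: recover the Philip-equation coefficients from cumulative
# infiltration data by linear least squares (normal equations for a model
# linear in S and A). The synthetic S and A values below are illustrative.

def fit_philip(times, infiltration):
    # Model I = S*x + A*y with x = sqrt(t), y = t. Normal equations:
    #   S*sum(x^2) + A*sum(x*y) = sum(x*I)
    #   S*sum(x*y) + A*sum(y^2) = sum(y*I)
    s11 = sum(t for t in times)            # sum x^2 = sum t
    s12 = sum(t ** 1.5 for t in times)     # sum x*y = sum t^1.5
    s22 = sum(t * t for t in times)        # sum y^2 = sum t^2
    b1 = sum(math.sqrt(t) * i for t, i in zip(times, infiltration))
    b2 = sum(t * i for t, i in zip(times, infiltration))
    det = s11 * s22 - s12 * s12
    S = (b1 * s22 - s12 * b2) / det        # Cramer's rule on the 2x2 system
    A = (s11 * b2 - s12 * b1) / det
    return S, A

# Synthetic check with assumed sorptivity S = 0.7 and A = 0.05.
ts = [1, 2, 5, 10, 20, 30, 60]
Is = [0.7 * math.sqrt(t) + 0.05 * t for t in ts]
S_hat, A_hat = fit_philip(ts, Is)
```

The Kostiakov-Lewis and Horton forms are nonlinear in their coefficients and would need an iterative solver, which is presumably why the authors used a generic optimizer.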

  18. The application of sensitivity analysis to models of large scale physiological systems

    Science.gov (United States)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method is presented for reducing highly complex, nonlinear models to simple linear algebraic models that can be useful for making rapid, first-order calculations of system behavior.
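The parameter-influence ranking described above is often done with normalized linear sensitivity coefficients S_i = (p_i/y)·(∂y/∂p_i), estimated by central finite differences. A minimal sketch; the logistic model is an illustrative stand-in, not the population or thermoregulatory model from the paper:

```python
import math

# Hedged sketch: normalized (dimensionless) sensitivity coefficients via
# central finite differences. Model and parameter values are illustrative.

def model(p):
    # Logistic population size at t = 5 (assumed example model).
    r, K, N0 = p["r"], p["K"], p["N0"]
    return K / (1 + (K / N0 - 1) * math.exp(-r * 5))

def normalized_sensitivities(f, params, h=1e-6):
    y0 = f(params)
    sens = {}
    for name, value in params.items():
        hi = dict(params); hi[name] = value * (1 + h)
        lo = dict(params); lo[name] = value * (1 - h)
        dydp = (f(hi) - f(lo)) / (2 * h * value)   # central difference
        sens[name] = value / y0 * dydp             # (p/y) * dy/dp
    return sens

s = normalized_sensitivities(model, {"r": 0.3, "K": 100.0, "N0": 10.0})
ranked = sorted(s, key=lambda k: abs(s[k]), reverse=True)
# Ranking |S_i| picks out the parameters worth a fuller analysis.
```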

  19. TEMAC, Top Event Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.

    1988-01-01

    1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement.
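As a rough illustration of what evaluating a top event from its cut sets involves (not TEMAC's actual matrix algorithm), the common minimal-cut-set upper bound and a one-at-a-time sensitivity to base-event probabilities can be sketched as:

```python
# Hedged sketch: top-event probability from minimal cut sets via the
# min-cut upper bound  P(top) ~= 1 - prod_k (1 - prod_{i in cut k} p_i),
# plus a finite-difference sensitivity. Cut sets and probabilities are a
# toy example, not from SETS/TEMAC.

def top_event_prob(cut_sets, p):
    prob_none = 1.0
    for cut in cut_sets:
        q = 1.0
        for event in cut:
            q *= p[event]          # probability this cut set occurs
        prob_none *= (1.0 - q)
    return 1.0 - prob_none

cuts = [("A", "B"), ("A", "C"), ("D",)]   # toy fault-tree cut sets
probs = {"A": 1e-2, "B": 5e-3, "C": 2e-3, "D": 1e-4}
base = top_event_prob(cuts, probs)

sens = {}
for e in probs:                    # estimate d P(top) / d p_e
    bumped = dict(probs, **{e: probs[e] * 1.01})
    sens[e] = (top_event_prob(cuts, bumped) - base) / (0.01 * probs[e])
# The single-event cut "D" dominates: its sensitivity is close to 1.
```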

  20. Sensitivity and uncertainty analysis applied to a repository in rock salt

    International Nuclear Information System (INIS)

    Polle, A.N.

    1996-12-01

    This document describes the sensitivity and uncertainty analysis with UNCSAM, as applied to a repository in rock salt for the EVEREST project. UNCSAM is a dedicated software package for sensitivity and uncertainty analysis, which was already used within the preceding PROSA project. The use of UNCSAM provides a flexible interface to EMOS ECN by substituting the sampled values in the various input files to be used by EMOS ECN; the model calculations for this repository were performed with the EMOS ECN code. Preceding the sensitivity and uncertainty analysis, a number of preparations have been carried out to provide EMOS ECN with the probabilistic input data. For post-processing the EMOS ECN results, the characteristic output signals were processed. For the sensitivity and uncertainty analysis with UNCSAM, the stochastic input, i.e. the sampled values, and the output of the various EMOS ECN runs have been analyzed. (orig.)

  1. A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.

    Science.gov (United States)

    Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P

    2018-04-01

    Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework to automate the GastroPlus graphical user interface with AutoIt and to execute the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages, using the Morris method to screen over 50 parameters for significant factors, followed by quantitative assessment of variability using Sobol's sensitivity analysis. The 2-stage approach significantly reduced computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would have otherwise been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures, as well as extending the framework to consider a whole-body PBPK model.
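The first (Morris screening) stage described above can be sketched generically; this is a minimal elementary-effects implementation on the unit hypercube with an assumed toy model, not the GastroPlus/AutoIt/MATLAB framework from the paper:

```python
import random

# Hedged sketch: Morris elementary-effects screening. Each trajectory varies
# one factor at a time by a fixed step delta; mu-star (mean |elementary
# effect|) ranks factor importance. Toy model only.

def morris_screen(f, k, r=20, p=4, seed=1):
    rng = random.Random(seed)
    delta = p / (2.0 * (p - 1))                    # standard step (2/3 for p=4)
    starts = [i / (p - 1) for i in range(p // 2)]  # grid levels with x + delta <= 1
    effects = [[] for _ in range(k)]
    for _ in range(r):                             # r one-at-a-time trajectories
        x = [rng.choice(starts) for _ in range(k)]
        y = f(x)
        for i in rng.sample(range(k), k):          # random factor order
            x2 = list(x)
            x2[i] += delta
            y2 = f(x2)
            effects[i].append((y2 - y) / delta)    # elementary effect of factor i
            x, y = x2, y2
    return [sum(abs(e) for e in es) / len(es) for es in effects]  # mu-star

# Toy model: factor 0 strong, factor 1 weak, factor 2 inert.
g = lambda x: 10.0 * x[0] + 0.5 * x[1]
mu = morris_screen(g, k=3)
# mu-star ranks the factors; an inert factor would be dropped before the
# (more expensive) Sobol stage.
```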

  2. Sensitivity of Global Methane Bayesian Inversion to Surface Observation Data Sets and Chemical-Transport Model Resolution

    Science.gov (United States)

    Lew, E. J.; Butenhoff, C. L.; Karmakar, S.; Rice, A. L.; Khalil, A. K.

    2017-12-01

    Methane is the second most important greenhouse gas after carbon dioxide. In efforts to control emissions, a careful examination of the methane budget and source strengths is required. To determine methane surface fluxes, Bayesian methods are often used to provide top-down constraints. Inverse modeling derives unknown fluxes using observed methane concentrations, a chemical transport model (CTM) and prior information. The Bayesian inversion reduces prior flux uncertainties by exploiting the information content in the data. While the Bayesian formalism produces internal error estimates of source fluxes, systematic or external errors that arise from user choices in the inversion scheme are often much larger. Here we examine model sensitivity and uncertainty of our inversion under different observation data sets and CTM grid resolutions. We compare posterior surface fluxes obtained using the data product GLOBALVIEW-CH4 against those obtained using the event-level molar mixing ratio data available from NOAA. GLOBALVIEW-CH4 is a collection of CH4 concentration estimates from 221 sites, collected by 12 laboratories, that have been interpolated and extracted to provide weekly records from 1984-2008. In contrast, the event-level NOAA data set records field measurements of methane mixing ratios from 102 sites, with irregular sampling frequencies and gaps in time. Furthermore, the sampling platform types used by the data sets may influence the posterior flux estimates, namely fixed surface, tower, ship and aircraft sites. To explore the sensitivity of the posterior surface fluxes to the observation network geometry, inversions composed of all sites, only aircraft, only ship, only tower and only fixed surface sites are performed and compared. We also investigate the sensitivity of the error reduction to the resolution of the GEOS-Chem simulation (4°×5° vs 2°×2.5°) used to calculate the response matrix. Using a higher-resolution grid decreased the model-data error at most sites, thereby

  3. Sensitivity analysis of dynamic characteristic of the fixture based on design variables

    International Nuclear Information System (INIS)

    Wang Dongsheng; Nong Shaoning; Zhang Sijian; Ren Wanfa

    2002-01-01

    The sensitivity of structural natural frequencies to structural design parameters is investigated. A typical fixture for vibration testing is designed. Using the I-DEAS finite element programs, the sensitivity of its natural frequencies to design parameters is analyzed by the matrix perturbation method. The results show that sensitivity analysis is a fast and effective dynamic re-analysis method for the dynamic design and parameter modification of complex structures such as fixtures.

  4. Justification of investment projects of biogas systems by the sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Perebijnos Vasilij Ivanovich

    2015-06-01

    Full Text Available The article presents the methodical features of applying sensitivity analysis to the evaluation of investment projects for biogas plants. Risk factors of these investment projects have been studied. A methodical basis for the use of sensitivity analysis and the calculation of elasticity coefficients has been worked out. Sensitivity analyses and elasticity coefficients have been calculated for three biogas plant projects, which differ in the direction of biogas transformation: use in a co-generation plant, use of biomethane as motor fuel, and sale of the resulting carbon dioxide as a marketable product. Factors strongly affecting project efficiency have been revealed.
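The elasticity coefficient used above is typically the percentage change in a project criterion (here NPV) per percentage change in a risk factor. A minimal sketch with illustrative cash flows; none of the numbers come from the article:

```python
# Hedged sketch: elasticity of project NPV with respect to one input factor,
# E = (%change in NPV) / (%change in input). All values are illustrative.

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def elasticity(build_cashflows, base_value, rate=0.10, step=0.01):
    base = npv(rate, build_cashflows(base_value))
    bumped = npv(rate, build_cashflows(base_value * (1 + step)))
    return ((bumped - base) / base) / step

# Toy biogas project: -1000 up front, then 10 years of revenue proportional
# to an assumed biomethane price of 8 per unit.
flows = lambda price: [-1000.0] + [30.0 * price] * 10
e_price = elasticity(flows, base_value=8.0)
# |elasticity| > 1 flags the factor as strongly affecting project efficiency.
```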

  5. Sensitivity Analysis of a Physiochemical ...

    African Journals Online (AJOL)

    Michael Horsfall

    The numerical method of sensitivity or the principle of parsimony ... analysis is a widely applied numerical method often being used in the .... Chemical Engineering Journal 128(2-3), 85-93. Amod S ... coupled 3-PG and soil organic matter.

  6. Sensitivity analysis of time-dependent laminar flows

    International Nuclear Information System (INIS)

    Hristova, H.; Etienne, S.; Pelletier, D.; Borggaard, J.

    2004-01-01

    This paper presents a general sensitivity equation method (SEM) for time-dependent incompressible laminar flows. The SEM accounts for complex parameter dependence and is suitable for a wide range of problems. The formulation is verified on a problem with a closed-form solution obtained by the method of manufactured solutions. Systematic grid convergence studies confirm the theoretical rates of convergence in both space and time. The methodology is then applied to pulsatile flow around a square cylinder. Computations show that the flow starts with symmetrical vortex shedding followed by a transition to the traditional von Kármán street (alternate vortex shedding). Simulations show that the transition phase manifests itself earlier in the sensitivity fields than in the flow field itself. Sensitivities are then demonstrated for fast evaluation of nearby flows and uncertainty analysis. (author)

  7. A Novel Experimental Set-Up for Improving the Sensitivity of SV Waves to Shallow Surface-Breaking Cracks

    Energy Technology Data Exchange (ETDEWEB)

    Pecorari, Claudio [Royal Inst. of Technology, Stockholm (Sweden). Dept. of Aeronautical and Vehicle Engineering

    2006-03-15

    Conventional inspection procedures to detect surface-breaking defects in train axles and thick pipes often employ 45-degree incidence shear vertical (SV) waves as the probing tool. Recently obtained theoretical and experimental results indicate that this method is considerably less sensitive to shallow surface-breaking defects than one in which the angle of incidence is selected to be close to the critical angle of the longitudinal wave. This project has confirmed this thesis by experimentally investigating the backscattering of SV waves by surface-breaking cracks as a function of the angle of incidence. To this end, three cracks of depth approximately equal to 0.3 mm, 0.5 mm and 0.7 mm were introduced on the surface of steel samples with a thickness of 47 mm. These cracks were insonified with transducers operating at 2.25 MHz, 3.5 MHz, and 5 MHz, which correspond to wavelengths in steel of 1.38 mm, 0.88 mm, and 0.62 mm, respectively. The increase in sensitivity has been assessed to be of the order of 15 dB.

  8. Sensitivity analysis of LOFT L2-5 test calculations

    International Nuclear Information System (INIS)

    Prosek, Andrej

    2014-01-01

    The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates a large-break loss-of-coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods-Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power, etc. The FFTBM-SM was used to assess the influence of the input parameters on the calculated results. The only difference between FFTBM-SM and the original FFTBM is that in FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so-called edge is the difference between the first and last data points of one period of the signal) when calculating the average amplitude. It is very important to eliminate this unphysical contribution to the average amplitude, which is used as a figure of merit for the influence of an input parameter on the output parameters. The idea is to use the reference calculation as the 'experimental signal', the sensitivity run as the 'calculated signal', and the average amplitude as a figure of merit for sensitivity instead of for code accuracy. The larger the average amplitude, the larger the influence of the varied input parameter. The results show that with FFTBM-SM the analyst can get a good picture of the contribution of a parameter variation to the results. They show when the input parameters are influential and how big this influence is. FFTBM-SM could also be used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be

  9. Analysis of gravity data using trend surfaces

    Science.gov (United States)

    Asimopolos, Natalia-Silvia; Asimopolos, Laurentiu

    2013-04-01

    In this paper we have developed algorithms and related software programs for calculating trend surfaces of higher order. These methods of trend analysis, like moving averages, act as filters for spatially distributed geophysical data. In particular, we present a few case studies for gravity data and gravity maps. Analysis with polynomial trend surfaces contributes to the recognition, isolation and measurement of trends that can be represented by surfaces or hyper-surfaces (in several dimensions), thus achieving a separation into regional variations and local variations. This separation is achieved by adjusting the trend function at different orders. Trend surfaces obtained by regression analysis satisfy the least-squares criterion. The difference between the trend surface and the observed value at a given point is the residual value. By the least-squares criterion, the sum of squares of these residuals must be minimal. The trend surface is considered the regional or large-scale component, and the residual value is regarded as the local or small-scale component. Removing the regional trend has the effect of highlighting local components represented by the residual values. The principles of trend-surface analysis apply to surfaces in any number of dimensions. For hyper-surfaces we can work with polynomial functions of four or more variables (three space variables and further variables for the parameters of interest), which are of great importance in some applications. In the paper we present the mathematical developments for generalized trend surfaces and case studies on gravimetric data. Trend surfaces have the great advantage that the effect of regional anomalies can be expressed as analytic functions. These trend surfaces allow subsequent mathematical processing and interesting generalizations, it being a great advantage to work with polynomial functions compared with the original discrete data. For gravity data we estimate the depth of
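The trend-surface fit and regional/residual split described above can be sketched as a least-squares polynomial fit; this pure-Python version (a real workflow would use a linear-algebra library) fits z(x, y) and returns the residual "local" component:

```python
# Hedged sketch: polynomial trend-surface fitting by least squares via the
# normal equations, with Gaussian elimination. Data below are synthetic.

def solve(a, b):
    # Gaussian elimination with partial pivoting for the normal equations.
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def fit_trend_surface(xs, ys, zs, order=1):
    # Basis terms x^i * y^j with i + j <= order.
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    rows = [[x ** i * y ** j for i, j in terms] for x, y in zip(xs, ys)]
    ata = [[sum(r[a] * r[b] for r in rows) for b in range(len(terms))]
           for a in range(len(terms))]
    atz = [sum(r[a] * z for r, z in zip(rows, zs)) for a in range(len(terms))]
    coef = solve(ata, atz)
    trend = [sum(c * x ** i * y ** j for c, (i, j) in zip(coef, terms))
             for x, y in zip(xs, ys)]
    residuals = [z - t for z, t in zip(zs, trend)]   # local anomalies
    return coef, trend, residuals

# Synthetic planar trend z = 2 + 0.5x - 0.3y on a 5x5 grid.
xs = [x for x in range(5) for _ in range(5)]
ys = [y for _ in range(5) for y in range(5)]
zs = [2 + 0.5 * x - 0.3 * y for x, y in zip(xs, ys)]
coef, trend, res = fit_trend_surface(xs, ys, zs, order=1)
# coef recovers [2.0, -0.3, 0.5] for the basis terms (1, y, x); residuals ~ 0.
```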

  10. Importance measures in global sensitivity analysis of nonlinear models

    International Nuclear Information System (INIS)

    Homma, Toshimitsu; Saltelli, Andrea

    1996-01-01

    The present paper deals with a new method of global sensitivity analysis of nonlinear models. It is based on a measure of importance used to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. Most emphasis is given to the development of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol's treatment of the measure of importance is the most general, his formalism is employed throughout this paper, where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is flexible, accurate and informative, and which can be achieved at a reasonable computational cost.
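Sobol's first-order index and the "total effect" index introduced above can be estimated by a pick-and-freeze Monte Carlo scheme. A sketch on the standard Ishigami test function (an assumption of this example, not a model from the paper):

```python
import math, random

# Hedged sketch: first-order (S_i) and total-effect (ST_i) Sobol' indices by
# Monte Carlo pick-and-freeze estimators, on the Ishigami test function with
# inputs uniform on (-pi, pi).

def ishigami(x, a=7.0, b=0.1):
    return math.sin(x[0]) + a * math.sin(x[1]) ** 2 + b * x[2] ** 4 * math.sin(x[0])

def sobol_indices(f, k, n=4096, seed=7):
    rng = random.Random(seed)
    u = lambda: rng.uniform(-math.pi, math.pi)
    A = [[u() for _ in range(k)] for _ in range(n)]
    B = [[u() for _ in range(k)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S, ST = [], []
    for i in range(k):
        # A with column i taken from B ("pick-and-freeze" matrix).
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(yb * (yab - ya) for yb, yab, ya in zip(fB, fABi, fA)) / n / var)
        ST.append(sum((ya - yab) ** 2 for ya, yab in zip(fA, fABi)) / (2 * n) / var)
    return S, ST

S, ST = sobol_indices(ishigami, k=3)
# Analytic values: S ~ [0.314, 0.442, 0.0]; ST ~ [0.558, 0.442, 0.244].
# ST[2] >> S[2] exposes x3's purely interactive ('synergetic') contribution.
```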

  11. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods for conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates sensitivity using the partial correlation of the ranks of the generated input values with each generated output value; it is 'partial' because adjustments are made for the linear effects of all the other input values when calculating the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of the inputs and outputs is used, rather than the values themselves, both methods accommodate the nonlinear relationships of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations, and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of applying PRCC and SRRC to IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in visualizing the condition-related input sensitivities for each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
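
A minimal PRCC computation, assuming the usual definition (correlation of rank residuals after linearly removing the other ranked inputs), might look as follows; the three-input test model is invented for illustration and is not related to the IMM:

```python
import numpy as np

def rank(a):
    """Rank transform along axis 0 (ties ignored in this sketch)."""
    return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    Xr, yr = rank(X), rank(y.reshape(-1, 1)).ravel()
    n, d = Xr.shape
    out = []
    for i in range(d):
        others = np.column_stack([np.ones(n), np.delete(Xr, i, axis=1)])
        # Residuals after removing the linear (rank) effect of the other inputs.
        rx = Xr[:, i] - others @ np.linalg.lstsq(others, Xr[:, i], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(out)

rng = np.random.default_rng(2)
X = rng.uniform(size=(500, 3))
y = np.exp(2 * X[:, 0]) - 5 * X[:, 1] + 0.1 * rng.normal(size=500)  # x3 is inert
p = prcc(X, y)
# Expect p[0] near +1, p[1] near -1, and p[2] near zero.
```

Because the relationship is monotone but nonlinear in the first input, the rank transform keeps `p[0]` close to 1 where an unranked linear coefficient would understate it.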

  12. Adjoint sensitivity analysis of high frequency structures with Matlab

    CERN Document Server

    Bakr, Mohamed; Demir, Veysel

    2017-01-01

    This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.

  13. System reliability assessment via sensitivity analysis in the Markov chain scheme

    International Nuclear Information System (INIS)

    Gandini, A.

    1988-01-01

    Methods for reliability sensitivity analysis in the Markov chain scheme are presented, together with a new formulation that makes use of Generalized Perturbation Theory (GPT) methods. As is well known, sensitivity methods are fundamental in system risk analysis, since they identify important components and thus assist the analyst in finding weaknesses in design and operation and in suggesting optimal modifications for system upgrade. The relationship between the GPT sensitivity expression and the Birnbaum importance is also given.

  14. Gold Nanorods as Surface-Enhanced Raman Spectroscopy Substrates for Rapid and Sensitive Analysis of Allura Red and Sunset Yellow in Beverages.

    Science.gov (United States)

    Ou, Yiming; Wang, Xiaohui; Lai, Keqiang; Huang, Yiqun; Rasco, Barbara A; Fan, Yuxia

    2018-03-21

    Synthetic colorants in food can be a potential threat to human health. In this study, surface-enhanced Raman spectroscopy (SERS) with gold nanorods as substrates is proposed for analyzing allura red and sunset yellow in beverages. Gold nanorods with different aspect ratios were synthesized, and their long-term stability, SERS activity, and the effect of different salts on the SERS signal were investigated. The results demonstrate that the gold nanorods have satisfactory stability (stored up to 28 days) and that SERS with gold nanorods exhibits high sensitivity. MgSO4 was chosen to improve the SERS signal of sunset yellow, whereas no salt was found to enhance the SERS signal of allura red. The lowest detectable concentration was 0.10 mg/L for both colorant standard solutions. Quantitative predictions using SERS were close to those obtained by high-performance liquid chromatography for beverage samples. SERS combined with gold nanorods shows potential for analyzing food colorants and other food additives as a rapid, convenient, and sensitive method.

  15. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
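
The propagation-plus-correlation workflow described here can be sketched on a deliberately simplified transfer model; the two-parameter model and its distributions below are hypothetical stand-ins, not the PATHWAY equations:

```python
import numpy as np

# Hypothetical food-chain transfer model (illustration only, not PATHWAY):
# milk concentration C = deposition * transfer_factor * exp(-decay_rate * t).
rng = np.random.default_rng(3)
n = 10_000
transfer = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=n)  # feed-to-milk factor
decay = rng.normal(loc=0.086, scale=0.01, size=n)               # effective 1/day
C = 1000.0 * transfer * np.exp(-decay * 10.0)                   # Bq/L at t = 10 d

# Uncertainty: summarize the propagated output distribution.
lo, med, hi = np.percentile(C, [5, 50, 95])

# Sensitivity: correlate each parameter with the output, reusing the same
# Monte Carlo samples (rank correlation as a simple stand-in).
def rank_corr(a, b):
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

s_transfer, s_decay = rank_corr(transfer, C), rank_corr(decay, C)
# Expect s_transfer near +1 (dominant input) and s_decay smaller and negative.
```

Reusing one sample set for both the uncertainty and sensitivity steps mirrors the strategy described in the abstract.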

  16. Silver-coated Si nanograss as highly sensitive surface-enhanced Raman spectroscopy substrates

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Jing; Kuo, Huei Pei; Hu, Min; Li, Zhiyong; Williams, R.S. [Hewlett-Packard Laboratories, Information and Quantum Systems Laboratory, Palo Alto, CA (United States); Ou, Fung Suong [Hewlett-Packard Laboratories, Information and Quantum Systems Laboratory, Palo Alto, CA (United States); Rice University, Department of Applied Physics, Houston, TX (United States); Stickle, William F. [Hewlett-Packard Company, Advanced Diagnostic Lab, Corvallis, OR (United States)

    2009-09-15

    We created novel surface-enhanced Raman spectroscopy (SERS) substrates by metallization (Ag) of Si nanograss prepared by a Bosch process, which involves deep reactive ion etching of single-crystalline silicon. No template or lithography was needed for making the Si nanograss, providing a simple and inexpensive method to achieve highly sensitive large-area SERS substrates. The dependence of the SERS effect on the thickness of the metal deposition and on the surface morphology and topology of the substrate prior to metal deposition was studied in order to optimize the SERS signals. We observed that the Ag-coated Si nanograss can achieve uniform SERS enhancement over a large area (~1 cm × 1 cm) with an average EF (enhancement factor) of 4.2 × 10^8 for 4-mercaptophenol probe molecules. (orig.)
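
The average enhancement factor quoted above is conventionally computed by normalising the SERS and normal-Raman intensities by the number of molecules probed. The numbers below are illustrative placeholders chosen only to reproduce the reported order of magnitude, not the paper's raw data:

```python
# Conventional SERS enhancement-factor bookkeeping:
#   EF = (I_SERS / N_SERS) / (I_ref / N_ref)
I_sers, N_sers = 2.1e5, 1.0e6   # counts and molecules probed on the substrate
I_ref, N_ref = 5.0e2, 1.0e12    # counts and molecules probed in normal Raman
EF = (I_sers / N_sers) / (I_ref / N_ref)
# EF = 4.2e8, matching the order of magnitude reported in the abstract.
```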

  17. Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis

    DEFF Research Database (Denmark)

    Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen

    2017-01-01

    Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol's), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult ... in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers to focus their attention on the most important design parameters when exploring ...

  18. Analysis of Static and Dynamic Properties of Micromirror with the Application of Response Surface Method

    Directory of Open Access Journals (Sweden)

    A Martowicz

    2016-09-01

    The paper presents the results of applying the response surface method to aid the analysis of variation in the static and dynamic properties of a micromirror. A multiphysics approach was taken to elaborate a finite element model of the electrostatically actuated microdevice, and coupled analyses were carried out to yield the results. The metamodel-fitting procedure used is described and its quality discussed. The elaborated approximations were used to perform sensitivity analysis as well as to study the propagation of variation introduced by uncertain and control parameters. The input parameters concern geometry, material properties and control voltage. The output characteristics chosen were the resultant static vertical displacement of the reflecting surfaces and the resonance frequency of the first normal mode of vibration.

  19. Assessing parameter importance of the Common Land Model based on qualitative and quantitative sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Li

    2013-08-01

    Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating the parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes of the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and the sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2-8 sensitive parameters, depending on the output type, and that about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e., sensitive parameters labeled as insensitive) or type II errors (i.e., insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.
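
The Morris screening mentioned above is based on elementary effects: one-at-a-time finite differences averaged over many random base points. Below is a compact radial-design sketch; the toy response function and settings are assumptions for illustration, not the Common Land Model:

```python
import numpy as np

def morris_screen(f, d, r=50, delta=0.25, seed=4):
    """Radial Morris screening: mu* (mean |elementary effect|) and sigma per input."""
    rng = np.random.default_rng(seed)
    EE = np.zeros((r, d))
    for k in range(r):
        x = rng.uniform(0, 1 - delta, d)   # random base point
        fx = f(x)
        for i in range(d):                 # one-at-a-time perturbations
            xi = x.copy()
            xi[i] += delta
            EE[k, i] = (f(xi) - fx) / delta
    return np.abs(EE).mean(axis=0), EE.std(axis=0)

# Toy response: strong linear effect of x0, an x1*x2 interaction, inert x3.
g = lambda x: 5.0 * x[0] + 3.0 * x[1] * x[2]
mu_star, sigma = morris_screen(g, d=4)
# mu* ranks x0 first; nonzero sigma on x1 and x2 flags their interaction;
# x3 screens out with mu* = 0.
```

A large `mu_star` marks an influential input; a large `sigma` relative to `mu_star` signals nonlinearity or interaction, which is why Morris results are usually read as a (mu*, sigma) scatter.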

  20. Sensitivity analysis of reactive ecological dynamics.

    Science.gov (United States)

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
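
For a continuous-time linearised system dx/dt = Ax, reactivity is the largest eigenvalue of the Hermitian part of A, and it can be positive even when A itself is asymptotically stable. A minimal check, with a hypothetical predator-prey-style community matrix (not taken from the paper's case studies):

```python
import numpy as np

def reactivity(A):
    """Maximum initial amplification rate of perturbations to dx/dt = A x:
    the largest eigenvalue of the Hermitian part H(A) = (A + A.T) / 2."""
    return np.linalg.eigvalsh((A + A.T) / 2.0).max()

# Hypothetical community matrix: strong prey -> predator benefit, mutual decay.
A = np.array([[-1.0,  5.0],
              [-0.5, -1.0]])
stable = np.real(np.linalg.eigvals(A)).max() < 0  # asymptotically stable?
r = reactivity(A)
# stable is True yet r = 1.25 > 0: perturbations transiently amplify,
# so the system is reactive in the sense defined above.
```

Sensitivity of `r` to a matrix entry could then be probed by perturbing that entry and recomputing, which is the finite-difference counterpart of the eigenvalue-derivative formulas the paper develops analytically.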

  1. The application of white radiation to residual stress analysis in the intermediate zone between surface and volume

    CERN Document Server

    Genzel, C; Wallis, B; Reimers, W

    2001-01-01

    Mechanical surface processing is known to give rise to complex residual stress fields in the near surface region of polycrystalline materials. Consequently, their analysis by means of non-destructive X-ray and neutron diffraction methods has become an important topic in materials science. However, there remains a gap with respect to the accessible near surface zone, which concerns a range between about 10 mu m and 1 mm, where the conventional X-ray methods are no longer and the neutron methods are not yet sensitive. In order to achieve the necessary penetration depth tau to perform residual stress analysis (RSA) in this region, advantageous use can be made of energy dispersive X-ray diffraction of synchrotron radiation (15-60 keV) in the reflection mode. Besides an example concerning the adaptation of methods applied so far in the angle dispersive RSA to the energy dispersive case, the concept of a new materials science beamline at BESSY II for residual stress and texture analysis is presented.

  2. The application of white radiation to residual stress analysis in the intermediate zone between surface and volume

    International Nuclear Information System (INIS)

    Genzel, Ch.; Stock, C.; Wallis, B.; Reimers, W.

    2001-01-01

    Mechanical surface processing is known to give rise to complex residual stress fields in the near surface region of polycrystalline materials. Consequently, their analysis by means of non-destructive X-ray and neutron diffraction methods has become an important topic in materials science. However, there remains a gap with respect to the accessible near surface zone, which concerns a range between about 10 μm and 1 mm, where the conventional X-ray methods are no longer and the neutron methods are not yet sensitive. In order to achieve the necessary penetration depth τ to perform residual stress analysis (RSA) in this region, advantageous use can be made of energy dispersive X-ray diffraction of synchrotron radiation (15-60 keV) in the reflection mode. Besides an example concerning the adaptation of methods applied so far in the angle dispersive RSA to the energy dispersive case, the concept of a new materials science beamline at BESSY II for residual stress and texture analysis is presented

  3. Surface Acoustic Wave Nebulisation Mass Spectrometry for the Fast and Highly Sensitive Characterisation of Synthetic Dyes in Textile Samples

    Science.gov (United States)

    Astefanei, Alina; van Bommel, Maarten; Corthals, Garry L.

    2017-10-01

    Surface acoustic wave nebulisation (SAWN) mass spectrometry (MS) is a method to generate gaseous ions compatible with direct MS of minute samples at femtomole sensitivity. To perform SAWN, acoustic waves are propagated through a LiNbO3 sampling chip and conducted to the liquid sample, which ultimately leads to the generation of a fine mist containing droplets of nanometre to micrometre diameter. Through fission and evaporation, the droplets undergo a phase change from liquid to gaseous analyte ions in a non-destructive manner. We have developed SAWN technology for the characterisation of organic colourants in textiles. It generates electrospray-ionisation-like ions without modifying the chemical structure of the analyte. The sample size is decreased tenfold to 1000-fold compared with currently used liquid chromatography-MS methods, with equal or better sensitivity. This work underscores SAWN-MS as an ideal tool for molecular analysis of art objects, as it is non-destructive, rapid, minimally invasive in sampling, and more sensitive than current MS-based methods.

  4. Applications of surface analysis and surface theory in tribology

    Science.gov (United States)

    Ferrante, John

    1989-01-01

    Tribology, the study of adhesion, friction and wear of materials, is a complex field which requires a knowledge of solid state physics, surface physics, chemistry, material science, and mechanical engineering. It has been dominated, however, by the more practical need to make equipment work. With the advent of surface analysis and advances in surface and solid-state theory, a new dimension has been added to the analysis of interactions at tribological interfaces. In this paper the applications of tribological studies and their limitations are presented. Examples from research at the NASA Lewis Research Center are given. Emphasis is on fundamental studies involving the effects of monolayer coverage and thick films on friction and wear. A summary of the current status of theoretical calculations of defect energetics is presented. In addition, some new theoretical techniques which enable simplified quantitative calculations of adhesion, fracture, and friction are discussed.

  5. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    NARCIS (Netherlands)

    R.A. Zuidwijk (Rob)

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an

  6. Sensitivity Analysis Based on Markovian Integration by Parts Formula

    Directory of Open Access Journals (Sweden)

    Yongsheng Hang

    2017-10-01

    Sensitivity analysis is widely applied in financial risk management and engineering; it describes the variation in model outputs brought about by changes in the parameters. Since the integration-by-parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivities and present closed-form expressions for two commonly used continuous-time Markovian models. By comparison, we conclude that our approach outperforms the existing technique for computing sensitivities of Markovian models.

  7. Heat and Mass Transfer of Vacuum Cooling for Porous Foods-Parameter Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Zhijun Zhang

    2014-01-01

    Based on the theory of heat and mass transfer, a coupled model of the porous-food vacuum cooling process is constructed. Sensitivity of the process to food density, thermal conductivity, specific heat, latent heat of evaporation, pore diameter, mass transfer coefficient, gas viscosity, and porosity was examined. The simulation results show that food density affects the course of vacuum cooling but not the final cooling temperature. Changes in thermal conductivity slightly affect the surface temperature and leave the core temperature unaffected, whereas changes in specific heat and in latent heat of evaporation affect both the core and surface temperatures. The pore diameter affects the core temperature but has no obvious effect on the surface temperature, and changes in gas viscosity affect neither. The sensitivity to the mass transfer coefficient is pronounced: changing it affects both the core and surface temperatures. In all the simulations, the final core and surface temperatures are unaffected; the vacuum cooling of a porous medium is a process controlled by external conditions.

  8. Parameter uncertainty effects on variance-based sensitivity analysis

    International Nuclear Information System (INIS)

    Yu, W.; Harris, T.J.

    2009-01-01

    In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors: a more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy; with model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings. A sequential analysis is proposed, in which a sensitivity analysis is first performed with respect to the regressive variables, and the uncertainty effects arising from the model parameters are then included in a second step. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models that are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be employed.
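
The two-step partitioning proposed here can be mimicked on a toy model that is linear in its parameters: first compute the output variance with the parameters fixed at nominal values, then fold in parameter uncertainty and compare. The model and distributions below are invented for illustration:

```python
import numpy as np

# Toy model y = theta1 * x1 + theta2 * x2**2: x1, x2 are regressive variables
# (controllable), theta1, theta2 are model parameters (uncertain, uncontrollable).
rng = np.random.default_rng(5)
n = 100_000
x1, x2 = rng.normal(0, 1, n), rng.normal(0, 1, n)

# Step 1: variance with respect to the regressive variables only,
# parameters held at their nominal values.
y_nom = 2.0 * x1 + 1.0 * x2**2
v_nom = y_nom.var()        # analytically 4*1 + Var(chi^2_1) = 6

# Step 2: fold in parameter uncertainty.
theta1 = rng.normal(2.0, 0.5, n)
theta2 = rng.normal(1.0, 0.5, n)
y_full = theta1 * x1 + theta2 * x2**2
v_full = y_full.var()      # analytically 7: the extra unit of variance is
                           # attributable to the uncertain parameters
```

The gap `v_full - v_nom` is the variance contribution that cannot be reduced by process design alone, which is the practical point of separating the two groupings.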

  9. Sensitivity analysis for decision-making using the MORE method-A Pareto approach

    International Nuclear Information System (INIS)

    Ravalico, Jakin K.; Maier, Holger R.; Dandy, Graeme C.

    2009-01-01

    Integrated Assessment Modelling (IAM) incorporates knowledge from different disciplines to provide an overarching assessment of the impact of different management decisions. The complex nature of these models, which often include non-linearities and feedback loops, requires special attention for sensitivity analysis. This is especially true when the models are used to form the basis of management decisions, where it is important to assess how sensitive the decisions being made are to changes in model parameters. This research proposes an extension to the Management Option Rank Equivalence (MORE) method of sensitivity analysis; a new method of sensitivity analysis developed specifically for use in IAM and decision-making. The extension proposes using a multi-objective Pareto optimal search to locate minimum combined parameter changes that result in a change in the preferred management option. It is demonstrated through a case study of the Namoi River, where results show that the extension to MORE is able to provide sensitivity information for individual parameters that takes into account simultaneous variations in all parameters. Furthermore, the increased sensitivities to individual parameters that are discovered when joint parameter variation is taken into account shows the importance of ensuring that any sensitivity analysis accounts for these changes.

  10. Boundary element analysis of the directional sensitivity of the concentric EMG electrode

    DEFF Research Database (Denmark)

    Henneberg, Kaj-åge; R., Plonsey

    1993-01-01

    ... as the mutual electrical influence between the electrode surfaces. A three-dimensional sensitivity function is defined from which information about the preferential direction of sensitivity, blind spots, phase changes, rate of attenuation, and range of pick-up radius can be derived. The study focuses on the intrinsic features linked to the geometry of the electrode. The results show that the cannula perturbs the potential distribution significantly. The core and the cannula electrodes measure potentials of the same order of magnitude over all of the pick-up range, except adjacent to the central wire, where the latter dominates the sensitivity function. The preferential directions of sensitivity are determined by the amount of geometric offset between the individual sensitivity functions of the core and the cannula. The sensitivity function also reveals a complicated pattern of phase changes in the pick ...

  11. Important radionuclides and their sensitivity for groundwater pathway of a hypothetical near-surface disposal facility

    Energy Technology Data Exchange (ETDEWEB)

    Park, J. W.; Chang, K.; Kim, C. L. [Nuclear Enviroment Technology Institute, Taejon (Korea, Republic of)

    2001-04-01

    A radiological safety assessment was performed for a hypothetical near-surface radioactive waste repository as a simple screening calculation to identify important nuclides and to provide insights on the data needs for a successful demonstration of compliance. Individual effective doses were calculated for a conservative groundwater pathway scenario considering well drilling near the site boundary. Sensitivity of the resulting ingestion dose to input parameter values was also analyzed using Monte Carlo sampling. Considering peak dose rate and assessment timescale, C-14 and I-129 were identified as important nuclides and U-235 and U-238 as potentially important nuclides. For C-14, the dose was most sensitive to the Darcy velocity in the aquifer. The distribution coefficient showed a high degree of sensitivity for I-129 release.

  12. Important radionuclides and their sensitivity for groundwater pathway of a hypothetical near-surface disposal facility

    International Nuclear Information System (INIS)

    Park, J. W.; Chang, K.; Kim, C. L.

    2001-01-01

    A radiological safety assessment was performed for a hypothetical near-surface radioactive waste repository as a simple screening calculation to identify important nuclides and to provide insights on the data needs for a successful demonstration of compliance. Individual effective doses were calculated for a conservative groundwater pathway scenario considering well drilling near the site boundary. Sensitivity of the resulting ingestion dose to input parameter values was also analyzed using Monte Carlo sampling. Considering peak dose rate and assessment timescale, C-14 and I-129 were identified as important nuclides and U-235 and U-238 as potentially important nuclides. For C-14, the dose was most sensitive to the Darcy velocity in the aquifer. The distribution coefficient showed a high degree of sensitivity for I-129 release

  13. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Science.gov (United States)

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...

  14. Major and trace elements in mouse bone measured by surface and bulk sensitive methods

    International Nuclear Information System (INIS)

    Benkoe, I.; Geresi, K.; Ungvari, E.; Szabo, B.; Paripas, B.

    2011-01-01

    In past years, increasing research interest has turned to the accurate determination of the components of bone samples. These investigations have focused on both the major and trace elements in the bone. Work in this field is strongly motivated because various major and trace element concentrations can be good indicators of several diseases. A number of studies have also focused on determining the components of the organic and inorganic parts of the bone separately, because both play a role in bone remodeling processes. It is also important to note that bone can be one of the final destinations in the body where toxic elements are deposited. In this work we performed various surface- and bulk-sensitive analyses of mouse bone samples to determine their major and trace element components, and we report concentration profiles for the major and observable trace elements. We found, in accordance with our expectations, that the largely surface-sensitive XPS technique is not suitable for determining trace element concentrations in bone samples. It was also shown that XPS is a valuable tool not only for determining the chemical states of the major components of bone powder but also for the quantitative determination of their relative concentrations. Both the major and trace elements of the bone samples were determined from PIXE and SNMS spectra. Although the information depths of PIXE (a few tens of micrometres) and XPS (a few nanometres) are very different, our present PIXE result for the Ca/P concentration ratio, obtained on bone in its original form, is in excellent agreement with the XPS result on calcinated mouse bone powder. The discrepancy in the Ca/Mg ratio (PIXE: 35.7; XPS: 12.7) may be due to the many factors that influence this ratio in bone samples. In the case of PIXE we studied native bones and determined the composition of the compact bone at outside

  15. Sensitivity Analysis of Centralized Dynamic Cell Selection

    DEFF Research Database (Denmark)

    Lopez, Victor Fernandez; Alvarez, Beatriz Soret; Pedersen, Klaus I.

    2016-01-01

    and a suboptimal optimization algorithm that nearly achieves the performance of the optimal Hungarian assignment. Moreover, an exhaustive sensitivity analysis with different network and traffic configurations is carried out in order to understand what conditions are more appropriate for the use of the proposed...

  16. Applications of advances in nonlinear sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Werbos, P J

    1982-01-01

    The following paper summarizes the major properties and applications of a collection of algorithms involving differentiation and optimization at minimum cost. The areas of application include the sensitivity analysis of models, new work in statistical or econometric estimation, optimization, artificial intelligence and neuron modelling.

  17. Dry transfer of chemical-vapor-deposition-grown graphene onto liquid-sensitive surfaces for tunnel junction applications

    International Nuclear Information System (INIS)

    Feng, Ying; Chen, Ke

    2015-01-01

    We report a dry transfer method that can transfer chemical vapor deposition (CVD) grown graphene onto liquid-sensitive surfaces. The graphene grown on copper (Cu) foil was first transferred onto a freestanding 4 μm thick sputtered Cu film using the conventional wet transfer process, followed by a dry transfer onto the target surface using a polydimethylsiloxane stamp. The dry-transferred graphene has properties similar to traditional wet-transferred graphene, as characterized by scanning electron microscopy, atomic force microscopy, Raman spectroscopy, and electrical transport measurements. It has a sheet resistance of 1.6-3.4 kΩ/□, a hole density of (4.1-5.3) × 10^12 cm^-2, and a hole mobility of 460-760 cm^2 V^-1 s^-1 without doping at room temperature. The results suggest that large-scale CVD-grown graphene can be transferred with good quality and without contaminating the target surface with any liquid. Mg/MgO/graphene tunnel junctions were fabricated using this transfer method. The junctions show good tunneling characteristics, which demonstrates that the transfer technique can also be used to fabricate graphene devices on liquid-sensitive surfaces. (paper)
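
The reported transport numbers are mutually consistent through the usual sheet-resistance relation R_s = 1 / (q · n_s · μ). A quick check with values picked from the ends of the reported ranges (an arithmetic illustration, not a reanalysis of the paper's data):

```python
# Sheet resistance from carrier density and mobility: R_s = 1 / (q * n_s * mu).
q = 1.602e-19      # elementary charge, C
n_s = 4.1e12       # holes per cm^2 (lower end of the reported density range)
mu = 760.0         # cm^2 V^-1 s^-1 (upper end of the reported mobility range)
R_s = 1.0 / (q * n_s * mu)   # ohms per square (cm^2 units cancel)
# R_s is about 2.0 kOhm/sq, inside the reported 1.6-3.4 kOhm/sq window.
```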

  18. Evaluating sensitivity of unsaturated soil properties

    International Nuclear Information System (INIS)

    Abdel-Rahman, R.O.; El-Kamash, A.M.; Nagy, M.E.; Khalill, M.Y.

    2005-01-01

    The assessment of near-surface disposal performance relies on numerical models of groundwater flow and contaminant transport. These models use the unsaturated soil properties as input parameters, which are subject to uncertainty due to measurement errors and spatial variability in the subsurface environment. Parametric sensitivity analysis is used to ascertain how much the model output depends on the unsaturated soil properties. In this paper, a parametric sensitivity analysis of the Van Genuchten moisture retention characteristic (VGMRC) model is presented and conducted to evaluate the relative importance of the unsaturated soil properties under different pressure head values that represent various dry and wet conditions. (author)
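
The van Genuchten retention curve named in this abstract, together with a one-at-a-time local sensitivity of water content to each parameter, can be sketched in a few lines. This is an illustrative sketch, not the authors' code; the loam-like parameter values are hypothetical:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Van Genuchten retention curve: volumetric water content at pressure
    head h (cm, negative in the unsaturated zone), with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m

def local_sensitivity(h, params, rel_step=0.01):
    """Normalized one-at-a-time sensitivity (d theta / d p) * (p / theta)
    of water content to each parameter, by central differences."""
    base = van_genuchten(h, **params)
    sens = {}
    for name, value in params.items():
        lo, hi = dict(params), dict(params)
        lo[name] = value * (1.0 - rel_step)
        hi[name] = value * (1.0 + rel_step)
        d_theta = van_genuchten(h, **hi) - van_genuchten(h, **lo)
        sens[name] = (d_theta / (2.0 * rel_step * value)) * (value / base)
    return sens

# Hypothetical loam-like parameters, evaluated from wet to dry conditions
params = {"theta_r": 0.078, "theta_s": 0.43, "alpha": 0.036, "n": 1.56}
for h in (-10.0, -100.0, -1000.0):
    print(h, {k: round(v, 3) for k, v in local_sensitivity(h, params).items()})
```

The normalized coefficients make the parameter rankings comparable across wet and dry pressure heads, which is the comparison the abstract describes.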

  19. ER-mitochondria contacts control surface glycan expression and sensitivity to killer lymphocytes in glioma stem-like cells.

    Science.gov (United States)

    Bassoy, Esen Yonca; Kasahara, Atsuko; Chiusolo, Valentina; Jacquemin, Guillaume; Boydell, Emma; Zamorano, Sebastian; Riccadonna, Cristina; Pellegatta, Serena; Hulo, Nicolas; Dutoit, Valérie; Derouazi, Madiha; Dietrich, Pierre Yves; Walker, Paul R; Martinvalet, Denis

    2017-06-01

    Glioblastoma is a highly heterogeneous aggressive primary brain tumor, with the glioma stem-like cells (GSC) being more sensitive to cytotoxic lymphocyte-mediated killing than glioma differentiated cells (GDC). However, the mechanism behind this higher sensitivity is unclear. Here, we found that the mitochondrial morphology of GSCs modulates the ER-mitochondria contacts that regulate the surface expression of sialylated glycans and their recognition by cytotoxic T lymphocytes and natural killer cells. GSCs displayed diminished ER-mitochondria contacts compared to GDCs. Forced ER-mitochondria contacts in GSCs increased their cell surface expression of sialylated glycans and reduced their susceptibility to cytotoxic lymphocytes. Therefore, mitochondrial morphology and dynamism dictate the ER-mitochondria contacts in order to regulate the surface expression of certain glycans and thus play a role in GSC recognition and elimination by immune effector cells. Targeting the mitochondrial morphology, dynamism, and contacts with the ER could be an innovative strategy to deplete the cancer stem cell compartment to successfully treat glioblastoma. © 2017 The Authors.

  20. A factorial assessment of the sensitivity of the BATS land-surface parameterization scheme. [BATS (Biosphere-Atmosphere Transfer Scheme)

    Energy Technology Data Exchange (ETDEWEB)

    Henderson-Sellers, A. (Macquarie Univ., North Ryde, New South Wales (Australia))

    1993-02-01

    Land-surface schemes developed for incorporation into global climate models include parameterizations that are not yet fully validated and depend upon the specification of a large (20-50) number of ecological and soil parameters, the values of which are not yet well known. There are two methods of investigating the sensitivity of a land-surface scheme to prescribed values: simple one-at-a-time changes or factorial experiments. Factorial experiments offer information about interactions between parameters and are thus a more powerful tool. Here the results of a suite of factorial experiments are reported. These are designed (i) to illustrate the usefulness of this methodology and (ii) to identify factors important to the performance of complex land-surface schemes. The Biosphere-Atmosphere Transfer Scheme (BATS) is used and its sensitivity is considered (a) to prescribed ecological and soil parameters and (b) to atmospheric forcing used in the off-line tests undertaken. Results indicate that the most important atmospheric forcings are mean monthly temperature and the interaction between mean monthly temperature and total monthly precipitation, although fractional cloudiness and other parameters are also important. The most important ecological parameters are vegetation roughness length, soil porosity, and a factor describing the sensitivity of the stomatal resistance of vegetation to the amount of photosynthetically active solar radiation and, to a lesser extent, soil and vegetation albedos. Two-factor interactions including vegetation roughness length are more important than many of the 23 specified single factors. The results of factorial sensitivity experiments such as these could form the basis for intercomparison of land-surface parameterization schemes and for field experiments and satellite-based observation programs aimed at improving evaluation of important parameters.
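
The factorial methodology described above, where main effects and two-factor interactions are estimated from runs at all level combinations, can be sketched as follows. This is a minimal illustration with a made-up two-factor surrogate, not the BATS scheme itself:

```python
import itertools
import numpy as np

def factorial_effects(model, low, high):
    """Full 2^k factorial design: estimate main effects and two-factor
    interactions of a model evaluated at all corner combinations."""
    names = list(low)
    signs_list, ys = [], []
    for signs in itertools.product((-1, 1), repeat=len(names)):
        x = {n: (high[n] if s > 0 else low[n]) for n, s in zip(names, signs)}
        signs_list.append(signs)
        ys.append(model(x))
    S, y = np.array(signs_list, dtype=float), np.array(ys, dtype=float)
    half = len(y) / 2
    main = {n: float(S[:, i] @ y) / half for i, n in enumerate(names)}
    inter = {(names[i], names[j]): float((S[:, i] * S[:, j]) @ y) / half
             for i, j in itertools.combinations(range(len(names)), 2)}
    return main, inter

# Toy surrogate for a land-surface response (purely illustrative): the
# temperature-precipitation interaction matters on top of the main effects.
surrogate = lambda x: 2.0 * x["T"] + 0.5 * x["P"] + 0.8 * x["T"] * x["P"]
main, inter = factorial_effects(surrogate,
                                low={"T": -1, "P": -1}, high={"T": 1, "P": 1})
print(main)    # main effects ~ {'T': 4.0, 'P': 1.0}
print(inter)   # interaction ~ {('T', 'P'): 1.6}
```

Because every run contributes to every effect estimate, the interaction term comes at no extra simulation cost — the advantage over one-at-a-time changes that the abstract emphasizes.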

  1. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    Science.gov (United States)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  2. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners for conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. An overview of the design and analysis of simulation experiments for sensitivity analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs

  4. Sensitivity analysis of a greedy heuristic for knapsack problems

    NARCIS (Netherlands)

    Ghosh, D; Chakravarti, N; Sierksma, G

    2006-01-01

    In this paper, we carry out a parametric analysis as well as a tolerance-limit-based sensitivity analysis of a greedy heuristic for two knapsack problems: the 0-1 knapsack problem and the subset sum problem. We carry out the parametric analysis based on all problem parameters. In the tolerance limit
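
The greedy heuristic in question, together with a crude tolerance-style sensitivity check, can be sketched as follows. This is an illustrative sketch under simplifying assumptions — the scan-based tolerance search below is a brute-force stand-in, not the authors' analytical tolerance limits:

```python
def greedy_knapsack(values, weights, capacity):
    """Greedy heuristic for the 0-1 knapsack: take items in decreasing
    value/weight order while they still fit."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    chosen, total_w, total_v = [], 0, 0
    for i in order:
        if total_w + weights[i] <= capacity:
            chosen.append(i)
            total_w += weights[i]
            total_v += values[i]
    return sorted(chosen), total_v

def value_tolerance(values, weights, capacity, i, step=0.01, max_rel=1.0):
    """Largest relative perturbation of values[i] (scanned in 'step'
    increments, both directions) that leaves the greedy solution unchanged."""
    base, _ = greedy_knapsack(values, weights, capacity)
    tol, r = 0.0, step
    while r <= max_rel:
        for sign in (-1, 1):
            v = list(values)
            v[i] = values[i] * (1 + sign * r)
            if greedy_knapsack(v, weights, capacity)[0] != base:
                return tol
        tol, r = r, r + step
    return tol

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50
print(greedy_knapsack(values, weights, capacity))   # ([0, 1], 160)
print(value_tolerance(values, weights, capacity, i=0))
```

Here item 0's value can drop by roughly a third before the greedy ordering, and hence the selected item set, changes — a numeric analogue of a tolerance limit for that parameter.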

  5. Application of perturbation methods and sensitivity analysis to water hammer problems in hydraulic networks

    International Nuclear Information System (INIS)

    Balino, Jorge L.; Larreteguy, Axel E.; Andrade Lima, Fernando R.

    1995-01-01

    The differential method was applied to the sensitivity analysis of water hammer problems in hydraulic networks. Starting from the classical water hammer equations for a single-phase liquid with friction, the state vector comprising the piezometric head and the velocity was defined. Applying the differential method, the adjoint operator, the adjoint equations with the general form of their boundary conditions, and the general form of the bilinear concomitant were calculated. The discretized adjoint equations and the corresponding boundary conditions were programmed and solved using the so-called method of characteristics. As an example, a constant-level tank connected through a pipe to a valve discharging to the atmosphere was considered. The bilinear concomitant was calculated for this particular case. The corresponding sensitivity coefficients due to the variation of different parameters were also calculated, using both the differential method and the response surface generated by the computer code WHAT. The results obtained with these methods show excellent agreement. (author). 11 refs, 2 figs, 2 tabs
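
The forward solve underlying this analysis — the method of characteristics for water hammer — can be sketched for a single frictionless pipe. This is a minimal illustration of the forward problem only (reservoir upstream, instantaneous valve closure downstream), not the adjoint calculation of the paper; the geometry and flow values are hypothetical:

```python
import numpy as np

def max_surge_head(L=1000.0, a=1000.0, g=9.81, H0=100.0, V0=1.0, N=20, t_end=4.0):
    """Frictionless water hammer in a single pipe by the method of
    characteristics: constant-head reservoir upstream, valve closing
    instantaneously at t = 0 downstream. Returns the maximum head at the valve."""
    dx, B = L / N, a / g                 # node spacing; characteristic impedance a/g
    dt = dx / a                          # MOC time step tied to the grid
    H = np.full(N + 1, H0)               # initial steady state: flat head line
    V = np.full(N + 1, V0)
    max_head = H0
    for _ in range(int(t_end / dt)):
        Hn, Vn = H.copy(), V.copy()
        for i in range(1, N):            # interior nodes: C+ and C- characteristics
            Cp = H[i - 1] + B * V[i - 1]
            Cm = H[i + 1] - B * V[i + 1]
            Hn[i] = 0.5 * (Cp + Cm)
            Vn[i] = (Cp - Cm) / (2 * B)
        Hn[0] = H0                       # reservoir: fixed head
        Vn[0] = (H0 - (H[1] - B * V[1])) / B
        Vn[N] = 0.0                      # closed valve: zero velocity
        Hn[N] = H[N - 1] + B * V[N - 1]
        H, V = Hn, Vn
        max_head = max(max_head, float(H[N]))
    return max_head

# Joukowsky surge: dH = a*V0/g, about 101.9 m on top of the 100 m steady head
print(round(max_surge_head(), 2))
```

With friction switched off, the computed surge matches the Joukowsky value exactly, which makes the sketch a convenient self-check before adding friction terms or adjoint machinery.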

  6. Surface analysis of selected hydrophobic materials

    Science.gov (United States)

    Wisniewska, Sylwia Katarzyna

    This dissertation contains a series of studies on hydrophobic surfaces by various surface-sensitive techniques such as contact angle measurements, Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), and atomic force microscopy (AFM). Hydrophobic surfaces have been classified as mineral surfaces, organic synthetic surfaces, or natural biological surfaces. As a model hydrophobic mineral surface, elemental sulfur has been selected. The sulfur surface has been characterized for selected allotropic forms of sulfur such as rhombic, monoclinic, plastic, and cyclohexasulfur. Additionally, dextrin adsorption at the sulfur surface was measured. The structure of a dextrin molecule showing hydrophobic sites has been presented to support the proposed hydrophobic bonding nature of dextrin adsorption at the sulfur surface. As a model organic hydrophobic surface, primary fatty amines such as dodecylamine, hexadecylamine, and octadecylamine were chosen. An increase of hydrophobicity, significant changes of infrared bands, and surface topographical changes with time were observed for each amine. Based on the results it was concluded that hydrocarbon chain rearrangement associated with recrystallization took place at the surface during contact with air. A barley straw surface was selected as a model of biological hydrophobic surfaces. The differences in the contact angles for various straw surfaces were explained by the presence of a wax layer. SEM images confirmed the heterogeneity and complexity of the wax crystal structure. AFM measurements provided additional structural details including a measure of surface roughness. Additionally, straw degradation as a result of conditioning in an aqueous environment was studied. Significant contact angle changes were observed as soon as one day after conditioning. FTIR studies showed a gradual wax layer removal due to straw surface decomposition. SEM and AFM images revealed topographical changes and biological

  7. Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    The WIMS-AECL code has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to a number of input choices, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied this sensitivity with respect to these parameters and recommend proper values, which are necessary for carrying out the lattice analysis of DUPIC fuel.

  8. Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    The WIMS-AECL code has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to a number of input choices, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied this sensitivity with respect to these parameters and recommend proper values, which are necessary for carrying out the lattice analysis of DUPIC fuel.

  9. Generalized tolerance sensitivity and DEA metric sensitivity

    OpenAIRE

    Neralić, Luka; E. Wendell, Richard

    2015-01-01

    This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  10. Generalized tolerance sensitivity and DEA metric sensitivity

    Directory of Open Access Journals (Sweden)

    Luka Neralić

    2015-03-01

    Full Text Available This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  11. Sensitivity of biogenic volatile organic compounds to land surface parameterizations and vegetation distributions in California

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Chun; Huang, Maoyi; Fast, Jerome D.; Berg, Larry K.; Qian, Yun; Guenther, Alex; Gu, Dasa; Shrivastava, Manish; Liu, Ying; Walters, Stacy; Pfister, Gabriele; Jin, Jiming; Shilling, John E.; Warneke, Carsten

    2016-01-01

    Current climate models still have large uncertainties in estimating biogenic trace gases, which can significantly affect atmospheric chemistry and secondary aerosol formation and ultimately influence air quality and aerosol radiative forcing. These uncertainties result from many factors, including uncertainties in land surface processes and in the specification of vegetation types, both of which can affect the simulated near-surface fluxes of biogenic volatile organic compounds (BVOCs). In this study, the latest version of the Model of Emissions of Gases and Aerosols from Nature (MEGAN v2.1) is coupled within the land surface scheme CLM4 (Community Land Model version 4.0) in the Weather Research and Forecasting model with chemistry (WRF-Chem). In this implementation, MEGAN v2.1 shares a consistent vegetation map with CLM4 for estimating BVOC emissions. This is unlike MEGAN v2.0 in the public version of WRF-Chem, which uses a stand-alone vegetation map that differs from the one used by the land surface schemes. This improved modeling framework is used to investigate the impact of two land surface schemes, CLM4 and Noah, on BVOCs and to examine the sensitivity of BVOCs to vegetation distributions in California. The measurements collected during the Carbonaceous Aerosol and Radiative Effects Study (CARES) and the California Nexus of Air Quality and Climate Experiment (CalNex) conducted in June 2010 provided an opportunity to evaluate the simulated BVOCs. Sensitivity experiments show that land surface schemes do influence the simulated BVOCs, but the impact is much smaller than that of vegetation distributions. This study indicates that more effort is needed to obtain the most appropriate and accurate land cover data sets for climate and air quality models in terms of simulating BVOCs, oxidant chemistry and, consequently, secondary organic aerosol formation.

  12. Demonstration sensitivity analysis for RADTRAN III

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Reardon, P.C.

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose for combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability-consequence curves.

  13. Sensitivity analysis of water consumption in an office building

    Science.gov (United States)

    Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan

    2018-02-01

    This article deals with a sensitivity analysis of real water consumption in an office building. During a long-term study, a reduction of pressure in its water connection was simulated. A sensitivity analysis of uneven water demand was conducted during working time at various provided pressures and at various time step durations. Correlations between the maximal coefficients of water demand variation during working time and the provided pressure were suggested. The influence of the provided pressure in the water connection on the mean coefficients of water demand variation was pointed out, both for the working hours of all days together and separately for days with identical working hours.

  14. An Overview of the Design and Analysis of Simulation Experiments for Sensitivity Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2004-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys classic and modern designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs assume a

  15. A sensitivity analysis of hazardous waste disposal site climatic and soil design parameters using HELP3

    International Nuclear Information System (INIS)

    Adelman, D.D.; Stansbury, J.

    1997-01-01

    The Resource Conservation and Recovery Act (RCRA) Subtitle C, the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), and subsequent amendments have formed a comprehensive framework to deal with hazardous wastes on the national level. Key to this waste management is guidance on the design (e.g., cover and bottom leachate control systems) of hazardous waste landfills. The objective of this research was to investigate the sensitivity of leachate volume at hazardous waste disposal sites to climatic, soil cover, and vegetative cover (Leaf Area Index) conditions. The computer model HELP3, which has the capability to simulate the double bottom liner systems called for in hazardous waste disposal sites, was used in the analysis. HELP3 was used to model 54 combinations of climatic conditions, disposal site soil surface curve numbers, and leaf area index values to investigate how sensitive disposal site leachate volume was to these three variables. Results showed that leachate volume from the bottom double liner system was not sensitive to these parameters. However, the cover liner system leachate volume was quite sensitive to climatic conditions and less sensitive to Leaf Area Index and curve number values. Since humid locations had considerably more cover liner system leachate volume than arid locations, different design standards may be appropriate for humid conditions than for arid conditions.

  16. Sensitivity analysis in the WWTP modelling community – new opportunities and applications

    DEFF Research Database (Denmark)

    Sin, Gürkan; Ruano, M.V.; Neumann, Marc B.

    2010-01-01

    A mainstream viewpoint on sensitivity analysis in the wastewater modelling community is that it is a first-order differential analysis of outputs with respect to the parameters – typically obtained by perturbing one parameter at a time with a small factor. An alternative viewpoint on sensitivity... ...design (BSM1 plant layout) using Standardized Regression Coefficients (SRC) and (ii) applying sensitivity analysis to help fine-tune a fuzzy controller for a BNPR plant using Morris Screening. The results obtained from each case study are then critically discussed in view of practical applications...

  17. Contribution to the sample mean plot for graphical and numerical sensitivity analysis

    International Nuclear Information System (INIS)

    Bolado-Lavin, R.; Castaings, W.; Tarantola, S.

    2009-01-01

    The contribution to the sample mean plot, originally proposed by Sinclair, is revived and further developed as a practical tool for global sensitivity analysis. The potential of this simple and versatile graphical tool is discussed. Beyond the qualitative assessment provided by this approach, a statistical test is proposed for sensitivity analysis. A case study that simulates the transport of radionuclides through the geosphere from an underground disposal vault containing nuclear waste is considered as a benchmark. The new approach is tested against a very efficient sensitivity analysis method based on state-dependent parameter meta-modelling
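
The contribution-to-the-sample-mean curve can be computed directly from a Monte Carlo sample. This is an illustrative sketch with a made-up test function, not the radionuclide transport benchmark of the paper:

```python
import numpy as np

def csm_curve(x, y):
    """Contribution-to-the-sample-mean curve for one input factor: the
    fraction of the output sum contributed by the lowest q-fraction of x."""
    order = np.argsort(x)
    contrib = np.cumsum(y[order]) / y.sum()
    q = np.arange(1, len(x) + 1) / len(x)
    return q, contrib

rng = np.random.default_rng(1)
n = 10_000
x1, x2 = rng.uniform(size=(2, n))
y = 5.0 * x1 ** 2 + x2            # x1 drives most of the output variance

for name, x in (("x1", x1), ("x2", x2)):
    q, c = csm_curve(x, y)
    # maximum deviation from the diagonal; near 0 for a non-influential factor
    print(name, round(float(np.max(np.abs(c - q))), 3))
```

A curve hugging the diagonal marks an unimportant factor; the larger the bow away from it, the more the factor contributes to the output mean — which is what the plot makes visible at a glance.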

  18. Personalization of models with many model parameters : an efficient sensitivity analysis approach

    NARCIS (Netherlands)

    Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.

    2015-01-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of

  19. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.

  20. Seismic analysis of steam generator and parameter sensitivity studies

    International Nuclear Information System (INIS)

    Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun

    2013-01-01

    Background: The steam generator (SG) serves as the primary means of removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG, comprising the moisture separator assembly and the tube bundle assembly, is created herein. The seismic analysis is performed together with the RCS piping and the Reactor Pressure Vessel (RPV). Results: The seismic stress results of the SG are obtained. In addition, the sensitivity of the seismic analysis results to various parameters is studied, such as the effect of another SG, the supports, the anti-vibration bars (AVBs), and so on. Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, and should be a focus of research and design for new types of NPP steam generators. (authors)

  1. An Application of Monte-Carlo-Based Sensitivity Analysis on the Overlap in Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    S. Razmyan

    2012-01-01

    Full Text Available Discriminant analysis (DA) is used to estimate a discriminant function by minimizing group misclassifications in order to predict the group membership of newly sampled data. A major source of misclassification in DA is the overlapping of groups. The uncertainty in the input variables and model parameters needs to be properly characterized in decision making. This study combines DEA-DA with a sensitivity analysis approach to assess the influence of banks’ variables on the overall variance of the overlap in DA, in order to determine which variables are most significant. A Monte-Carlo-based sensitivity analysis is considered for computing the set of first-order sensitivity indices of the variables to estimate the contribution of each uncertain variable. The results show that the uncertainties in the loans granted and in the different deposit variables are more significant than the uncertainties in the other banks’ variables in decision making.
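
First-order sensitivity indices of the kind mentioned here can be estimated by Monte Carlo with the classic pick-freeze scheme. This is an illustrative sketch with a toy additive model, not the DEA-DA banking model of the paper:

```python
import numpy as np

def first_order_indices(model, n, k, rng):
    """Pick-freeze Monte Carlo estimator of first-order Sobol' indices for a
    model of k independent U(0,1) inputs, vectorized over sample matrices."""
    A = rng.uniform(size=(n, k))
    B = rng.uniform(size=(n, k))
    yA = model(A)
    f0, var = yA.mean(), yA.var()
    S = []
    for i in range(k):
        C = B.copy()
        C[:, i] = A[:, i]            # keep ("freeze") factor i, resample the rest
        S.append(float((np.mean(yA * model(C)) - f0 ** 2) / var))
    return S

# Additive toy model: analytic variance shares are 16/21, 4/21, 1/21
model = lambda X: 4.0 * X[:, 0] + 2.0 * X[:, 1] + X[:, 2]
S = first_order_indices(model, 100_000, 3, np.random.default_rng(0))
print([round(s, 2) for s in S])
```

Each index estimates the fraction of output variance attributable to one input alone, so ranking the indices ranks the variables by importance, as the study does for the banks' variables.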

  2. Steady state likelihood ratio sensitivity analysis for stiff kinetic Monte Carlo simulations.

    Science.gov (United States)

    Núñez, M; Vlachos, D G

    2015-01-28

    Kinetic Monte Carlo simulation is an integral tool in the study of complex physical phenomena present in applications ranging from heterogeneous catalysis to biological systems to crystal growth and atmospheric sciences. Sensitivity analysis is useful for identifying important parameters and rate-determining steps, but the finite-difference application of sensitivity analysis is computationally demanding. Techniques based on the likelihood ratio method reduce the computational cost of sensitivity analysis by obtaining all gradient information in a single run. However, we show that disparity in time scales of microscopic events, which is ubiquitous in real systems, introduces drastic statistical noise into derivative estimates for parameters affecting the fast events. In this work, the steady-state likelihood ratio sensitivity analysis is extended to singularly perturbed systems by invoking partial equilibration for fast reactions, that is, by working on the fast and slow manifolds of the chemistry. Derivatives on each time scale are computed independently and combined to the desired sensitivity coefficients to considerably reduce the noise in derivative estimates for stiff systems. The approach is demonstrated in an analytically solvable linear system.
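
The core likelihood-ratio (score function) idea — recovering a derivative estimate from a single batch of samples by weighting the output with the score of the sampling density — can be illustrated on a trivially solvable case. This sketch is far simpler than the stiff kinetic Monte Carlo setting of the paper:

```python
import numpy as np

def lr_sensitivity(f, theta, n, rng):
    """Likelihood-ratio (score function) estimator of d/d(theta) E[f(X)] for
    X ~ Exponential(rate=theta): E[f(X) * d log p(X; theta)/d theta],
    obtained from a single batch of samples."""
    x = rng.exponential(scale=1.0 / theta, size=n)
    score = 1.0 / theta - x          # score of the exponential density
    return float(np.mean(f(x) * score))

theta = 2.0
est = lr_sensitivity(lambda x: x, theta, 1_000_000, np.random.default_rng(42))
# analytic check: E[X] = 1/theta, so the exact derivative is -1/theta**2 = -0.25
print(round(est, 3))
```

No rerun with a perturbed rate is needed — the same samples yield the gradient — but the estimator's variance grows when the sampled quantity is dominated by fast events, which is exactly the noise problem the abstract addresses with time-scale separation.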

  3. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
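
The correlation-based part of such a sampling study can be sketched as follows. This is an illustrative sketch with a made-up nonlinear model, not the BISON/Dakota setup; Spearman coefficients are computed by correlating ranks:

```python
import numpy as np

def pearson_spearman(X, y):
    """Pearson and Spearman rank correlations of each input column with the
    output; Spearman is the Pearson correlation of the ranks."""
    k = X.shape[1]
    rX = np.argsort(np.argsort(X, axis=0), axis=0)
    ry = np.argsort(np.argsort(y))
    pearson = [float(np.corrcoef(X[:, j], y)[0, 1]) for j in range(k)]
    spearman = [float(np.corrcoef(rX[:, j], ry)[0, 1]) for j in range(k)]
    return pearson, spearman

rng = np.random.default_rng(7)
n = 5_000
X = rng.uniform(size=(n, 3))
# x0 strong and nonlinear, x1 weak and linear, x2 inert
y = np.exp(3.0 * X[:, 0]) + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=n)
p, s = pearson_spearman(X, y)
print([round(v, 2) for v in p])
print([round(v, 2) for v in s])
```

Comparing the two coefficient sets flags monotone-but-nonlinear dependencies: a parameter whose Spearman coefficient is large relative to its Pearson coefficient influences the response nonlinearly.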

  4. Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics

    DEFF Research Database (Denmark)

    Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter

    2014-01-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...

  5. An Introduction to Sensitivity Analysis for Unobserved Confounding in Non-Experimental Prevention Research

    Science.gov (United States)

    Kuramoto, S. Janet; Stuart, Elizabeth A.

    2013-01-01

    Although randomization is the gold standard for estimating causal relationships, many questions in prevention science are left to be answered through non-experimental studies, often because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most non-experimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example we examine the sensitivity of the association between maternal suicide and offspring’s risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall the association between maternal suicide and offspring’s hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscore sensitivity analysis techniques as an important tool for non-experimental studies. The implementation of sensitivity analysis can help increase confidence in results from non-experimental studies and better inform prevention researchers and policymakers regarding potential intervention targets. PMID:23408282
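
One widely used measure in this family — though not necessarily among the seven techniques surveyed in the paper — is the E-value of VanderWeele and Ding: the minimum strength of association, on the risk-ratio scale, that an unobserved confounder would need with both exposure and outcome to fully explain away an observed risk ratio. A minimal sketch:

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio: E = RR + sqrt(RR * (RR - 1)).
    Protective associations (RR < 1) are inverted before applying the formula."""
    if rr < 1.0:
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

print(round(e_value(2.0), 2))   # -> 3.41
```

An observed risk ratio of 2 would thus require a confounder associated with both exposure and outcome by a risk ratio of about 3.4 to be explained away entirely — a quick robustness check in the same spirit as the analyses the abstract describes.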

  6. What can we learn from global sensitivity analysis of biochemical systems?

    Science.gov (United States)

    Kent, Edward; Neumann, Stefan; Kummer, Ursula; Mendes, Pedro

    2013-01-01

    Most biological models of intermediate size, and probably all large models, need to cope with the fact that many of their parameter values are unknown. In addition, it may not be possible to identify these values unambiguously on the basis of experimental data. This raises the question of how reliable predictions made using such models are. Sensitivity analysis is commonly used to measure the impact of each model parameter on its variables. However, the results of such analyses can depend on the exact set of parameter values used, due to nonlinearity. To mitigate this problem, global sensitivity analysis techniques are used to calculate parameter sensitivities over a wider parameter space. We applied global sensitivity analysis to a selection of five signalling and metabolic models, several of which incorporate experimentally well-determined parameters. Assuming these models represent physiological reality, we explored how the results could change under increasing amounts of parameter uncertainty. Our results show that parameter sensitivities calculated with the physiological parameter values are not necessarily the most frequently observed under random sampling, even in a small interval around the physiological values. Often multimodal distributions were observed. Unsurprisingly, the range of possible sensitivity coefficient values increased with the level of parameter uncertainty, though the amount of parameter uncertainty at which the pattern of control was able to change differed among the models analysed. We suggest that this level of uncertainty can be used as a global measure of model robustness. Finally, a comparison of different global sensitivity analysis techniques shows that, if high-throughput computing resources are available, then random sampling may actually be the most suitable technique.
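
    The core idea of resampling parameters around their nominal values and inspecting the resulting distribution of sensitivity coefficients can be sketched on a toy Michaelis-Menten rate law (the model, sampling interval and nominal values below are illustrative, not taken from the paper):

```python
import random
random.seed(0)

def rate(vmax, km, s):
    # Michaelis-Menten rate law: a stand-in for the kinetic models studied
    return vmax * s / (km + s)

def scaled_sensitivity(vmax, km, s, h=1e-6):
    # normalized local sensitivity d(ln v)/d(ln Km) via central differences
    up = rate(vmax, km * (1 + h), s)
    dn = rate(vmax, km * (1 - h), s)
    return (up - dn) / (2 * h * rate(vmax, km, s))

# nominal ("physiological") values -- illustrative only
VMAX, KM, S = 1.0, 0.5, 1.0

# resample parameters uniformly within +/-50% of nominal and collect
# the sensitivity coefficient each time, as in a global analysis
coeffs = []
for _ in range(1000):
    vmax = VMAX * random.uniform(0.5, 1.5)
    km = KM * random.uniform(0.5, 1.5)
    coeffs.append(scaled_sensitivity(vmax, km, S))

print(min(coeffs), max(coeffs))  # the coefficient is a range, not a point
```

Even for this one-equation model the coefficient spreads over a noticeable interval around its nominal value of -1/3, which is the effect the study quantifies for full-size models.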

  7. UMTS Common Channel Sensitivity Analysis

    DEFF Research Database (Denmark)

    Pratas, Nuno; Rodrigues, António; Santos, Frederico

    2006-01-01

    and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels...... is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message to preamble offset, the initial SIR target and the preamble power step while on FACH it is the transmission power offset....

  8. Can feedback analysis be used to uncover the physical origin of climate sensitivity and efficacy differences?

    Science.gov (United States)

    Rieger, Vanessa S.; Dietmüller, Simone; Ponater, Michael

    2017-10-01

    Different strengths and types of radiative forcings cause variations in climate sensitivities and efficacies. To relate these changes to their physical origin, this study tests whether a feedback analysis is a suitable approach. To this end, we apply the partial radiative perturbation method. Combining the forward and backward calculation turns out to be indispensable to ensure the additivity of feedbacks and to yield a closed forcing-feedback balance at the top of the atmosphere. For a set of CO2-forced simulations, the climate sensitivity changes with increasing forcing. The albedo, cloud, and combined water vapour and lapse rate feedbacks are found to be responsible for the variations in the climate sensitivity. An O3-forced simulation (induced by enhanced NOx and CO surface emissions) causes a smaller efficacy than a CO2-forced simulation with a similar magnitude of forcing. We find that the Planck, albedo and most likely the cloud feedback are responsible for this effect. Reducing the radiative forcing impedes the statistical separability of feedbacks. We additionally discuss formal inconsistencies between the common ways of comparing climate sensitivities and feedbacks. Moreover, methodical recommendations for future work are given.

  9. A comprehensive sensitivity and uncertainty analysis of a milk drying process

    DEFF Research Database (Denmark)

    Ferrari, A.; Gutiérrez, S.; Sin, G.

    2015-01-01

    A simple steady state model of a milk drying process was built to help process understanding. It involves a spray chamber and also internal/external fluid beds. The model was subjected to a statistical analysis for quality assurance using sensitivity analysis (SA) of inputs/parameters and an identifiability analysis (IA) technique. SA results provide evidence of over-parameterization in the model, and the chamber inlet dry bulb air temperature was the input with the highest sensitivity. IA results indicated that at most 4 parameters are identifiable: two from the spray chamber and one from each fluid bed dryer...

  10. Repository surface design site layout analysis

    International Nuclear Information System (INIS)

    Montalvo, H.R.

    1998-01-01

    The purpose of this analysis is to establish the arrangement of the Yucca Mountain Repository surface facilities and features near the North Portal. The analysis updates and expands the North Portal area site layout concept presented in the ACD, including changes to reflect the resizing of the Waste Handling Building (WHB), Waste Treatment Building (WTB), Carrier Preparation Building (CPB), and site parking areas; the addition of the Carrier Washdown Buildings (CWBs); the elimination of the Cask Maintenance Facility (CMF); and the development of a concept for site grading and flood control. The analysis also establishes the layout of the surface features (e.g., roads and utilities) that connect all the repository surface areas (North Portal Operations Area, South Portal Development Operations Area, Emplacement Shaft Surface Operations Area, and Development Shaft Surface Operations Area) and locates an area for a potential lag storage facility. Details of South Portal and shaft layouts will be covered in separate design analyses. The objective of this analysis is to provide a suitable level of design for the Viability Assessment (VA). The analysis was revised to incorporate additional material developed since the issuance of Revision 01. This material includes safeguards and security input, utility system input (size and location of fire water tanks and pump houses, potable water and sanitary sewage rates, size of wastewater evaporation pond, size and location of the utility building, size of the bulk fuel storage tank, and size and location of other exterior process equipment), main electrical substation information, redundancy of water supply and storage for the fire support system, and additional information on the storm water retention pond

  11. Sensitivity of Greenland Ice Sheet surface mass balance to surface albedo parameterization: a study with a regional climate model

    OpenAIRE

    Angelen, J. H.; Lenaerts, J. T. M.; Lhermitte, S.; Fettweis, X.; Kuipers Munneke, P.; Broeke, M. R.; Meijgaard, E.; Smeets, C. J. P. P.

    2012-01-01

    We present a sensitivity study of the surface mass balance (SMB) of the Greenland Ice Sheet, as modeled using a regional atmospheric climate model, to various parameter settings in the albedo scheme. The snow albedo scheme uses grain size as a prognostic variable and further depends on cloud cover, solar zenith angle and black carbon concentration. For the control experiment the overestimation of absorbed shortwave radiation (+6%) at the K-transect (west Greenland) for the period 2004–2009 is...

  12. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aiming at achieving a wanted outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • Sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of most influential parts of the functional domain. • We investigate economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs

  13. Deterministic sensitivity analysis of two-phase flow systems: forward and adjoint methods. Final report

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1984-07-01

    This report presents a self-contained mathematical formalism for deterministic sensitivity analysis of two-phase flow systems, a detailed application to sensitivity analysis of the homogeneous equilibrium model of two-phase flow, and a representative application to sensitivity analysis of a model (simulating pump-trip-type accidents in BWRs) where a transition between single phase and two phase occurs. The rigor and generality of this sensitivity analysis formalism stem from the use of Gateaux (G-) differentials. This report highlights the major aspects of deterministic (forward and adjoint) sensitivity analysis, including derivation of the forward sensitivity equations, derivation of sensitivity expressions in terms of adjoint functions, explicit construction of the adjoint system satisfied by these adjoint functions, determination of the characteristics of this adjoint system, and demonstration that these characteristics are the same as those of the original quasilinear two-phase flow equations. This proves that whenever the original two-phase flow problem is solvable, the adjoint system is also solvable and, in principle, the same numerical methods can be used to solve both the original and adjoint equations
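
    The payoff of the adjoint formalism, that one adjoint solve yields the sensitivity of a response to a parameter without re-solving the forward problem, can be sketched on a toy linear steady-state system (a stand-in for the two-phase flow equations; the matrices below are invented for illustration):

```python
import numpy as np

# Steady linear model A(p) u = b with scalar response R = c^T u.
def A(p):
    return np.array([[2.0 + p, -1.0],
                     [-1.0, 2.0]])

b = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])
p0, h = 0.5, 1e-6

u = np.linalg.solve(A(p0), b)

# forward (finite-difference) sensitivity of R to p: re-solve the system
u_pert = np.linalg.solve(A(p0 + h), b)
dR_forward = (c @ u_pert - c @ u) / h

# adjoint sensitivity: one solve of A^T lam = c, then
# dR/dp = -lam^T (dA/dp) u   (b does not depend on p here)
lam = np.linalg.solve(A(p0).T, c)
dA_dp = np.array([[1.0, 0.0],
                  [0.0, 0.0]])
dR_adjoint = -lam @ dA_dp @ u

print(dR_forward, dR_adjoint)  # the two estimates agree (~ -0.125)
```

For many parameters the adjoint route needs only this single extra solve, which is the economy the report's formalism exploits; the report's further point is that the adjoint system shares the characteristics (and hence the numerical methods) of the original equations.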

  14. Sensitivity analysis of periodic errors in heterodyne interferometry

    International Nuclear Information System (INIS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-01-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors
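
    The variance-based (Sobol') part of such an analysis can be sketched with a pick-freeze Monte Carlo estimator on a toy additive model whose first-order indices are known analytically (0.2 and 0.8); the model below is illustrative, not the periodic-error model of the paper:

```python
import random
random.seed(1)

def model(x1, x2):
    # toy model with known Sobol indices: S1 = 0.2, S2 = 0.8 for U(0,1) inputs
    return x1 + 2.0 * x2

N = 100_000
ya, yb1, yb2 = [], [], []
for _ in range(N):
    x1, x2 = random.random(), random.random()
    r1, r2 = random.random(), random.random()
    ya.append(model(x1, x2))
    yb1.append(model(x1, r2))   # x1 frozen, x2 resampled
    yb2.append(model(r1, x2))   # x2 frozen, x1 resampled

mean = sum(ya) / N
var = sum((y - mean) ** 2 for y in ya) / N

def first_order(yfrozen):
    # pick-freeze estimator: S_i = cov(Y, Y_i) / var(Y)
    cov = sum(a * b for a, b in zip(ya, yfrozen)) / N - mean ** 2
    return cov / var

print(round(first_order(yb1), 2), round(first_order(yb2), 2))  # ~0.2 ~0.8
```

The same machinery, applied to the analytical periodic-error model, yields the paper's finding that different input imperfections dominate the first- and second-order errors.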

  16. Integrated thermal and nonthermal treatment technology and subsystem cost sensitivity analysis

    International Nuclear Information System (INIS)

    Harvego, L.A.; Schafer, J.J.

    1997-02-01

    The U.S. Department of Energy's (DOE) Environmental Management Office of Science and Technology (EM-50) authorized studies on alternative systems for treating contact-handled DOE mixed low-level radioactive waste (MLLW). The on-going Integrated Thermal Treatment Systems' (ITTS) and the Integrated Nonthermal Treatment Systems' (INTS) studies satisfy this request. EM-50 further authorized supporting studies including this technology and subsystem cost sensitivity analysis. This analysis identifies areas where technology development could have the greatest impact on total life cycle system costs. These areas are determined by evaluating the sensitivity of system life cycle costs relative to changes in life cycle component or phase costs, subsystem costs, contingency allowance, facility capacity, operating life, and disposal costs. For all treatment systems, the most cost sensitive life cycle phase is the operations and maintenance phase and the most cost sensitive subsystem is the receiving and inspection/preparation subsystem. These conclusions were unchanged when the sensitivity analysis was repeated on a present value basis. Opportunity exists for technology development to reduce waste receiving and inspection/preparation costs by effectively minimizing labor costs, the major cost driver, within the maintenance and operations phase of the life cycle

  17. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
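
    The first (screening) step can be sketched with Morris elementary effects: random one-at-a-time trajectories produce a set of effects per input, and the mean absolute effect mu* ranks the inputs. The toy three-input model below is illustrative, not the vascular access model:

```python
import random
random.seed(2)

def model(x):
    # toy model: x[0] strong, x[1] weak, x[2] inactive
    return 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

k, r, delta = 3, 50, 0.5   # number of inputs, trajectories, step size

effects = [[] for _ in range(k)]
for _ in range(r):
    x = [random.uniform(0.0, 0.5) for _ in range(k)]   # leave room for +delta
    order = random.sample(range(k), k)                 # random OAT order
    y = model(x)
    for i in order:
        x[i] += delta
        y_new = model(x)
        effects[i].append((y_new - y) / delta)         # elementary effect
        y = y_new

# mu* (mean absolute elementary effect) ranks inputs for screening
mu_star = [sum(abs(e) for e in es) / len(es) for es in effects]
print([round(m, 1) for m in mu_star])  # [10.0, 1.0, 0.0]
```

Inputs with negligible mu* are dropped before the expensive gPCE step, which is where the two orders of magnitude in computational savings come from.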

  18. Source apportionment and sensitivity analysis: two methodologies with two different purposes

    Science.gov (United States)

    Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe

    2017-11-01

    This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and decoupled direct method (DDM). A simple theoretical example to compare these approaches is used highlighting differences and potential implications for policy. When the relationships between concentration and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used indifferently for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
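
    The non-additivity that separates impacts from contributions is easy to see in a toy nonlinear model: the brute-force impacts of two sources do not sum to the base concentration (the chemistry here is purely illustrative):

```python
# Nonlinear toy chemistry: concentration responds quadratically to the
# total emission from two sources.
def concentration(e1, e2):
    return (e1 + e2) ** 2

E1, E2 = 3.0, 1.0
base = concentration(E1, E2)              # 16.0

# brute-force "impact": concentration change when a source is removed
impact1 = base - concentration(0.0, E2)   # 16 - 1 = 15
impact2 = base - concentration(E1, 0.0)   # 16 - 9 = 7

print(impact1 + impact2, base)  # prints 22.0 16.0: impacts over-count
```

Under a linear `concentration`, the two impacts would sum exactly to the base value and the distinction would vanish, which is the paper's criterion for when the two methodologies are interchangeable.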

  19. EV range sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ostafew, C. [Azure Dynamics Corp., Toronto, ON (Canada)

    2010-07-01

    This presentation included a sensitivity analysis of electric vehicle components on overall efficiency. The presentation provided an overview of drive cycles and discussed the major contributors to range in terms of rolling resistance; aerodynamic drag; motor efficiency; and vehicle mass. Drive cycles that were presented included: New York City Cycle (NYCC); urban dynamometer drive cycle; and US06. A summary of the findings was presented for each of the major contributors. Rolling resistance was found to have a balanced effect on each drive cycle and to be proportional to range. In terms of aerodynamic drag, there was a large effect on US06 range. A large effect was also found on NYCC range in terms of motor efficiency and vehicle mass. figs.
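
    A back-of-envelope version of such an analysis, using a constant-speed approximation of a drive cycle and invented vehicle numbers (not from the presentation), looks like:

```python
# Back-of-envelope EV range model; all numbers are illustrative.
G, RHO = 9.81, 1.2  # gravity (m/s^2), air density (kg/m^3)

def ev_range_km(mass=1500.0, crr=0.01, cda=0.7, v=15.0,
                motor_eff=0.85, battery_kwh=30.0):
    rolling = mass * G * crr                  # rolling resistance force, N
    aero = 0.5 * RHO * cda * v ** 2           # aerodynamic drag force, N
    wh_per_km = (rolling + aero) * 1000.0 / 3600.0 / motor_eff
    return battery_kwh * 1000.0 / wh_per_km

base = ev_range_km()

def pct_sensitivity(**kw):
    # range change (%) for a +10% change in one input
    return 100.0 * (ev_range_km(**kw) - base) / base

print(round(pct_sensitivity(mass=1650.0), 1),   # +10% mass
      round(pct_sensitivity(cda=0.77), 1))      # +10% drag area
```

Raising the cycle speed `v` shifts the sensitivity from the mass/rolling term toward the quadratic aerodynamic term, consistent with the large drag effect reported for the high-speed US06 cycle.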

  20. Systemization of burnup sensitivity analysis code (2) (Contract research)

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2008-08-01

    Towards the practical use of fast reactors, it is very important to improve the prediction accuracy of neutronic properties in LMFBR cores, both to improve plant economic efficiency with rationally high-performance cores and to improve reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss, breeding ratio and so on. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Burnup sensitivity analysis is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because the complexity of burnup sensitivity theory and the limitations of the system place a heavy burden on users. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. Simply unifying each computational component is not sufficient, because the computational sequence may change with each item being analyzed or with the purpose, such as the interpretation of physical meaning. Therefore, the current code for burnup sensitivity analysis needs to be systemized with functional component blocks that can be divided or combined as the occasion demands.

  1. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    Science.gov (United States)

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters has great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian river watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. For hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For soil parameters, K was quite sensitive to all the results except runoff, while the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration. The runoff simulation results for the study area also proved that the sensitivity analysis was practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
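
    The perturbation method used here reduces to a relative sensitivity index: the fractional change in output divided by the fractional change in input. A minimal sketch, with a hypothetical power-law runoff response (not an actual AnnAGNPS equation):

```python
def sensitivity_index(model, x0, rel_step=0.1):
    # perturbation-method index: relative output change divided by the
    # relative input change; |index| >> 1 flags a sensitive parameter
    y0 = model(x0)
    y1 = model(x0 * (1.0 + rel_step))
    return ((y1 - y0) / y0) / rel_step

# hypothetical runoff response to a curve-number-like parameter
runoff = lambda cn: 0.002 * cn ** 2.5

print(round(sensitivity_index(runoff, 75.0), 2))  # 2.69 -> highly sensitive
```

For a power law the index tends to the exponent (here 2.5) as the step shrinks, and is independent of the nominal value, which makes such indices comparable across parameters of very different magnitudes.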

  2. Semianalytic Design Sensitivity Analysis of Nonlinear Structures With a Commercial Finite Element Package

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Yoo, Jung Hun; Choi, Hyeong Cheol

    2002-01-01

    A finite element package is often used as a daily design tool by engineering designers to analyze and improve a design. Finite element analysis can provide the responses of a system for given design variables. Although it can quite well provide the structural behaviors for given design variables, it cannot provide enough information to improve the design, such as design sensitivity coefficients. Design sensitivity analysis is an essential step to predict the change in responses due to a change in design variables and to optimize a system with the aid of gradient-based optimization techniques. To develop a numerical method of design sensitivity analysis, analytical derivatives based on analytical differentiation of the continuous or discrete finite element equations are effective, but they are difficult to obtain because commercial finite element packages do not expose internal information such as shape functions. Therefore, design sensitivity analysis outside of the finite element package is necessary for practical application in an industrial setting. In this paper, the semi-analytic method is used to develop a design sensitivity module outside of the commercial finite element package ANSYS. The direct differentiation method is employed to compute the design derivatives of the response, and the pseudo-load for design sensitivity analysis is effectively evaluated by using the design variation of the related internal nodal forces. In particular, an effective method for stress and nonlinear design sensitivity analyses that is independent of the commercial finite element package is suggested. Numerical examples are illustrated to show the accuracy and efficiency of the developed method and to provide insights for implementation of the suggested method into other commercial finite element packages.
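
    The semi-analytic idea, finite-difference only the stiffness matrix, form a pseudo-load, and reuse the already-solved system, can be sketched on a two-degree-of-freedom spring model (illustrative; not tied to ANSYS internals):

```python
import numpy as np

# Linear static model K(b) u = f, where the design variable b is the
# stiffness of one spring in a two-spring chain.
def K(b):
    return np.array([[b + 2.0, -2.0],
                     [-2.0, 2.0]])

f = np.array([0.0, 1.0])
b0, h = 4.0, 1e-6

u = np.linalg.solve(K(b0), f)

# semi-analytic method: finite-difference the stiffness matrix only,
# form the pseudo-load, and solve the SAME system once more
dK_db = (K(b0 + h) - K(b0 - h)) / (2.0 * h)
pseudo_load = -dK_db @ u           # df/db = 0 here
du_db = np.linalg.solve(K(b0), pseudo_load)

# cross-check against overall finite differences of the response itself
du_fd = (np.linalg.solve(K(b0 + h), f) - u) / h
print(du_db, du_fd)  # both ~ [-0.0625, -0.0625]
```

The benefit over overall finite differences is that only the cheap stiffness evaluation is differenced; the expensive factorization of `K(b0)` is reused, and the pseudo-load plays exactly the role of the internal-nodal-force variation described in the abstract.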

  3. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360

    Energy Technology Data Exchange (ETDEWEB)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
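
    The uncertainty propagation such codes perform follows the first-order "sandwich rule", variance(R) = S^T C S, combining a sensitivity profile S with a covariance matrix C. A sketch with invented three-group numbers (not SENSIT input or output):

```python
import numpy as np

# Sandwich rule: variance of an integral response R from a relative
# sensitivity profile S and a relative covariance matrix C.
S = np.array([0.8, -0.3, 0.1])           # relative sensitivities per group
rel_sd = np.array([0.05, 0.10, 0.20])    # 5/10/20 % cross-section uncertainty
corr = np.array([[1.0, 0.5, 0.0],
                 [0.5, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])       # assumed correlation matrix
C = np.outer(rel_sd, rel_sd) * corr      # relative covariance matrix

var_R = S @ C @ S
print(round(float(np.sqrt(var_R)) * 100, 2), "% relative standard deviation")
```

A large group uncertainty only matters where the sensitivity profile is large, which is why the sensitivity profiles and the covariance data must be supplied on the same group structure.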

  4. Highly sensitive photoelectrochemical biosensor for kinase activity detection and inhibition based on the surface defect recognition and multiple signal amplification of metal-organic frameworks.

    Science.gov (United States)

    Wang, Zonghua; Yan, Zhiyong; Wang, Feng; Cai, Jibao; Guo, Lei; Su, Jiakun; Liu, Yang

    2017-11-15

    A turn-on photoelectrochemical (PEC) biosensor based on the surface defect recognition and multiple signal amplification of metal-organic frameworks (MOFs) was proposed for highly sensitive protein kinase activity analysis and inhibitor evaluation. In this strategy, based on the phosphorylation reaction in the presence of protein kinase A (PKA), Zr-based metal-organic frameworks (UiO-66) accommodating [Ru(bpy)3]2+ photoactive dyes in their pores were linked to the phosphorylated-kemptide-modified TiO2/ITO electrode through chelation between the Zr4+ defects on the surface of UiO-66 and the phosphate groups in kemptide. Under visible light irradiation, the excited electrons from [Ru(bpy)3]2+ adsorbed in the pores of UiO-66 injected into the TiO2 conduction band to generate photocurrent, which could be utilized for protein kinase activity detection. The large surface area and high porosity of UiO-66 accommodated a large number of [Ru(bpy)3]2+ ions, which increased the photocurrent significantly and afforded a highly sensitive PEC analysis of kinase activity. The detection limit of the as-proposed PEC biosensor was 0.0049 U mL-1 (S/N = 3). The biosensor was also applied for quantitative kinase inhibitor evaluation and PKA activity detection in MCF-7 cell lysates. The developed visible-light PEC biosensor provides a simple detection procedure and a cost-effective manner for PKA activity assays, and shows great potential in clinical diagnosis and drug discovery. Copyright © 2017 Elsevier B.V. All rights reserved.
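
    The quoted detection limit follows the common S/N = 3 convention, LOD = 3 x (blank noise) / (calibration slope). The calibration numbers below are invented purely to illustrate the arithmetic, not taken from the paper:

```python
# S/N = 3 detection-limit convention used in analytical chemistry.
def detection_limit(sigma_blank, slope, k=3.0):
    return k * sigma_blank / slope

# hypothetical calibration: photocurrent (nA) vs PKA activity (U/mL)
sigma_blank_nA = 0.12      # std. dev. of the blank photocurrent
slope_nA_per_UmL = 73.5    # slope of the linear calibration curve

print(round(detection_limit(sigma_blank_nA, slope_nA_per_UmL), 4), "U/mL")
```

Anything that raises the calibration slope, such as the MOF-mediated photocurrent amplification described here, lowers the detection limit proportionally.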

  5. Contribution of the surface contamination of uranium-materials on the quantitative analysis results by electron probe microbeam analysis

    International Nuclear Information System (INIS)

    Bonino, O.; Fournier, C.; Fucili, C.; Dugne, O.; Merlet, C.

    2000-01-01

    The analytical testing of uranium materials is necessary for quality research and development in nuclear industry applications (enrichment, safety studies, fuel, etc). Electron Probe Microbeam Analysis with Wavelength Dispersive Spectrometry (EPMA-WDS) is a dependable non-destructive analytical technique. The characteristic X-ray signal is measured to identify and quantify the sample components, and the analyzed volume is about one cubic micron. The surface contamination of uranium materials modifies and contributes to the quantitative analysis results of EPMA-WDS. This contribution is not representative of the bulk. A thin oxidized layer appears in the first instants after preparation (burnishing, cleaning), as well as a carbon contamination layer due to metallographic preparation and carbon cracking under the impact of the electron probe. Several analytical difficulties subsequently arise, including an overlapping line between the carbon Kα ray and the uranium N IV-O VI ray. Sensitivity and accuracy of the quantification of light elements like carbon and oxygen are also reduced by the presence of uranium. The aim of this study was to improve the accuracy of quantitative analysis of uranium materials by EPMA-WDS by taking account of the contribution of surface contamination. The first part of this paper is devoted to the study of the contaminated surface of the uranium materials U, UFe2 and U6Fe a few hours after preparation. These oxidation conditions were selected so as to reproduce the same contamination surfaces occurring under microprobe analytical conditions. The surface characterization techniques were SIMS and Auger spectroscopy. The contaminated surfaces consist of successive layers: a carbon layer, an oxidized iron layer, followed by an iron depletion layer (only in UFe2 and U6Fe), and a ternary oxide layer (U-Fe-O for UFe2 and U6Fe, UO2+x for uranium). The second part of the paper addresses the estimation of the errors in quantitative

  6. The effect of spatial micro-CT image resolution and surface complexity on the morphological 3D analysis of open porous structures

    Energy Technology Data Exchange (ETDEWEB)

    Pyka, Grzegorz, E-mail: gregory.pyka@mtm.kuleuven.be [Department of Metallurgy and Materials Engineering, KU Leuven, Kasteelpark Arenberg 44 – PB2450, B-3001 Leuven (Belgium); Kerckhofs, Greet [Department of Metallurgy and Materials Engineering, KU Leuven, Kasteelpark Arenberg 44 – PB2450, B-3001 Leuven (Belgium); Biomechanics Research Unit, Université de Liege, Chemin des Chevreuils 1 - BAT 52/3, B-4000 Liège (Belgium); Schrooten, Jan; Wevers, Martine [Department of Metallurgy and Materials Engineering, KU Leuven, Kasteelpark Arenberg 44 – PB2450, B-3001 Leuven (Belgium)

    2014-01-15

    In material science microfocus X-ray computed tomography (micro-CT) is one of the most popular non-destructive techniques to visualise and quantify the internal structure of materials in 3D. Despite constant system improvements, state-of-the-art micro-CT images can still hold several artefacts typical for X-ray CT imaging that hinder further image-based processing, structural and quantitative analysis. For example spatial resolution is crucial for an appropriate characterisation as the voxel size essentially influences the partial volume effect. However, defining the adequate image resolution is not a trivial aspect and understanding the correlation between scan parameters like voxel size and the structural properties is crucial for comprehensive material characterisation using micro-CT. Therefore, the objective of this study was to evaluate the influence of the spatial image resolution on the micro-CT based morphological analysis of three-dimensional (3D) open porous structures with a high surface complexity. In particular the correlation between the local surface properties and the accuracy of the micro-CT-based macro-morphology of 3D open porous Ti6Al4V structures produced by selective laser melting (SLM) was targeted and revealed for rough surfaces a strong dependence of the resulting structure characteristics on the scan resolution. Reducing the surface complexity by chemical etching decreased the sensitivity of the overall morphological analysis to the spatial image resolution and increased the detection limit. This study showed that scan settings and image processing parameters need to be customized to the material properties, morphological parameters under investigation and the desired final characteristics (in relation to the intended functional use). Customization of the scan resolution can increase the reliability of the micro-CT based analysis and at the same time reduce its operating costs. 
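The resolution dependence the study quantifies comes largely from the partial volume effect. It can be illustrated numerically by voxelizing an ideal sphere at two voxel sizes and comparing the recovered volume with the analytical value. This is a generic midpoint-classification sketch in pure Python, not the segmentation pipeline used in the paper.

```python
import math

def voxelized_volume(radius, voxel, half_extent):
    """Count voxels whose centre lies inside a sphere; return their total volume."""
    n = int(round(2 * half_extent / voxel))  # voxels per axis
    centres = [-half_extent + (i + 0.5) * voxel for i in range(n)]
    inside = 0
    r2 = radius * radius
    for x in centres:
        for y in centres:
            for z in centres:
                if x * x + y * y + z * z <= r2:
                    inside += 1
    return inside * voxel ** 3

true_volume = 4.0 / 3.0 * math.pi           # unit sphere
coarse = voxelized_volume(1.0, 0.3, 1.2)    # large voxels: strong partial volume effect
fine = voxelized_volume(1.0, 0.05, 1.2)     # small voxels: closer to the true morphology

err_coarse = abs(coarse - true_volume) / true_volume
err_fine = abs(fine - true_volume) / true_volume
print(f"relative volume error: coarse {err_coarse:.3%}, fine {err_fine:.3%}")
```

For a smooth sphere the error shrinks quickly with voxel size; for the rough SLM surfaces of the study the degradation with coarser scanning is much stronger, which is why the authors argue for customizing scan resolution to surface complexity.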

  7. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
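The economy of the DUA approach, propagating parameter distributions through derivatives instead of brute-force sampling, can be sketched on a toy flow model. The Darcy-type function, parameter values, and tolerances below are illustrative assumptions, not the paper's borehole model, and a central finite difference stands in for the derivatives a code-calculus tool like GRESS/ADGEN would supply.

```python
import math
import random

def flow(K, A, dh, L):
    """Toy Darcy-type flow rate through a column (illustrative only)."""
    return K * A * dh / L

# (mean, standard deviation) for each parameter -- assumed values
params = {"K": (1e-4, 5e-6), "A": (2.0, 0.1), "dh": (10.0, 0.5), "L": (5.0, 0.2)}
mu = {name: v[0] for name, v in params.items()}

# Deterministic first-order propagation: var(y) ~ sum (df/dx_i)^2 * sigma_i^2
var_first_order = 0.0
for name, (m, s) in params.items():
    hi = dict(mu); hi[name] = m + 1e-6 * m
    lo = dict(mu); lo[name] = m - 1e-6 * m
    dfdx = (flow(**hi) - flow(**lo)) / (2e-6 * m)
    var_first_order += (dfdx * s) ** 2
sigma_dua = math.sqrt(var_first_order)

# Reference: plain Monte Carlo, requiring many model executions
random.seed(1)
samples = [flow(**{name: random.gauss(*v) for name, v in params.items()})
           for _ in range(20000)]
m_mc = sum(samples) / len(samples)
sigma_mc = math.sqrt(sum((x - m_mc) ** 2 for x in samples) / (len(samples) - 1))
print(f"sigma (derivative-based) = {sigma_dua:.3e}, sigma (Monte Carlo) = {sigma_mc:.3e}")
```

The derivative route needs only a handful of model evaluations, which is the economy the paper reports (two executions versus fifty in the statistical case).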

  8. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems
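GRESS instruments FORTRAN source so that derivatives are carried along with the original arithmetic. The same idea can be shown in miniature with forward-mode dual numbers; this is a generic illustration of derivative propagation, not the GRESS implementation.

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries f and df/dk through each operation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

def dexp(x):
    """exp() lifted to dual numbers (chain rule)."""
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

def model(k):
    # Stand-in for a line of physics inside an instrumented code: 3k^2 + exp(k)
    return 3.0 * k * k + dexp(k)

k = Dual(1.5, 1.0)          # seed derivative dk/dk = 1
y = model(k)
print(f"y = {y.val:.6f}, dy/dk = {y.der:.6f}")
```

Every arithmetic operation updates the derivative alongside the value, which is exactly what the compiler-generated code does for each FORTRAN statement, without the hand-derived adjoint equations.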

  9. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab

  10. High sensitive detection of copper II ions using D-penicillamine-coated gold nanorods based on localized surface plasmon resonance

    Science.gov (United States)

    Hong, Yoochan; Jo, Seongjae; Park, Joohyung; Park, Jinsung; Yang, Jaemoon

    2018-05-01

    In this paper, we describe the development of a nanoplasmonic biosensor based on the localized surface plasmon resonance (LSPR) effect that enables a sensitive and selective recognition of copper II ions. First, we fabricated the nanoplasmonics as LSPR substrates using gold nanorods (GNR) and the nano-adsorption method. The LSPR sensitivity of the nanoplasmonics was evaluated using various solvents with different refractive indexes. Subsequently, D-penicillamine (DPA)—a chelating agent of copper II ions—was conjugated to the surface of the GNR. The limit of detection (LOD) for the DPA-conjugated nanoplasmonics was 100 pM. Furthermore, selectivity tests were conducted using various divalent cations, and sensitivity tests were conducted on the nanoplasmonics under blood-like environments. Finally, the developed nanoplasmonic biosensor based on GNR shows great potential for the effective recognition of copper II ions, even in human blood conditions.
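A limit of detection like the one quoted is conventionally estimated from a calibration curve as three times the blank standard deviation divided by the slope. The sketch below applies that IUPAC-style rule to invented calibration numbers; the concentrations, peak shifts, and noise figure are assumptions, not data from the paper.

```python
# Illustrative 3*sigma/slope detection-limit estimate for an LSPR calibration
# curve; all numbers below are invented for illustration.
conc_nM = [0.1, 0.5, 1.0, 5.0, 10.0]        # Cu(II) concentration (nM)
shift_nm = [0.21, 1.05, 2.02, 9.9, 20.1]    # LSPR peak shift (nm)
sd_blank = 0.07                             # std. dev. of blank response (nm)

# Ordinary least-squares slope of the calibration line
n = len(conc_nM)
mx = sum(conc_nM) / n
my = sum(shift_nm) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc_nM, shift_nm))
         / sum((x - mx) ** 2 for x in conc_nM))

lod_nM = 3.0 * sd_blank / slope
print(f"slope = {slope:.3f} nm/nM, LOD = {lod_nM * 1000:.0f} pM")
```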

  11. Sensitivity analysis in Gaussian Bayesian networks using a symbolic-numerical technique

    International Nuclear Information System (INIS)

    Castillo, Enrique; Kjaerulff, Uffe

    2003-01-01

    The paper discusses the problem of sensitivity analysis in Gaussian Bayesian networks. The algebraic structure of the conditional means and variances, as rational functions involving linear and quadratic functions of the parameters, are used to simplify the sensitivity analysis. In particular the probabilities of conditional variables exceeding given values and related probabilities are analyzed. Two examples of application are used to illustrate all the concepts and methods
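The algebraic structure the paper exploits is visible already in a two-node network: the conditional mean is linear in the evidence and the conditional variance does not depend on it. A minimal sketch with assumed toy numbers (Y = 2E + unit noise, E standard normal):

```python
# Conditioning in a two-node Gaussian Bayesian network (Y, E).
# Joint parameters for Y = 2E + noise, E ~ N(0,1), noise variance 1 (toy values).
mu_e, mu_y = 0.0, 0.0
var_e = 1.0
cov_ye = 2.0            # Cov(Y, E)
var_y = 5.0             # Var(Y) = 2^2 * 1 + 1

def conditional(e):
    """Mean and variance of Y given evidence E = e (standard Gaussian conditioning)."""
    mean = mu_y + cov_ye / var_e * (e - mu_e)
    var = var_y - cov_ye ** 2 / var_e
    return mean, var

m1, v1 = conditional(1.5)
m2, v2 = conditional(-0.5)
print(f"E=1.5  -> Y|E ~ N({m1}, {v1});  E=-0.5 -> Y|E ~ N({m2}, {v2})")
```

Perturbing a parameter such as cov_ye changes both quantities as a rational function of the perturbation, which is what makes the symbolic-numerical sensitivity analysis of the paper tractable.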

  12. Deterministic Local Sensitivity Analysis of Augmented Systems - I: Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan G.; Ionescu-Bujor, Mihaela

    2005-01-01

    This work provides the theoretical foundation for the modular implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for large-scale simulation systems. The implementation of the ASAP commences with a selected code module and then proceeds by augmenting the size of the adjoint sensitivity system, module by module, until the entire system is completed. Notably, the adjoint sensitivity system for the augmented system can often be solved by using the same numerical methods used for solving the original, nonaugmented adjoint system, particularly when the matrix representation of the adjoint operator for the augmented system can be inverted by partitioning

  13. First order sensitivity analysis of flexible multibody systems using absolute nodal coordinate formulation

    International Nuclear Information System (INIS)

    Pi Ting; Zhang Yunqing; Chen Liping

    2012-01-01

    Design sensitivity analysis of flexible multibody systems is important in optimizing the performance of mechanical systems. The choice of coordinates used to describe the motion of multibody systems has a great influence on the efficiency and accuracy of both the dynamic and the sensitivity analysis. In flexible multibody system dynamics, both the floating frame of reference formulation (FFRF) and the absolute nodal coordinate formulation (ANCF) are frequently used to describe flexibility; however, only the former has been used in design sensitivity analysis. In this article, ANCF, which has been developed recently and focuses on the modeling of beams and plates in large-deformation problems, is extended to design sensitivity analysis of flexible multibody systems. The equations of motion of a constrained flexible multibody system are expressed as a set of index-3 differential algebraic equations (DAEs), in which the element elastic forces are defined using nonlinear strain-displacement relations. Both the direct differentiation method and the adjoint variable method are used to perform the sensitivity analysis, and the related dynamic and sensitivity equations are integrated with the HHT-I3 algorithm. In this paper, a new method to deduce the system sensitivity equations is proposed. With this approach, the system sensitivity equations are constructed by assembling the element sensitivity equations with the help of invariant matrices, with the advantage that complex symbolic differentiation of the dynamic equations is avoided when the flexible multibody system model is changed. Moreover, the dynamic and sensitivity equations formed with the proposed method can be efficiently integrated using the HHT-I3 method, which makes the efficiency of the direct differentiation method comparable to that of the adjoint variable method when the number of design variables is not extremely large. All these improvements greatly enhance the application value of the direct differentiation
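The direct differentiation method mentioned above is easiest to see in its simplest setting: augment a scalar decay equation dy/dt = -ky with its sensitivity equation ds/dt = -y - ks (obtained by differentiating the state equation with respect to k) and integrate both together. This is the principle only; the paper's ANCF/DAE machinery is far richer.

```python
import math

# Direct differentiation on dy/dt = -k*y: the sensitivity s = dy/dk obeys
# ds/dt = -y - k*s. Forward Euler is used for brevity (assumed toy setup).
k, y0, t_end, dt = 0.8, 1.0, 2.0, 1e-4

y, s, t = y0, 0.0, 0.0
while t < t_end - 1e-12:
    dy = -k * y
    ds = -y - k * s          # sensitivity equation integrated alongside the state
    y += dt * dy
    s += dt * ds
    t += dt

exact_s = -t_end * y0 * math.exp(-k * t_end)   # d/dk of y0*exp(-k*t)
print(f"numeric dy/dk = {s:.6f}, analytic = {exact_s:.6f}")
```

The sensitivity system shares the structure (here, the coefficient -k) of the state system, which is why the same integrator can advance both, the property the paper exploits with HHT-I3.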

  14. Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.

    Science.gov (United States)

    Gul, R; Bernhard, S

    2015-11-01

    In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods, Sobol, FAST and a sparse grid stochastic collocation technique based on the Smolyak algorithm, were applied to a lumped parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location and temporal dependent sensitivities were also discussed to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivities on pressure and flow at all locations of the carotid bifurcation. Results of network location and temporal variabilities revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. Copyright © 2015 Elsevier Inc. All rights reserved.
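Variance-based first-order indices of the Sobol type can be estimated with a pick-and-freeze (Saltelli-style) scheme. The standalone sketch below uses a linear test model with uniform inputs, for which the indices are known analytically (a_i^2 / sum a_j^2); it is not the cardiovascular model, and the coefficients and sample size are assumptions.

```python
import random

random.seed(42)
a = [4.0, 2.0, 1.0]          # coefficients of a linear test model (assumed)
d, n = len(a), 20000

def f(x):
    return sum(ai * xi for ai, xi in zip(a, x))

# Two independent sample matrices (pick-and-freeze scheme)
A = [[random.random() for _ in range(d)] for _ in range(n)]
B = [[random.random() for _ in range(d)] for _ in range(n)]
fA = [f(x) for x in A]
fB = [f(x) for x in B]

mean = sum(fA + fB) / (2 * n)
var = sum((v - mean) ** 2 for v in fA + fB) / (2 * n - 1)

S1 = []
for i in range(d):
    # Evaluate A with column i swapped in from B, then the Saltelli estimator
    fABi = [f(row[:i] + [B[j][i]] + row[i + 1:]) for j, row in enumerate(A)]
    Vi = sum(fB[j] * (fABi[j] - fA[j]) for j in range(n)) / n
    S1.append(Vi / var)

analytic = [ai ** 2 / sum(aj ** 2 for aj in a) for ai in a]
print("estimated S1:", [round(s, 3) for s in S1])
print("analytic  S1:", [round(s, 3) for s in analytic])
```

The estimated indices rank the inputs by their contribution to output variance, the same ranking-and-factor-fixing logic applied to R, d, l, C and L in the study.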

  15. Assessing modeled Greenland surface mass balance in the GISS Model E2 and its sensitivity to surface albedo

    Science.gov (United States)

    Alexander, Patrick; LeGrande, Allegra N.; Koenig, Lora S.; Tedesco, Marco; Moustafa, Samiah E.; Ivanoff, Alvaro; Fischer, Robert P.; Fettweis, Xavier

    2016-04-01

    The surface mass balance (SMB) of the Greenland Ice Sheet (GrIS) plays an important role in global sea level change. Regional Climate Models (RCMs) such as the Modèle Atmosphérique Régionale (MAR) have been employed at high spatial resolution with relatively complex physics to simulate ice sheet SMB. Global climate models (GCMs) incorporate less sophisticated physical schemes and provide outputs at a lower spatial resolution, but have the advantage of modeling the interaction between different components of the earth's oceans, climate, and land surface at a global scale. Improving the ability of GCMs to represent ice sheet SMB is important for making predictions of future changes in global sea level. With the ultimate goal of improving SMB simulated by the Goddard Institute for Space Studies (GISS) Model E2 GCM, we compare simulated GrIS SMB against the outputs of the MAR model and radar-derived estimates of snow accumulation. In order to reproduce present-day climate variability in the Model E2 simulation, winds are constrained to match the reanalysis datasets used to force MAR at the lateral boundaries. We conduct a preliminary assessment of the sensitivity of the simulated Model E2 SMB to surface albedo, a parameter that is known to strongly influence SMB. Model E2 albedo is set to a fixed value of 0.8 over the entire ice sheet in the initial configuration of the model (control case). We adjust this fixed value in an ensemble of simulations over a range of 0.4 to 0.8 (roughly the range of observed summer GrIS albedo values) to examine the sensitivity of ice-sheet-wide SMB to albedo. We prescribe albedo from the Moderate Resolution Imaging Spectroradiometer (MODIS) MCD43A3 v6 product to examine the impact of more realistic spatial and temporal variations in albedo. An age-dependent snow albedo parameterization is applied, and its impact on SMB relative to observations and the RCM is assessed.

  16. Nuclear data sensitivity/uncertainty analysis for XT-ADS

    International Nuclear Information System (INIS)

    Sugawara, Takanori; Sarotto, Massimo; Stankovskiy, Alexey; Van den Eynde, Gert

    2011-01-01

    Highlights: → The sensitivity and uncertainty analyses were performed to comprehend the reliability of the XT-ADS neutronic design. → The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. → When the target accuracy of 0.3%Δk for the criticality was considered, the uncertainties did not satisfy it. → To achieve this accuracy, the uncertainties should be improved by experiments under adequate conditions. - Abstract: The XT-ADS, an accelerator-driven system for an experimental demonstration, has been investigated in the framework of the IP EUROTRANS FP6 project. In this study, the sensitivity and uncertainty analyses were performed to comprehend the reliability of the XT-ADS neutronic design. For the sensitivity analysis, it was found that the sensitivity coefficients differed significantly depending on the geometry models and calculation codes used. For the uncertainty analysis, it was confirmed that the uncertainties varied significantly with the covariance data used. The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. When the target accuracy of 0.3%Δk for the criticality was considered, the uncertainties did not satisfy it. To achieve this accuracy, the uncertainties should be improved by experiments under adequate conditions.
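Uncertainties "deduced from covariance data" combine sensitivity coefficients with a nuclear-data covariance matrix through the sandwich rule, var(k) = S^T C S. The few-group sensitivities and relative covariances below are invented for illustration, not XT-ADS data.

```python
import math

# Sandwich rule: relative variance of k from sensitivities S and the relative
# covariance matrix C of the cross-section data (3 groups, invented numbers).
S = [0.30, 0.45, 0.15]                      # relative sensitivities (dk/k)/(dsigma/sigma)
C = [[4.0e-4, 1.0e-4, 0.0],
     [1.0e-4, 9.0e-4, 2.0e-4],
     [0.0,    2.0e-4, 1.6e-3]]

var_k = sum(S[i] * C[i][j] * S[j] for i in range(3) for j in range(3))
dk_over_k = math.sqrt(var_k)
print(f"relative uncertainty on k: {dk_over_k:.3%}")
```

The off-diagonal covariances can either inflate or cancel contributions, which is why the quoted uncertainty changes so much between covariance libraries.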

  17. Validation and sensitivity tests on improved parametrizations of a land surface process model (LSPM) in the Po Valley

    International Nuclear Information System (INIS)

    Cassardo, C.; Carena, E.; Longhetto, A.

    1998-01-01

    The Land Surface Process Model (LSPM) has been improved with respect to the first version of 1994. The modifications involve the parametrizations of the radiation terms and of the turbulent heat fluxes. A parametrization of runoff has also been developed, in order to close the hydrologic balance. This second version of LSPM has been validated against experimental data gathered at Mottarone (Verbania, Northern Italy) during a field experiment. The results of this validation show that the new version is able to apportion the energy into sensible and latent heat fluxes. LSPM has also been submitted to a series of sensitivity tests in order to investigate the hydrological part of the model. The physical quantities selected in these sensitivity experiments were the initial soil moisture content and the rainfall intensity. In each experiment, the model was forced with the observations carried out at the synoptic stations of San Pietro Capofiume (Po Valley, Italy). The observed characteristics of soil and vegetation (not involved in the sensitivity tests) were used as initial and boundary conditions. The results of the simulations show that LSPM can reproduce well the energy, heat and water budgets and their behaviour as the selected parameters vary. A careful analysis of the LSPM output also shows the importance of identifying the effective soil type

  18. Sensitivity analysis and optimization algorithms for 3D forging process design

    International Nuclear Information System (INIS)

    Do, T.T.; Fourment, L.; Laroussi, M.

    2004-01-01

    This paper presents several approaches for preform shape optimization in 3D forging. The process simulation is carried out using the FORGE3® finite element software, and the optimization problem concerns the shape of initial axisymmetrical preforms. Several objective functions are considered, such as the forging energy, the forging force or a surface defect criterion. Both deterministic and stochastic optimization algorithms are tested for 3D applications. The deterministic approach uses the sensitivity analysis that provides the gradient of the objective function, obtained by the adjoint-state method and semi-analytical differentiation. The study of stochastic approaches aims at comparing genetic algorithms and evolution strategies. Numerical results show the feasibility of such approaches, i.e. the achievement of satisfactory solutions within a limited number of 3D simulations, less than fifty. For a more industrial problem, the forging of a gear, encouraging optimization results are obtained

  19. Sensitivity Analysis of FEAST-Metal Fuel Performance Code: Initial Results

    International Nuclear Information System (INIS)

    Edelmann, Paul Guy; Williams, Brian J.; Unal, Cetin; Yacout, Abdellatif

    2012-01-01

    This memo documents the completion of the LANL milestone, M3FT-12LA0202041, describing methodologies and initial results using FEAST-Metal. The FEAST-Metal code calculations for this work are being conducted at LANL in support of on-going activities related to sensitivity analysis of fuel performance codes. The objective is to identify important macroscopic parameters of interest to modeling and simulation of metallic fuel performance. This report summarizes our preliminary results for the sensitivity analysis using 6 calibration datasets for metallic fuel developed at ANL for EBR-II experiments. Sensitivity ranking methodology was deployed to narrow down the selected parameters for the current study. There are approximately 84 calibration parameters in the FEAST-Metal code, of which 32 were ultimately used in Phase II of this study. Preliminary results of this sensitivity analysis led to the following ranking of FEAST models for future calibration and improvements: fuel conductivity, fission gas transport/release, fuel creep, and precipitation kinetics. More validation data is needed to validate calibrated parameter distributions for future uncertainty quantification studies with FEAST-Metal. Results of this study also served to point out some code deficiencies and possible errors, and these are being investigated in order to determine root causes and to improve upon the existing code models.

  20. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    Science.gov (United States)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite element based engineering practice.
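The moment-based reliability idea underlying such methodologies can be sketched with a first-order second-moment (FOSM) calculation on a strength-minus-load limit state, g = R - S. The means and standard deviations are assumed toy values, not CMC data, and for normal R and S the resulting failure probability is exact.

```python
import math

# FOSM reliability for g = R - S (strength minus load); toy values assumed.
mu_R, sd_R = 500.0, 40.0     # component strength
mu_S, sd_S = 350.0, 30.0     # applied load

mu_g = mu_R - mu_S
sd_g = math.sqrt(sd_R ** 2 + sd_S ** 2)
beta = mu_g / sd_g                              # reliability index

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

pf = phi(-beta)                                 # failure probability

# Moment sensitivities: how beta responds to each distribution parameter
dbeta_dmuR = 1.0 / sd_g
dbeta_dsdR = -mu_g * sd_R / sd_g ** 3
print(f"beta = {beta:.3f}, Pf = {pf:.2e}, dbeta/dmu_R = {dbeta_dmuR:.4f}")
```

Coupling such sensitivities of the reliability index to design variables is what turns a reliability analysis into the reliability-sensitivity criterion used inside the optimization loop.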

  1. A survey of cross-section sensitivity analysis as applied to radiation shielding

    International Nuclear Information System (INIS)

    Goldstein, H.

    1977-01-01

    Cross section sensitivity studies revolve around finding the change in the value of an integral quantity, e.g. transmitted dose, for a given change in one of the cross sections. A review is given of the principal methodologies for obtaining the sensitivity profiles, principally direct calculations with altered cross sections and linear perturbation theory. Some of the varied applications of cross section sensitivity analysis are described, including the practice, of questionable value, of adjusting input cross section data sets so as to provide agreement with integral experiments. Finally, a plea is made for using cross section sensitivity analysis as a powerful tool for analysing the transport mechanisms of particles in radiation shields and for constructing models of how cross section phenomena affect the transport. Cross section sensitivities in the shielding area have proved to be highly problem-dependent. Without the understanding afforded by such models, it is impossible to extrapolate the conclusions of cross section sensitivity analysis beyond the narrow limits of the specific situations examined in detail. Some of the elements that might be of use in developing the qualitative models are presented. (orig.)
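The two methodologies the survey reviews, direct recalculation with altered cross sections and linear perturbation theory, can be compared on the simplest transmission model, D = D0·exp(-Σt), whose relative sensitivity profile is (Σ/D)·dD/dΣ = -Σt. The source, cross section, and thickness below are assumed numbers.

```python
import math

# Transmitted dose through a slab: D = D0 * exp(-sigma * t) (assumed toy values).
D0, sigma, t = 1.0, 0.5, 6.0     # source, macroscopic XS (1/cm), thickness (cm)

def dose(sig):
    return D0 * math.exp(-sig * t)

# Linear perturbation theory: relative sensitivity coefficient
rel_sens = -sigma * t

for pct in (0.01, 0.10):
    exact = (dose(sigma * (1 + pct)) - dose(sigma)) / dose(sigma)  # direct recalculation
    linear = rel_sens * pct                                        # perturbation estimate
    print(f"dSigma/Sigma = {pct:>4.0%}: exact dD/D = {exact:+.4f}, linear = {linear:+.4f}")
```

At a 1% perturbation the linear estimate is excellent; at 10% it overshoots noticeably because the three optical depths (Σt = 3) amplify the nonlinearity. This dependence of the sensitivity on the problem configuration is exactly the problem-dependence the survey emphasizes.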

  2. Sensitivity of surface radiation budget to clouds over the Asian ...

    Indian Academy of Sciences (India)

    National Climate Centre, India Meteorological Department, Pune 400 005. ... down on the earth surface–atmosphere system also as an imbalance between surface netcloud ... the clouds produce more cooling effect in short-wave band than the warming effect in long-wave .... In the present study, we use the analysis method.

  3. A framework for sensitivity analysis of decision trees.

    Science.gov (United States)

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
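The stability question the framework addresses, over which range of probabilities the expected-value-maximizing strategy survives, can be sketched on a two-action tree. The payoffs and the nominal probability below are toy assumptions, not an example from the paper.

```python
# Two-action decision problem: "safe" pays a sure 50; "risky" pays 100 with
# probability p and 0 otherwise (toy numbers).
def best_action(p):
    ev_safe = 50.0
    ev_risky = p * 100.0 + (1 - p) * 0.0
    return "risky" if ev_risky > ev_safe else "safe"

# Scan the probability and locate where the optimal strategy flips
grid = [i / 1000 for i in range(1001)]
flip = next(p for p in grid if best_action(p) == "risky")
print(f"optimal strategy switches to 'risky' at p = {flip}")

# Robustness check around a nominal estimate p = 0.6 with +/-0.05 uncertainty
stable = all(best_action(0.6 + d) == "risky" for d in (-0.05, 0.0, 0.05))
print("strategy robust on [0.55, 0.65]:", stable)
```

In the paper's terms this probes pessimistic and optimistic perturbations of a single probability; the interrelated-probabilities case (b) constrains how the perturbations may move together.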

  4. Analytical sensitivity analysis of geometric errors in a three axis machine tool

    International Nuclear Information System (INIS)

    Park, Sung Ryung; Yang, Seung Han

    2012-01-01

    In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and then an output variable is defined, such as the magnitude of the position volumetric error. Next, the global sensitivity analysis is executed using an analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors

  5. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed

  6. Aggregation of Individual Sensing Units for Signal Accumulation: Conversion of Liquid-Phase Colorimetric Assay into Enhanced Surface-Tethered Electrochemical Analysis.

    Science.gov (United States)

    Wei, Tianxiang; Dong, Tingting; Wang, Zhaoyin; Bao, Jianchun; Tu, Wenwen; Dai, Zhihui

    2015-07-22

    A novel concept is proposed for converting liquid-phase colorimetric assay into enhanced surface-tethered electrochemical analysis, which is based on the analyte-induced formation of a network architecture of metal nanoparticles (MNs). In a proof-of-concept trial, thymine-functionalized silver nanoparticle (Ag-T) is designed as the sensing unit for Hg(2+) determination. Through a specific T-Hg(2+)-T coordination, the validation system based on functionalized sensing units not only can perform well in a colorimetric Hg(2+) assay, but also can be developed into a more sensitive and stable electrochemical Hg(2+) sensor. In electrochemical analysis, the simple principle of analyte-induced aggregation of MNs can be used as a dual signal amplification strategy for significantly improving the detection sensitivity. More importantly, those numerous and diverse colorimetric assays that rely on the target-induced aggregation of MNs can be augmented to satisfy the ambitious demands of sensitive analysis by converting them into electrochemical assays via this approach.

  7. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    Science.gov (United States)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.

  8. Sensitivity of Pseudomonas fluorescens to gamma irradiation following surface inoculations on romaine lettuce and baby spinach

    Science.gov (United States)

    Irradiation of fresh fruits and vegetables is a post-harvest intervention measure often used to inactivate pathogenic food-borne microbes. We evaluated the sensitivity of Pseudomonas fluorescens strains (2-79, Q8R1, Q287) to gamma irradiation following surface inoculations on romaine lettuce and spi...

  9. Passivation of nanocrystalline TiO2 junctions by surface adsorbed phosphinate amphiphiles enhances the photovoltaic performance of dye sensitized solar cells

    KAUST Repository

    Wang, Mingkui

    2009-01-01

    We report a new class of molecular insulators that electronically passivate the surface of nanocrystalline titania films for high performance dye sensitized solar cells (DSC). Using electrical impedance measurements we demonstrate that co-adsorption of dineohexyl bis-(3,3-dimethyl-butyl)-phosphinic acid (DINHOP), along with the amphiphilic ruthenium sensitizer Z907Na increased substantially the power output of the cells mainly due to a retardation of interfacial recombination of photo-generated charge carriers. The use of phosphinates as anchoring groups opens up new avenues for modification of the surface by molecular insulators, sensitizers and other electro-active molecules to realize the desired optoelectronic performance of devices based on oxide junctions. © 2009 The Royal Society of Chemistry.

  10. A sensitive and quantitative biosensing method for the determination of γ-ray emitting radionuclides in surface water

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th.; Meer, A.J.G.M. van der

    1996-01-01

    A quantitative and sensitive biosensing method has been developed for the determination of γ-ray emitting radionuclides in surface water. The method is based on the concept that at equilibrium the specific radioactivity in the biosensor is equal to the specific radioactivity in water. The method consists of the measurement of both the radionuclide and the element in the biosensor and the determination of the element level in water. This three-way analysis eliminates problems such as unpredictable biosensor behaviour, effects of water elemental composition or further abiotic parameters: what remains is the generally high enrichment (bioaccumulation factor BCF) of elements and radionuclides in the biosensor material. Measurements were performed with floating water plants (Azolla filiculoides Lamk., Spirodela polyrhiza/Lemna sp.) and the fully submerged water plant Ceratophyllum demersum L., which were sampled from ditch water. Concentrations of elements and radionuclides were determined in both water and biosensor plants, using Neutron Activation Analysis (NAA), ICP-MS, and γ-ray spectrometry, respectively. For the latter, both 1 litre samples (Marinelli-geometry) and 1 cm{sup 3} samples (well-type detectors) were applied in measurements. (author)
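    The three-way analysis described above reduces to a single ratio: since at equilibrium the specific activity (Bq per unit of element) in the biosensor equals that in the water, the water activity follows from three measured quantities without knowing the bioaccumulation factor. A minimal sketch with illustrative numbers, not values from the paper:

```python
def radionuclide_in_water(activity_biosensor_bq_per_kg,
                          element_biosensor_mg_per_kg,
                          element_water_mg_per_l):
    """At equilibrium the specific activity (Bq per mg of element) in the
    biosensor equals that in the water, so the water radionuclide level
    follows from three measurements; the BCF cancels out."""
    specific_activity = activity_biosensor_bq_per_kg / element_biosensor_mg_per_kg
    return specific_activity * element_water_mg_per_l  # Bq per litre

# Hypothetical example: 500 Bq/kg and 40 mg/kg in the plant, 0.02 mg/L in water
a = radionuclide_in_water(500.0, 40.0, 0.02)
print(round(a, 3))  # 0.25  (Bq/L)
```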

  11. Global sensitivity analysis applied to drying models for one or a population of granules

    DEFF Research Database (Denmark)

    Mortier, Severine Therese F. C.; Gernaey, Krist; Thomas, De Beer

    2014-01-01

    The development of mechanistic models for pharmaceutical processes is of increasing importance due to a noticeable shift toward continuous production in the industry. Sensitivity analysis is a powerful tool during the model building process. A global sensitivity analysis (GSA), exploring sensitivity in a broad parameter space, is performed to detect the most sensitive factors in two models, that is, one for drying of a single granule and one for the drying of a population of granules [using a population balance model (PBM)], which was extended by including the gas velocity as an extra input compared to our earlier work. beta(2) was found to be the most important factor for the single-particle model, which is useful information when performing model calibration. For the PBM model, the granule radius and gas temperature were found to be most sensitive. The former indicates that granulator...

  12. Inverse analysis of inner surface temperature history from outer surface temperature measurement of a pipe

    International Nuclear Information System (INIS)

    Kubo, S; Ioka, S; Onchi, S; Matsumoto, Y

    2010-01-01

    When slug flow runs through a pipe, nonuniform and time-varying thermal stresses develop and there is a possibility that thermal fatigue occurs. Therefore it is necessary to know the temperature distributions and the stress distributions in the pipe for the integrity assessment of the pipe. It is, however, difficult to measure the inner surface temperature directly. Therefore establishment of the estimation method of the temperature history on inner surface of pipe is needed. As a basic study on the estimation method of the temperature history on the inner surface of a pipe with slug flow, this paper presents an estimation method of the temperature on the inner surface of a plate from the temperature on the outer surface. The relationship between the temperature history on the outer surface and the inner surface is obtained analytically. Using the results of the mathematical analysis, the inverse analysis method of the inner surface temperature history estimation from the outer surface temperature history is proposed. It is found that the inner surface temperature history can be estimated from the outer surface temperature history by applying the inverse analysis method, even when it is expressed by the multiple frequency components.
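    The per-frequency inversion can be illustrated with the classical one-dimensional thermal-wave attenuation, a simplification of the plate transfer function derived in the paper; the diffusivity, thickness, period, and amplitude below are illustrative assumptions:

```python
import math

def estimate_inner_amplitude(outer_amplitude, omega, thickness, diffusivity):
    """Invert the amplitude attenuation of one sinusoidal component using the
    one-dimensional thermal-wave approximation T_outer = T_inner * exp(-k L),
    with k = sqrt(omega / (2 alpha)); a simplification of the full plate
    transfer function in the paper."""
    k = math.sqrt(omega / (2.0 * diffusivity))
    return outer_amplitude * math.exp(k * thickness)

def phase_lag(omega, thickness, diffusivity):
    """Phase delay (radians) of the same component across the wall."""
    return thickness * math.sqrt(omega / (2.0 * diffusivity))

# Round trip: attenuate an assumed inner amplitude, then recover it.
alpha = 4e-6                 # m^2/s, thermal diffusivity (steel-like, illustrative)
L = 0.01                     # m, wall thickness
omega = 2 * math.pi / 10.0   # rad/s, a 10 s slug-flow oscillation
inner = 20.0                 # K, assumed inner-surface amplitude
outer = inner * math.exp(-math.sqrt(omega / (2 * alpha)) * L)
recovered = estimate_inner_amplitude(outer, omega, L, alpha)
print(round(recovered, 6))  # 20.0
```

    A full inversion applies this component by component to the Fourier expansion of the outer-surface history; because the inverse step amplifies each component by exp(kL), measurement noise limits the usable bandwidth.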

  13. Analysis of Pseudomonas aeruginosa cell envelope proteome by capture of surface-exposed proteins on activated magnetic nanoparticles.

    Directory of Open Access Journals (Sweden)

    Davide Vecchietti

    Full Text Available We report on specific magneto-capturing followed by Multidimensional Protein Identification Technology (MudPIT for the analysis of surface-exposed proteins of intact cells of the bacterial opportunistic pathogen Pseudomonas aeruginosa. The magneto-separation of cell envelope fragments from the soluble cytoplasmic fraction allowed the MudPIT identification of the captured and neighboring proteins. Remarkably, we identified 63 proteins captured directly by nanoparticles and 67 proteins embedded in the cell envelope fragments. For a high number of proteins, our analysis strongly indicates either surface exposure or localization in an envelope district. The localization of most identified proteins was only predicted or totally unknown. This novel approach greatly improves the sensitivity and specificity of the previous methods, such as surface shaving with proteases that was also tested on P. aeruginosa. The magneto-capture procedure is simple, safe, and rapid, and appears to be well-suited for envelope studies in highly pathogenic bacteria.

  14. Analysis of Pseudomonas aeruginosa Cell Envelope Proteome by Capture of Surface-Exposed Proteins on Activated Magnetic Nanoparticles

    Science.gov (United States)

    Vecchietti, Davide; Di Silvestre, Dario; Miriani, Matteo; Bonomi, Francesco; Marengo, Mauro; Bragonzi, Alessandra; Cova, Lara; Franceschi, Eleonora; Mauri, Pierluigi; Bertoni, Giovanni

    2012-01-01

    We report on specific magneto-capturing followed by Multidimensional Protein Identification Technology (MudPIT) for the analysis of surface-exposed proteins of intact cells of the bacterial opportunistic pathogen Pseudomonas aeruginosa. The magneto-separation of cell envelope fragments from the soluble cytoplasmic fraction allowed the MudPIT identification of the captured and neighboring proteins. Remarkably, we identified 63 proteins captured directly by nanoparticles and 67 proteins embedded in the cell envelope fragments. For a high number of proteins, our analysis strongly indicates either surface exposure or localization in an envelope district. The localization of most identified proteins was only predicted or totally unknown. This novel approach greatly improves the sensitivity and specificity of the previous methods, such as surface shaving with proteases that was also tested on P. aeruginosa. The magneto-capture procedure is simple, safe, and rapid, and appears to be well-suited for envelope studies in highly pathogenic bacteria. PMID:23226459

  15. Uncertainty and sensitivity analysis in a Probabilistic Safety Analysis level-1

    International Nuclear Information System (INIS)

    Nunez Mc Leod, Jorge E.; Rivera, Selva S.

    1996-01-01

    A methodology for sensitivity and uncertainty analysis, applicable to a Probabilistic Safety Assessment Level I has been presented. The work contents are: correct association of distributions to parameters, importance and qualification of expert opinions, generations of samples according to sample sizes, and study of the relationships among system variables and systems response. A series of statistical-mathematical techniques are recommended along the development of the analysis methodology, as well as different graphical visualization for the control of the study. (author)

  16. Interannual variability and sensitivity analysis of manure-borne bacteria transport from irrigated fields.

    Science.gov (United States)

    Martinez, Gonzalo; Pachepsky, Yakov; Shelton, Daniel; Guber, Andrey; Yakirevich, Alexander; Dughtry, Craig; Goodrich, David

    2014-05-01

    Manure application has been implicated in deterioration of microbial quality of surface water utilized in recreation, irrigation, aquaculture, and various household- and agriculture-related processes. The model KINEROS2/STWIR has been recently developed for rainfall- or irrigation event-based simulations of manure-borne overland bacteria transport. Information on uncertainty in the model parameter values is essential for running sensitivity analysis, creating synthetic datasets, developing risk assessment projects, etc. The objective of this work was to analyze data obtained in multiple years when the status of soil surface, soil structure, and weed cover created palpably different conditions for overland microorganism transport. Experiments were carried out at the Beltsville USDA OPE3 site, which is a part of the Lower Chesapeake Long-term Agricultural Research Network Site. Manure was applied at typical Maryland rates and the two-hour irrigation was applied immediately after manure application and one week later. Escherichia coli and thermotolerant coliform concentrations in runoff and the bacteria contents in manure and soil before and after application were measured across the application area of about 100 m x 50 m on the 40-point grid. Bacteria contents in manure varied up to six orders of magnitude. No spatial structure in these contents was found at the support and spacing of this work. Parameter sets were substantially different for thermotolerant coliforms and E. coli. Bacteria adsorption and straining parameters varied by one order of magnitude over the three-year trials. Variability of Manning roughness coefficient, saturated hydraulic conductivity, net capillary drive, relative saturation, and solute dispersivity was substantially smaller. The hypothesis of applicability of uniform distributions to simulate the empirical distributions of the above parameters could not be rejected at the 0.05 significance level.
The Bradford-Schijven model was used to simulate

  17. Comparison of global sensitivity analysis techniques and importance measures in PSA

    International Nuclear Information System (INIS)

    Borgonovo, E.; Apostolakis, G.E.; Tarantola, S.; Saltelli, A.

    2003-01-01

    This paper discusses application and results of global sensitivity analysis techniques to probabilistic safety assessment (PSA) models, and their comparison to importance measures. This comparison allows one to understand whether PSA elements that are important to the risk, as revealed by importance measures, are also important contributors to the model uncertainty, as revealed by global sensitivity analysis. We show that, due to epistemic dependence, uncertainty and global sensitivity analysis of PSA models must be performed at the parameter level. A difficulty arises, since standard codes produce the calculations at the basic event level. We discuss both the indirect comparison through importance measures computed for basic events, and the direct comparison performed using the differential importance measure and the Fussell-Vesely importance at the parameter level. Results are discussed for the large LLOCA sequence of the advanced test reactor PSA

  18. Probabilistic sensitivity analysis of optimised preventive maintenance strategies for deteriorating infrastructure assets

    International Nuclear Information System (INIS)

    Daneshkhah, A.; Stocks, N.G.; Jeffrey, P.

    2017-01-01

    Efficient life-cycle management of civil infrastructure systems under continuous deterioration can be improved by studying the sensitivity of optimised preventive maintenance decisions with respect to changes in model parameters. Sensitivity analysis in maintenance optimisation problems is important because if the calculation of the cost of preventive maintenance strategies is not sufficiently robust, the use of the maintenance model can generate optimised maintenance strategies that are not cost-effective. Probabilistic sensitivity analysis methods (particularly variance-based ones) only partially respond to this issue and their use is limited to evaluating the extent to which uncertainty in each input contributes to the overall output's variance. These methods do not take account of the decision-making problem in a straightforward manner. To address this issue, we use the concept of the Expected Value of Perfect Information (EVPI) to perform decision-informed sensitivity analysis: to identify the key parameters of the problem and quantify the value of learning about certain aspects of the life-cycle management of civil infrastructure systems. This approach allows us to quantify the benefits of the maintenance strategies in terms of expected costs and in the light of accumulated information about the model parameters and aspects of the system, such as the ageing process. We use a Gamma process model to represent the uncertainty associated with asset deterioration, illustrating the use of EVPI to perform sensitivity analysis on the optimisation problem for age-based and condition-based preventive maintenance strategies. The evaluation of EVPI indices is computationally demanding and Markov Chain Monte Carlo techniques would not be helpful. To overcome this computational difficulty, we approximate the EVPI indices using Gaussian process emulators.
The implications of the worked numerical examples discussed in the context of analytical efficiency and organisational
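    The EVPI itself has a compact Monte Carlo form: the expected cost of deciding with perfect knowledge of the uncertain parameter versus deciding on expectations alone. A sketch with made-up cost functions and parameter distribution, not the Gamma-process deterioration model of the paper:

```python
import random

def evpi(cost_fns, theta_samples):
    """Expected Value of Perfect Information for a minimum-cost decision:
    EVPI = min_a E[C(a, theta)] - E[min_a C(a, theta)], always >= 0."""
    n = len(theta_samples)
    expected = [sum(c(t) for t in theta_samples) / n for c in cost_fns]
    expected_of_best = sum(min(c(t) for c in cost_fns)
                           for t in theta_samples) / n
    return min(expected) - expected_of_best

random.seed(1)
# theta: uncertain deterioration rate; two hypothetical strategies whose
# life-cycle costs cross over, so knowing theta changes the best choice.
thetas = [random.gauss(1.0, 0.3) for _ in range(20000)]
age_based = lambda t: 10.0 + 5.0 * t        # frequent preventive renewals
condition_based = lambda t: 4.0 + 11.0 * t  # cheap unless deterioration is fast
value = evpi([age_based, condition_based], thetas)
print(value >= 0.0)  # True
```

    A positive EVPI quantifies the most one should pay to learn theta before committing to a strategy; a value near zero means the decision is insensitive to that parameter.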

  19. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  20. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
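    One common variant of the generalised index described in these two records weights the ANOVA main-effect index of each principal-component score by that component's share of output variance. A self-contained sketch on a toy dynamic model; the model, factor ranges, and number of retained components are illustrative assumptions, not the wheat crop model:

```python
import numpy as np

# Toy dynamic model: y(t) = a * t + b * sin(t); factor a dominates the variance.
t = np.linspace(0, 10, 50)
levels = np.linspace(0.5, 1.5, 5)
runs, factors = [], []
for a in levels:          # full factorial design over the two factors
    for b in levels:
        runs.append(a * t + b * np.sin(t))
        factors.append((a, b))
Y = np.array(runs)                    # (n_runs, n_times)
Yc = Y - Y.mean(axis=0)

# PCA of the multivariate (time-series) output via thin SVD.
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
scores = U * s                        # PC scores per run
var_pc = s**2 / np.sum(s**2)          # variance share of each component

def main_effect_index(score, factor_values):
    """ANOVA-style main-effect index: variance of conditional means over total."""
    total = np.var(score)
    if total == 0:
        return 0.0
    means = [score[factor_values == v].mean() for v in np.unique(factor_values)]
    return np.var(means) / total

fa = np.array([f[0] for f in factors])
fb = np.array([f[1] for f in factors])
n_keep = 3                            # retained components (assumption)
gsi = {name: sum(var_pc[k] * main_effect_index(scores[:, k], fv)
                 for k in range(n_keep))
       for name, fv in (("a", fa), ("b", fb))}
print(gsi["a"] > gsi["b"])  # True: the trend factor dominates the dynamics
```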

  1. An investigation of the sensitivity of a land surface model to climate change using a reduced form model

    Energy Technology Data Exchange (ETDEWEB)

    Lynch, A.H.; McIlwaine, S. [PAOS/CIRES, Univ. of Colorado, Boulder, CO (United States); Beringer, J. [Inst. of Arctic Biology, Univ. of Alaska, Fairbanks (United States); Bonan, G.B. [National Center for Atmospheric Research, Boulder, CO (United States)

    2001-05-01

    In an illustration of a model evaluation methodology, a multivariate reduced form model is developed to evaluate the sensitivity of a land surface model to changes in atmospheric forcing. The reduced form model is constructed in terms of a set of ten integrative response metrics, including the timing of spring snow melt, sensible and latent heat fluxes in summer, and soil temperature. The responses are evaluated as a function of a selected set of six atmospheric forcing perturbations which are varied simultaneously, and hence each may be thought of as a six-dimensional response surface. The sensitivities of the land surface model are interdependent and in some cases illustrate a physically plausible feedback process. The important predictors of land surface response in a changing climate are the atmospheric temperature and downwelling longwave radiation. Scenarios characterized by warming and drying produce a large relative response compared to warm, moist scenarios. The insensitivity of the model to increases in precipitation and atmospheric humidity is expected to change in applications to coupled models, since these parameters are also strongly implicated, through the representation of clouds, in the simulation of both longwave and shortwave radiation. (orig.)

  2. Application of Sensitivity Analysis in Design of Sustainable Buildings

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik; Rasmussen, Henrik

    2009-01-01

    In the design of sustainable buildings, it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions that satisfy the design objectives and criteria. Sensitivity analyses make it possible to identify and influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for the design of an office building in Denmark.

  3. Sensitivity analysis of network DEA illustrated in branch banking

    OpenAIRE

    N. Avkiran

    2010-01-01

    Users of data envelopment analysis (DEA) often presume efficiency estimates to be robust. While traditional DEA has been exposed to various sensitivity studies, network DEA (NDEA) has so far escaped similar scrutiny. Thus, there is a need to investigate the sensitivity of NDEA, further compounded by the recent attention it has been receiving in literature. NDEA captures the underlying performance information found in a firm's interacting divisions or sub-processes that would otherwise remain ...

  4. Interdependencies of Arctic land surface processes: A uniquely sensitive environment

    Science.gov (United States)

    Bowling, L. C.

    2007-12-01

    The circumpolar arctic drainage basin is composed of several distinct ecoregions including steppe grassland and cropland, boreal forest and tundra. Land surface hydrology throughout this diverse region shares several unique features such as dramatic seasonal runoff differences controlled by snowmelt and ice break-up; the storage of significant portions of annual precipitation as snow and in lakes and wetlands; and the effects of ephemeral and permanently frozen soils. These arctic land processes are delicately balanced with the climate and are therefore important indicators of change. The litany of recently-detected changes in the Arctic includes changes in snow precipitation, trends and seasonal shifts in river discharge, increases and decreases in the extent of surface water, and warming soil temperatures. Although not unique to the arctic, increasing anthropogenic pressures represent an additional element of change in the form of resource extraction, fire threat and reservoir construction. The interdependence of the physical, biological and social systems means that changes in primary indicators have large implications for land cover, animal populations and the regional carbon balance, all of which have the potential to feed back and induce further change. In fact, the complex relationships between the hydrological processes that make the Arctic unique also render observed historical change difficult to interpret and predict, leading to conflicting explanations. For example, a decrease in snow accumulation may provide less insulation to the underlying soil resulting in greater frost development and increased spring runoff. Similarly, melting permafrost and ground ice may lead to ground subsidence and increased surface saturation and methane production, while more complete thaw may enhance drainage and result in drier soil conditions. The threshold nature of phase change around the freezing point makes the system especially sensitive to change.
In addition, spatial

  5. Influence of surface states of CuInS{sub 2} quantum dots in quantum dots sensitized photo-electrodes

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Zhuoyin; Liu, Yueli [State Key Laboratory of Advanced Technology for Materials Synthesis and Processing, School of Materials Science and Engineering, Wuhan University of Technology, Wuhan 430070 (China); Wu, Lei [School of Electronic and Electrical, Wuhan Railway Vocational College of Technology, Wuhan 430205 (China); Zhao, Yinghan; Chen, Keqiang [State Key Laboratory of Advanced Technology for Materials Synthesis and Processing, School of Materials Science and Engineering, Wuhan University of Technology, Wuhan 430070 (China); Chen, Wen, E-mail: chenw@whut.edu.cn [State Key Laboratory of Advanced Technology for Materials Synthesis and Processing, School of Materials Science and Engineering, Wuhan University of Technology, Wuhan 430070 (China)

    2016-12-01

    Graphical abstract: J–V curves of different ligands capped CuInS{sub 2} QDs sensitized TiO{sub 2} photo-electrodes. - Highlights: • DDT, OLA, MPA, and S{sup 2−} ligand capped CuInS{sub 2} quantum dot sensitized photo-electrodes are prepared. • Surface states of quantum dots greatly influence the electrochemical performance of CuInS{sub 2} quantum dot sensitized photo-electrodes. • S{sup 2−} ligand enhances the UV–vis absorption and electron–hole separation property as well as the excellent charge transfer performance of the photo-electrodes. - Abstract: Surface states are a significant factor in the enhancement of electrochemical performance in CuInS{sub 2} quantum dot sensitized photo-electrodes. DDT, OLA, MPA, and S{sup 2−} ligand capped CuInS{sub 2} quantum dot sensitized photo-electrodes are prepared by thermolysis, solvothermal and ligand-exchange processes, respectively, and their optical properties and photoelectrochemical properties are investigated. The S{sup 2−} ligand enhances the UV–vis absorption and electron–hole separation property as well as the excellent charge transfer performance of the photo-electrodes, which is attributed to the fact that the atomic S{sup 2−} ligand at the interfacial region of quantum dots may improve the electron transfer rate. These S{sup 2−}-capped CuInS{sub 2} quantum dot sensitized photo-electrodes exhibit excellent photoelectrochemical efficiency and IPCE peak values, higher than those of the samples with DDT, OLA and MPA ligands.

  6. Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization

    Directory of Open Access Journals (Sweden)

    Jianjun Tang

    2014-01-01

    Full Text Available Assembly precision optimization of complex product has a huge benefit in improving the quality of our products. Due to the impact of a variety of deviation source coupling phenomena, the goal of assembly precision optimization is difficult to be confirmed accurately. In order to achieve optimization of assembly precision accurately and rapidly, sensitivity analysis of deviation source is proposed. First, deviation source sensitivity is defined as the ratio of assembly dimension variation and deviation source dimension variation. Second, according to assembly constraint relations, assembly sequences and locating, deviation transmission paths are established by locating the joints between the adjacent parts, and establishing each part’s datum reference frame. Third, assembly multidimensional vector loops are created using deviation transmission paths, and the corresponding scalar equations of each dimension are established. Then, assembly deviation source sensitivity is calculated by using a first-order Taylor expansion and matrix transformation method. Finally, taking assembly precision optimization of wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.
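    The definition above, sensitivity as the ratio of assembly-dimension variation to deviation-source variation obtained from a first-order Taylor expansion, can be sketched numerically; the two-part stack-up below is a made-up example, not the wing flap rocker assembly of the paper:

```python
def assembly_gap(dims):
    # Closing dimension of a hypothetical loop: housing minus two stacked parts.
    return dims["housing"] - dims["part_a"] - dims["part_b"]

def sensitivities(f, nominal, h=1e-6):
    """First-order Taylor coefficients dA/dx_i, one per deviation source,
    estimated by forward finite differences about the nominal dimensions."""
    base = f(nominal)
    out = {}
    for name in nominal:
        bumped = dict(nominal)
        bumped[name] += h
        out[name] = (f(bumped) - base) / h
    return out

s = sensitivities(assembly_gap, {"housing": 50.0, "part_a": 30.0, "part_b": 19.5})
print({k: round(v, 3) for k, v in s.items()})
# {'housing': 1.0, 'part_a': -1.0, 'part_b': -1.0}
```

    In a real assembly the closing dimension comes from the vector-loop scalar equations, so the coefficients are generally not ±1 and depend on locating scheme and geometry.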

  7. Global sensitivity analysis using a Gaussian Radial Basis Function metamodel

    International Nuclear Information System (INIS)

    Wu, Zeping; Wang, Donghui; Okolo N, Patrick; Hu, Fan; Zhang, Weihua

    2016-01-01

    Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on response variables. Amongst the wide range of documented studies on sensitivity measures and analysis, Sobol' indices have received greater portion of attention due to the fact that they can provide accurate information for most models. In this paper, a novel analytical expression to compute the Sobol' indices is derived by introducing a method which uses the Gaussian Radial Basis Function to build metamodels of computationally expensive computer codes. Performance of the proposed method is validated against various analytical functions and also a structural simulation scenario. Results demonstrate that the proposed method is an efficient approach, requiring a computational cost of one to two orders of magnitude less when compared to the traditional Quasi Monte Carlo-based evaluation of Sobol' indices. - Highlights: • RBF based sensitivity analysis method is proposed. • Sobol' decomposition of Gaussian RBF metamodel is obtained. • Sobol' indices of Gaussian RBF metamodel are derived based on the decomposition. • The efficiency of proposed method is validated by some numerical examples.
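    For context, the Monte Carlo baseline that the metamodel approach above is compared against can be sketched directly; the estimator below is the standard Saltelli-style pick-freeze form, and the test function and sample size are illustrative assumptions:

```python
import random

def sobol_first_order(f, dim, n=50000, seed=7):
    """Monte Carlo estimate of first-order Sobol' indices on [0,1]^dim using
    the pick-freeze estimator S_i = E[f(B)(f(AB_i) - f(A))] / Var[f].
    This is the brute-force evaluation that metamodel methods aim to make
    one to two orders of magnitude cheaper."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(p) for p in A]
    fB = [f(p) for p in B]
    mean = sum(fA) / n
    var = sum(v * v for v in fA) / n - mean * mean
    S = []
    for i in range(dim):
        # AB_i: sample A with its i-th coordinate taken from B.
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(fb * (fab - fa)
                     for fa, fb, fab in zip(fA, fB, fABi)) / n / var)
    return S

# Additive test function: analytic first-order indices are 0.9 and 0.1.
S = sobol_first_order(lambda x: 3.0 * x[0] + x[1], dim=2)
print([round(v, 2) for v in S])
```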

  8. Sensitivity Analysis of Structures by Virtual Distortion Method

    DEFF Research Database (Denmark)

    Gierlinski, J.T.; Holnicki-Szulc, J.; Sørensen, John Dalsgaard

    1991-01-01

    Sensitivity derivatives are used in structural optimization, see Haftka [4]. The recently developed Virtual Distortion Method (VDM) is a numerical technique which offers an efficient approach to the calculation of sensitivity derivatives. This method was originally applied to structural remodelling and collapse analysis, see...

  9. Fundamental Factors Impacting the Stability of Phosphonate-Derivatized Ruthenium Polypyridyl Sensitizers Adsorbed on Metal Oxide Surfaces.

    Science.gov (United States)

    Raber, McKenzie; Brady, Matthew David; Troian-Gautier, Ludovic; Dickenson, John; Marquard, Seth L; Hyde, Jacob; Lopez, Santiago; Meyer, Gerald J; Meyer, Thomas J; Harrison, Daniel P

    2018-06-08

    A series of 18 ruthenium(II) polypyridyl complexes were synthesized and evaluated under electrochemically oxidative conditions, which generate the Ru(III) oxidation state and mimic the harsh conditions experienced during the kinetically-limited regime that can occur in dye-sensitized solar cells (DSSCs) and dye-sensitized photoelectrosynthesis cells (DSPECs), to further develop fundamental insights into the factors governing molecular sensitizer surface stability in aqueous 0.1 M HClO4. Both desorption and oxidatively induced ligand substitution were observed on planar fluorine doped tin oxide, FTO, electrodes, with a dependence on the E1/2 Ru(III/II) redox potential dictating the comparative ratios of the processes. Complexes such as RuP4OMe (E1/2 = 0.91 V vs Ag/AgCl) displayed virtually only desorption, while complexes such as RuPbpz (E1/2 > 1.62 V vs Ag/AgCl) displayed only chemical decomposition. Comparing isomers of 4,4'- and 5,5'-disubstituted-2,2'-bipyridine ancillary polypyridyl ligands, a dramatic increase in the rate of desorption of the Ru(III) complexes was observed for the 5,5'-ligands. Nanoscopic indium doped tin oxide thin films, nanoITO, were also sensitized and analyzed with cyclic voltammetry, UV-Vis absorption spectroscopy, and XPS, allowing for further distinction of desorption versus ligand substitution processes. Desorption loss to bulk solution associated with the planar surface of FTO is essentially non-existent on nanoITO, where both desorption and ligand substitution are shut down with RuP4OMe. These results revealed that minimizing time spent in the oxidized form, incorporating electron donating groups, maximizing hydrophobicity, and minimizing molecular bulk near the adsorbed ligand are critical to optimizing the performance of ruthenium(II) polypyridyl complexes in dye-sensitized solar cell devices.

  10. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    Science.gov (United States)

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
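    The key property cited above, that adjoint sensitivities cost at most one extra simulation regardless of the number of parameters, is easiest to see on a linear-algebra analogue rather than full FDTD; everything below is a generic sketch, not the authors' formulation:

```python
import numpy as np

# For A(p) x = b and response R = c^T x, one adjoint solve A^T lam = c gives
# dR/dp_i = -lam^T (dA/dp_i) x for every parameter: two solves total,
# versus one perturbed solve per parameter for finite differences.

rng = np.random.default_rng(0)
n = 6
A = np.eye(n) * 4 + rng.random((n, n)) * 0.1   # well-conditioned toy system
b = rng.random(n)
c = rng.random(n)

x = np.linalg.solve(A, b)        # "forward simulation"
lam = np.linalg.solve(A.T, c)    # the single extra "adjoint simulation"

# Sensitivity of R to each diagonal entry A[i, i]: adjoint vs finite difference.
adjoint = np.array([-lam[i] * x[i] for i in range(n)])
h = 1e-6
fd = []
for i in range(n):
    Ah = A.copy()
    Ah[i, i] += h
    fd.append((c @ np.linalg.solve(Ah, b) - c @ x) / h)
print(np.allclose(adjoint, fd, atol=1e-4))  # True
```

    In the FDTD setting of the paper the same structure holds with time-domain field solutions in place of x and lam, and with perturbations of the dispersive discontinuity parameters in place of dA/dp.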

  11. Applications of ion scattering in surface analysis

    International Nuclear Information System (INIS)

    Armour, D.G.

    1981-01-01

    The study of ion scattering from surfaces has made an increasingly important contribution both to the development of highly surface specific analysis techniques and to the understanding of the atomic collision processes associated with ion bombardment of solid surfaces. From an analysis point of view, by appropriate choice of parameters such as ion energy and species, scattering geometry and target temperature, it is possible to study not only the composition of the surface layer but also the detailed atomic arrangement. The ion scattering technique is thus particularly useful for the study of surface compositional and structural changes caused by adsorption, thermal annealing or ion bombardment treatments of simple or composite materials. Ion bombardment induced desorption, damage or atomic mixing can also be effectively studied using scattering techniques. By reviewing the application of the technique to a variety of these technologically important surface investigations, it is possible to illustrate the way in which ion scattering has developed as the understanding of the underlying physics has improved. (author)

  12. Application of Monte Carlo filtering method in regional sensitivity analysis of AASHTOWare Pavement ME design

    Directory of Open Access Journals (Sweden)

    Zhong Wu

    2017-04-01

    Full Text Available Since AASHTO released the Mechanistic-Empirical Pavement Design Guide (MEPDG) for public review in 2004, many highway research agencies have performed sensitivity analyses using the prototype MEPDG design software. The information provided by the sensitivity analysis is essential for design engineers to better understand the MEPDG design models and to identify important input parameters for pavement design. In the literature, different studies have been carried out based on either local or global sensitivity analysis methods, and sensitivity indices have been proposed for ranking the importance of the input parameters. In this paper, a regional sensitivity analysis method, Monte Carlo filtering (MCF), is presented. The MCF method retains many advantages of global sensitivity analysis while focusing on the regional sensitivity of the MEPDG model near the design criteria rather than over the entire problem domain. It is shown that the information obtained from the MCF method is more helpful and accurate in guiding design engineers in pavement design practices. To demonstrate the proposed regional sensitivity method, a typical three-layer flexible pavement structure was analyzed at input level 3. A detailed procedure to generate Monte Carlo runs using the AASHTOWare Pavement ME Design software is provided. The results in the example show that the sensitivity ranking of the input parameters in this study matches reasonably well with that of a previous study under a global sensitivity analysis. Based on the analysis results, the strengths, practical issues, and applications of the MCF method are further discussed.
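    The filtering step at the core of MCF can be sketched as follows: sample the inputs, split the runs into "behavioral" and "non-behavioral" against a design criterion, and score each input by a two-sample Kolmogorov-Smirnov statistic between the two groups. This is a toy illustration, not the Pavement ME model; the distress model, parameter names, and threshold below are invented for the sketch.

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    sa, sb = sorted(a), sorted(b)
    d = 0.0
    for v in sa + sb:
        fa = sum(1 for x in sa if x <= v) / len(sa)
        fb = sum(1 for x in sb if x <= v) / len(sb)
        d = max(d, abs(fa - fb))
    return d

def mc_filtering(model, bounds, criterion, n=1000, seed=1):
    """Split Monte Carlo runs into behavioral/non-behavioral and score each input."""
    rng = random.Random(seed)
    samples = [{k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
               for _ in range(n)]
    behavioral = [s for s in samples if criterion(model(s))]
    rest = [s for s in samples if not criterion(model(s))]
    return {k: ks_statistic([s[k] for s in behavioral], [s[k] for s in rest])
            for k in bounds}

# Invented "distress" model: layer thickness dominates the pass/fail criterion.
model = lambda s: 100.0 - 8.0 * s["thickness"] + 1.0 * s["binder"]
bounds = {"thickness": (4.0, 12.0), "binder": (4.0, 6.0)}
d = mc_filtering(model, bounds, criterion=lambda y: y <= 40.0)  # design passes
```

    A large KS distance (here, for thickness) flags an input whose distribution differs sharply between passing and failing runs, i.e. one that matters near the design criterion.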

  13. Sensitivity analysis practices: Strategies for model-based inference

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)

    2006-10-15

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we searched Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments that have taken place in this discipline, of the good practices that have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review anything other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.

  14. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we searched Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments that have taken place in this discipline, of the good practices that have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review anything other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
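    The central point, that OAT can entirely miss variance driven by interactions unless the model is linear (or at least additive), can be reproduced with a purely interactive toy model Y = X1·X2. OAT steps around the nominal point (0, 0) report zero effect for both inputs, while a brute-force variance decomposition shows that all of Var(Y) = 1/9 sits in the interaction. The model and sample sizes below are illustrative:

```python
import random

def oat_effect(f, nominal, i, delta=0.5):
    """One-at-a-time effect: perturb only input i around the nominal point."""
    lo, hi = list(nominal), list(nominal)
    lo[i] -= delta
    hi[i] += delta
    return abs(f(hi) - f(lo))

def first_order_sobol(f, dim, n_outer=200, n_inner=200, seed=2):
    """Brute-force double-loop Monte Carlo estimate of S_i for X_i ~ U(-1, 1)."""
    rng = random.Random(seed)
    ys = [f([rng.uniform(-1, 1) for _ in range(dim)]) for _ in range(20000)]
    mean = sum(ys) / len(ys)
    var_y = sum((y - mean) ** 2 for y in ys) / len(ys)
    indices = []
    for i in range(dim):
        cond_means = []
        for _ in range(n_outer):
            xi = rng.uniform(-1, 1)
            inner = []
            for _ in range(n_inner):
                x = [rng.uniform(-1, 1) for _ in range(dim)]
                x[i] = xi                      # condition on X_i = xi
                inner.append(f(x))
            cond_means.append(sum(inner) / n_inner)
        m = sum(cond_means) / n_outer
        var_cond = sum((c - m) ** 2 for c in cond_means) / n_outer
        indices.append(var_cond / var_y)       # S_i = Var(E[Y|X_i]) / Var(Y)
    return indices

f = lambda x: x[0] * x[1]                      # purely interactive model
oat = [oat_effect(f, [0.0, 0.0], i) for i in range(2)]   # both exactly 0
s1 = first_order_sobol(f, 2)                   # both near 0
```

    Both first-order Sobol' indices come out near zero even though Var(Y) ≈ 1/9, so the missing variance is pure interaction, which is precisely what an OAT design cannot see.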

  15. Simple Sensitivity Analysis for Orion GNC

    Science.gov (United States)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
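    One way to read "estimating success probability" as a sensitivity measure is to compare empirical success rates conditioned on each dispersed input. The sketch below is hedged: the dispersions, the requirement, and all names are invented, and this is not the CFT implementation, only the idea.

```python
import random

def success_prob_sensitivity(inputs, successes):
    """|P(success | x below median) - P(success | x above median)| for one input."""
    med = sorted(inputs)[len(inputs) // 2]
    lo = [s for x, s in zip(inputs, successes) if x < med]
    hi = [s for x, s in zip(inputs, successes) if x >= med]
    return abs(sum(lo) / len(lo) - sum(hi) / len(hi))

rng = random.Random(0)
n = 5000
wind = [rng.uniform(0, 20) for _ in range(n)]        # hypothetical dispersion
mass = [rng.uniform(9000, 11000) for _ in range(n)]  # hypothetical dispersion
# Hypothetical requirement: touchdown miss grows with wind; mass plays no role.
ok = [w + rng.gauss(0, 2) < 12.0 for w in wind]

s_wind = success_prob_sensitivity(wind, ok)
s_mass = success_prob_sensitivity(mass, ok)
```

    The large conditional gap for the wind-like variable flags it as a driving factor for the pass/fail requirement, while the mass-like variable shows essentially no effect on the success probability.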

  16. Global sensitivity analysis using polynomial chaos expansions

    International Nuclear Information System (INIS)

    Sudret, Bruno

    2008-01-01

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices
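    The post-processing step that makes PCE attractive can be shown directly: for an expansion y = Σ a_α Ψ_α(x) in an orthonormal basis, the variance is the sum of squared non-constant coefficients, and each Sobol' index is a sub-sum of those squares. A minimal sketch with a hypothetical two-input coefficient set (the coefficient values are invented for illustration):

```python
def sobol_from_pce(coeffs):
    """First-order and total Sobol' indices from orthonormal-PCE coefficients.

    coeffs: dict mapping multi-index tuples (one polynomial degree per input)
    to coefficients a_alpha; the all-zero multi-index is the mean term.
    """
    dim = len(next(iter(coeffs)))
    # Total variance: sum of squared coefficients of all non-constant terms.
    var_total = sum(a * a for idx, a in coeffs.items() if any(idx))
    first, total = [], []
    for i in range(dim):
        # First-order: terms involving variable i and no other variable.
        only_i = sum(a * a for idx, a in coeffs.items()
                     if idx[i] > 0 and all(d == 0 for j, d in enumerate(idx) if j != i))
        # Total: every term in which variable i appears at all.
        any_i = sum(a * a for idx, a in coeffs.items() if idx[i] > 0)
        first.append(only_i / var_total)
        total.append(any_i / var_total)
    return first, total

# Hypothetical 2-input expansion: mean + two linear terms + one interaction term.
coeffs = {(0, 0): 1.0, (1, 0): 2.0, (0, 1): 1.0, (1, 1): 1.0}
S1, ST = sobol_from_pce(coeffs)   # Var(Y) = 4 + 1 + 1 = 6
```

    Here Var(Y) = 6, the first-order indices are 4/6 and 1/6, and the total indices 5/6 and 2/6; no further model runs are needed once the coefficients are known, which is the cost advantage the abstract describes.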

  17. Global sensitivity analysis using polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Sudret, Bruno [Electricite de France, R and D Division, Site des Renardieres, F 77818 Moret-sur-Loing Cedex (France)], E-mail: bruno.sudret@edf.fr

    2008-07-15

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.

  18. Sensitization trajectories in childhood revealed by using a cluster analysis

    DEFF Research Database (Denmark)

    Schoos, Ann-Marie M.; Chawes, Bo L.; Melen, Erik

    2017-01-01

    BACKGROUND: Assessment of sensitization at a single time point during childhood provides limited clinical information. We hypothesized that sensitization develops as specific patterns with respect to age at debut, development over time, and involved allergens and that such patterns might be more biologically and clinically relevant. OBJECTIVE: We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. METHODS: We investigated 398 children from the at-risk Copenhagen Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent patterns of sensitization.

  19. SURFACE TEXTURE ANALYSIS FOR FUNCTIONALITY CONTROL

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Andreasen, Jan Lasson; Tosello, Guido

    This document is used in connection with three exercises of 3 hours duration as a part of the course VISION ONLINE – One week course on Precision & Nanometrology. The exercises concern surface texture analysis for functionality control, in connection with three different case stories. This document contains a short description of each case story, 3-D roughness parameters analysis and relation with the product’s functionality.

  20. Subnanomolar Sensitivity of Filter Paper-Based SERS Sensor for Pesticide Detection by Hydrophobicity Change of Paper Surface.

    Science.gov (United States)

    Lee, Minwoo; Oh, Kyudeok; Choi, Han-Kyu; Lee, Sung Gun; Youn, Hye Jung; Lee, Hak Lae; Jeong, Dae Hong

    2018-01-26

    As a cost-effective approach for detecting trace amounts of pesticides, filter paper-based SERS sensors have been the subject of intensive research. One of the hurdles to overcome is the difficulty of retaining nanoparticles on the surface of the paper because of the hydrophilic nature of the cellulose fibers in paper. This reduces the sensitivity and reproducibility of paper-based SERS sensors due to the low density of nanoparticles and short retention time of analytes on the paper surface. In this study, filter paper was treated with alkyl ketene dimer (AKD) to modify its property from hydrophilic to hydrophobic. AKD treatment increased the contact angle of the aqueous silver nanoparticle (AgNP) dispersion, which consequently increased the density of AgNPs. The retention time of the analyte was also increased by preventing its rapid absorption into the filter paper. The SERS signal was strongly enhanced by the increased number of SERS hot spots owing to the increased density of AgNPs on a small contact area of the filter surface. The reproducibility and sensitivity of the SERS signal were optimized by controlling the distribution of AgNPs on the surface of the filter paper by adjusting the concentration of the AgNP solution. Using this SERS sensor with a hydrophobicity-modified filter paper, the spot-to-spot variation of the SERS intensity of 25 spots of 4-aminothiophenol was 6.19%, and the limits of detection of thiram and ferbam as test pesticides were measured to be 0.46 nM and 0.49 nM, respectively. These proof-of-concept results indicate that this paper-based SERS sensor can serve for highly sensitive pesticide detection with low cost and easy fabrication.

  1. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis to evaluate how sensitive the WAVEWATCH III model is to each of them, to determine how many of these parameters should be considered further, and to rank their significance. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan, the goal being to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs, based on field observations at the two buoys.

  2. Global and Local Sensitivity Analysis Methods for a Physical System

    Science.gov (United States)

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.
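    The contrast between the two families can be made concrete with a one-line model, f(x) = 0.2·x1 + x2³ on [-1, 1]²: the local derivative at the nominal point (0, 0) singles out x1, while a global variance measure over the whole range singles out x2. The toy model and sample sizes are invented, and the range-sweep variance equals the main-effect variance here only because the model is additive:

```python
import random

def local_sensitivity(f, x0, i, h=1e-5):
    """Central finite-difference derivative of f at the nominal point x0."""
    xp, xm = list(x0), list(x0)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

def range_variance(f, x0, i, n=20000, seed=3):
    """Variance of f as input i sweeps U(-1, 1) with the others held at x0.
    For an additive model like the one below this equals the main-effect variance."""
    rng = random.Random(seed)
    ys = [f([rng.uniform(-1, 1) if j == i else x0[j] for j in range(len(x0))])
          for _ in range(n)]
    m = sum(ys) / n
    return sum((y - m) ** 2 for y in ys) / n

f = lambda x: 0.2 * x[0] + x[1] ** 3   # toy model, invented for illustration
x0 = [0.0, 0.0]
local = [abs(local_sensitivity(f, x0, i)) for i in range(2)]  # x1 looks dominant
glob = [range_variance(f, x0, i) for i in range(2)]           # x2 dominates globally
```

    Locally the derivatives are ≈ [0.2, 0], but globally the variance contributions are ≈ [0.013, 0.143]: the two analyses rank the inputs in opposite order, which is why the choice between them matters for a black-box system.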

  3. Sensitivity Analysis to Control the Far-Wake Unsteadiness Behind Turbines

    Directory of Open Access Journals (Sweden)

    Esteban Ferrer

    2017-10-01

    Full Text Available We explore the stability of wakes arising from 2D flow actuators based on linear momentum actuator disc theory. We use stability and sensitivity analysis (using adjoints) to show that the wake stability is controlled by the Reynolds number and the thrust force (or flow resistance) applied through the turbine. First, we report that decreasing the thrust force has a stabilising effect comparable to a decrease in Reynolds number (based on the turbine diameter). Second, a discrete sensitivity analysis identifies two regions for suitable placement of flow control forcing, one close to the turbines and one far downstream. Third, we show that adding a localised control force, in the regions identified by the sensitivity analysis, stabilises the wake. Particularly, locating the control forcing close to the turbines results in an enhanced stabilisation such that the wake remains steady for significantly higher Reynolds numbers or turbine thrusts. The analysis of the controlled flow fields confirms that modifying the velocity gradient close to the turbine is more efficient for stabilising the wake than controlling the wake far downstream. The analysis is performed for the first flow bifurcation (at low Reynolds numbers), which serves as a foundation of the stabilisation technique, but the control strategy is tested at higher Reynolds numbers in the final section of the paper, showing enhanced stability for a turbulent flow case.

  4. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis.

    Science.gov (United States)

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-03-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable, and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchy Process (AHP) and Ordered Weighted Averaging (OWA), implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparison of the obtained landslide susceptibility maps of both MCDA techniques with known landslides shows that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
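    The first phase, deriving criteria weights, is typically done in AHP as the principal eigenvector of a pairwise comparison matrix; the Monte Carlo step then perturbs those weights. A self-contained sketch follows: the three criteria and comparison values are hypothetical, and the perturbation scheme is a simple uniform relative error, not necessarily the scheme used in the paper.

```python
import random

def ahp_weights(pairwise, iters=100):
    """Principal eigenvector of a pairwise comparison matrix via power iteration."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]   # renormalise each iteration
    return w

def perturb_weights(w, rel_err, rng):
    """One Monte Carlo draw: perturb the weights and renormalise to sum to 1."""
    p = [max(1e-9, x * (1 + rng.uniform(-rel_err, rel_err))) for x in w]
    s = sum(p)
    return [x / s for x in p]

# Hypothetical Saaty-scale judgements for slope, lithology, land cover.
A = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w = ahp_weights(A)
rng = random.Random(4)
samples = [perturb_weights(w, 0.1, rng) for _ in range(1000)]
```

    Each perturbed weight vector would then be pushed through the weighted overlay to produce one realisation of the susceptibility map; the spread across realisations is the mapped uncertainty.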

  5. Flow cytometric analysis of cell-surface and intracellular antigens in leukemia diagnosis.

    Science.gov (United States)

    Knapp, W; Strobl, H; Majdic, O

    1994-12-15

    New technology allows highly sensitive flow cytometric detection and quantitative analysis of intracellular antigens in normal and malignant hemopoietic cells. With this technology, the earliest stages of myeloid and lymphoid differentiation can easily and reliably be identified using antibodies directed against (pro-)myeloperoxidase/MPO, CD22 and CD3 antigens, respectively. Particularly for the analysis of undifferentiated acute myeloblastic leukemia (AML) cells, the immunological demonstration of intracellular MPO or its enzymatically inactive proforms is highly relevant, since other myeloid marker molecules such as CD33, CD13, or CDw65 are either not restricted to the granulomonocytic lineage or appear later in differentiation. By combining MPO staining with staining for lactoferrin (LF), undifferentiated cells can be distinguished from the granulomonocytic maturation compartment in bone marrow, since LF is selectively expressed from the myelocyte stage of differentiation onward. The list of informative intracellular antigens to be used in leukemia cell analysis will certainly expand in the near future. One candidate, intracellular CD68, has already been tested by us, and results are presented. Also dealt within this article are surface marker molecules not (as yet) widely used in leukemia cell analysis but with the potential to provide important additional information. Among them are the surface structures CD15, CD15s, CDw65, CD79a (MB-1), CD79b (B29), CD87 (uPA-R), and CD117 (c-kit).

  6. A study of charge transfer kinetics in dye-sensitized surface conductivity solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, Dennis

    2011-05-15

    The efficiency of the quasi-solid-state dye-sensitized solar cell developed by Junghaenel and Tributsch, the so-called Nano Surface Conductivity Solar Cell (NSCSC), was improved from 2% to 3.5% by introducing a compact TiO{sub 2} underlayer, modifying the surface of the mesoporous TiO{sub 2} electrode, optimizing the deposition process of the electrolyte film, and replacing the platinum counter electrode by a carbon layer. Space-resolved photocurrent images revealed the importance of a homogeneous distribution of the electrolyte film. An uneven dispersion led to localized areas of high and low photocurrents, where the latter were attributed to an insufficient concentration of the redox couple. Impedance spectroscopy was performed on cells containing different concentrations of the redox couple. By modeling the spectra using an equivalent circuit with a transmission line of resistive and capacitive elements, the characteristic parameters of electron transport in the TiO{sub 2}, such as diffusion length and electron lifetime, were obtained. The measurements indicated that the transport of the positive charge to the counter electrode is the main process limiting the efficiency of the cells. Excess charge carrier decay in functioning devices was analyzed by contactless transient photoconductance measurements in the microwave frequency range (TRMC). The lifetime of the photogenerated charge carriers was observed to decrease with increasing applied potential, reaching its maximum close to the open-circuit potential of the cell, where the photocurrent density was minimal, i.e. the potential-dependent decay observed was limited by the injection of electrons into the front contact. The functioning of this NSCSC indicated that the transport of the positive charge occurs by solid-state diffusion at the surface of the TiO{sub 2} particles. TRMC measurements on subset devices in the form of sensitized TiO{sub 2} layers revealed charge carrier kinetics strongly dependent on the

  7. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Full Text Available Estimates of poverty and inequality are often based on the application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows one to derive variance estimates of the results of sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to confidence intervals which are wide.

  8. Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities

    Directory of Open Access Journals (Sweden)

    Thi Thanh Huyen Nguyen

    2015-11-01

    Full Text Available The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher education institutes with PhD training programs) in Vietnam by applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications obtained through replacement as well as aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency scores of these universities to the sets of variables. The findings also show the impact of the variables on their efficiency and its “sustainability”.

  9. Code development of total sensitivity and uncertainty analysis for reactor physics calculations

    International Nuclear Information System (INIS)

    Wan, C.; Cao, L.; Wu, H.; Zu, T.; Shen, W.

    2015-01-01

    Sensitivity and uncertainty analyses are essential parts of risk and policy analysis for reactor systems. In this study, total sensitivity and corresponding uncertainty analyses for the responses of neutronics calculations have been implemented in a newly developed S&U analysis code named UNICORN. The UNICORN code can account for the implicit effects of multigroup cross sections on the responses. The UNICORN code has been applied to a typical pin-cell case in this paper, and its correctness has been demonstrated by comparing its results with those of the TSUNAMI-1D code. (author)

  10. Code development of total sensitivity and uncertainty analysis for reactor physics calculations

    Energy Technology Data Exchange (ETDEWEB)

    Wan, C.; Cao, L.; Wu, H.; Zu, T., E-mail: chenghuiwan@stu.xjtu.edu.cn, E-mail: caolz@mail.xjtu.edu.cn, E-mail: hongchun@mail.xjtu.edu.cn, E-mail: tiejun@mail.xjtu.edu.cn [Xi' an Jiaotong Univ., School of Nuclear Science and Technology, Xi' an (China); Shen, W., E-mail: Wei.Shen@cnsc-ccsn.gc.ca [Xi' an Jiaotong Univ., School of Nuclear Science and Technology, Xi' an (China); Canadian Nuclear Safety Commission, Ottawa, ON (Canada)

    2015-07-01

    Sensitivity and uncertainty analyses are essential parts of risk and policy analysis for reactor systems. In this study, total sensitivity and corresponding uncertainty analyses for the responses of neutronics calculations have been implemented in a newly developed S&U analysis code named UNICORN. The UNICORN code can account for the implicit effects of multigroup cross sections on the responses. The UNICORN code has been applied to a typical pin-cell case in this paper, and its correctness has been demonstrated by comparing its results with those of the TSUNAMI-1D code. (author)

  11. Representation of the Saharan atmospheric boundary layer in the Weather and Research Forecast (WRF) model: A sensitivity analysis.

    Science.gov (United States)

    Todd, Martin; Cavazos, Carolina; Wang, Yi

    2013-04-01

    The Saharan atmospheric boundary layer (SABL) during summer is one of the deepest on Earth, and is crucial in controlling the vertical redistribution and long-range transport of dust in the Sahara. The SABL is typically made up of an actively growing convective layer driven by high sensible heating at the surface, with a deep, near-neutrally stratified Saharan residual layer (SRL) above it, which is mostly well mixed in humidity and temperature and reaches a height of ˜5-6 km. These two layers are usually separated by a weak (≤1 K) temperature inversion. Model representation of the SABL structure and evolution is important for accurate weather/climate and aerosol prediction. In this work, we evaluate the performance of the Weather Research and Forecasting (WRF) model in representing key multi-scale processes in the SABL during summer 2011, including depiction of the diurnal cycle. For this purpose, a sensitivity analysis is performed to examine the performance of seven PBL schemes (YSU, MYJ, QNSE, MYNN, ACM, Boulac and MRF) and two land-surface model (Noah and RUC) schemes. In addition, the sensitivity to the choice of lateral boundary conditions (ERA-Interim and NCEP) and land use classification maps (USGS and MODIS-based) is tested. Model outputs were confronted with upper-air and surface observations from the Fennec super-site at Bordj Moktar and an automatic weather station (AWS) in Southern Algeria. Vertical profiles of wind speed, potential temperature and water vapour mixing ratio were examined to diagnose differences in PBL heights and model efficacy to reproduce the diurnal cycle of the SABL. We find that the structure of the model SABL is most sensitive to the choice of land surface model and lateral boundary conditions and relatively insensitive to the PBL scheme. Overall the model represents well the diurnal cycle in the structure of the SABL. Consistent model biases include (i) a moist (1-2 g kg-1) and slightly cool (~1 K) bias in the daytime convective boundary layer (ii

  12. Optimization of dye extraction from Cordyline fruticosa via response surface methodology to produce a natural sensitizer for dye-sensitized solar cells

    Directory of Open Access Journals (Sweden)

    Mahmoud A.M. Al-Alwani

    Full Text Available In the present work, response surface methodology (RSM) was applied to optimize the process parameters of chlorophyll extraction from Cordyline fruticosa leaves. The absorbance of the extract obtained from the extraction process under different conditions was estimated using the D-optimal design in RSM. Three process parameters, the nature of the organic solvent characterised by its boiling point (ethanol, methanol, and acetonitrile), pH (4–8), and extraction temperature (50–90 °C), were optimized for chlorophyll extraction. The effects of these parameters on the absorbance or concentration of the extract were evaluated using ANOVA results of quadratic polynomial regression. The results showed high R2 and adjusted R2 correlation coefficients of 0.9963 and 0.9921, respectively. Moreover, the analysis of the final quadratic model based on the design experiments indicated optimal extraction conditions of pH 7.99, extraction temperature 78.33 °C, and a solvent boiling point of 78 °C. The predicted absorbance was 1.006, which is in good agreement with the experimentally obtained result of 1.04 at 665 nm wavelength. The pigment obtained under the optimal conditions was further evaluated as a sensitizer for dye-sensitized solar cells. A maximum solar conversion efficiency (η) of 0.5% was achieved for the C. fruticosa leaf extract obtained under the optimum extraction conditions. Furthermore, the exposure of the leaf pigment to 100 mW/cm2 simulated sunlight yielded a short-circuit photocurrent density (Isc) of 1.3 mA, an open-circuit voltage (Voc) of 616 mV, and a fill factor (ff) of 60.16%. Keywords: Optimization, Cordyline fruticosa, Chlorophyll, Process variables, D-optimal design, Solar cells

  13. Sensitivity analysis of hybrid power systems using Power Pinch Analysis considering Feed-in Tariff

    International Nuclear Information System (INIS)

    Mohammad Rozali, Nor Erniza; Wan Alwi, Sharifah Rafidah; Manan, Zainuddin Abdul; Klemeš, Jiří Jaromír

    2016-01-01

    Feed-in Tariff (FiT) has been one of the most effective policies in accelerating the development of renewable energy (RE) projects. The amount of RE electricity in the FiT purchase agreement is an important decision that has to be made by the RE project developers. They have to consider various crucial factors associated with RE system operation as well as its stochastic nature. The presented work aims to assess the sensitivity and profitability of a hybrid power system (HPS) in cases of RE system failure or shutdown. The amount of RE electricity for the FiT purchase agreement in various scenarios was determined using a novel tool called On-Grid Problem Table based on the Power Pinch Analysis (PoPA). A sensitivity table has also been introduced to assist planners to evaluate the effects of the RE system's failure on the profitability of the HPS. This table offers insights on the variance of the RE electricity. The sensitivity analysis of various possible scenarios shows that the RE projects can still provide financial benefits via the FiT, despite the losses incurred from the penalty levied. - Highlights: • A Power Pinch Analysis (PoPA) tool to assess the economics of an HPS with FiT. • The new On-Grid Problem Table for targeting the available RE electricity for FiT sale. • A sensitivity table showing the effect of RE electricity changes on the HPS profitability.

  14. B1-sensitivity analysis of quantitative magnetization transfer imaging.

    Science.gov (United States)

    Boudreau, Mathieu; Stikov, Nikola; Pike, G Bruce

    2018-01-01

    To evaluate the sensitivity of quantitative magnetization transfer (qMT) fitted parameters to B1 inaccuracies, focusing on the difference between two categories of T1 mapping techniques: B1-independent and B1-dependent. The B1-sensitivity of qMT was investigated and compared using two T1 measurement methods: inversion recovery (IR) (B1-independent) and variable flip angle (VFA) (B1-dependent). The study was separated into four stages: 1) numerical simulations, 2) sensitivity analysis of the Z-spectra, 3) healthy subjects at 3T, and 4) comparison using three different B1 imaging techniques. For typical B1 variations in the brain at 3T (±30%), the simulations resulted in errors of the pool-size ratio (F) ranging from -3% to 7% for VFA, and -40% to > 100% for IR, agreeing with the Z-spectra sensitivity analysis. In healthy subjects, pooled whole-brain Pearson correlation coefficients for F (comparing measured double-angle and nominal flip-angle B1 maps) were ρ = 0.97/0.81 for VFA/IR. This work describes the B1-sensitivity characteristics of qMT, demonstrating that they depend substantially on the B1-dependency of the T1 mapping method. In particular, the pool-size ratio is more robust against B1 inaccuracies if VFA T1 mapping is used, so much so that B1 mapping could be omitted without substantially biasing F. Magn Reson Med 79:276-285, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
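    The bias mechanism studied here can be illustrated with the standard spoiled gradient-echo (SPGR/DESPOT1) signal model commonly used for VFA T1 mapping. A minimal sketch; the sequence parameters below are illustrative assumptions, not values from the study:

```python
import math

def spgr_signal(alpha_deg, T1, TR, M0=1.0):
    # standard spoiled gradient-echo (SPGR) steady-state signal
    a = math.radians(alpha_deg)
    E1 = math.exp(-TR / T1)
    return M0 * math.sin(a) * (1.0 - E1) / (1.0 - E1 * math.cos(a))

def vfa_t1(signals, alphas_deg, TR):
    # linearized two-point VFA (DESPOT1) fit: S/sin(a) = E1*(S/tan(a)) + M0*(1-E1)
    x = [s / math.tan(math.radians(a)) for s, a in zip(signals, alphas_deg)]
    y = [s / math.sin(math.radians(a)) for s, a in zip(signals, alphas_deg)]
    slope = (y[1] - y[0]) / (x[1] - x[0])
    return -TR / math.log(slope)

TR, T1_true = 15.0, 900.0      # ms, illustrative brain-like values
nominal = [3.0, 20.0]          # nominal flip angles in degrees

# perfect transmit field: the fit recovers the true T1
t1_ideal = vfa_t1([spgr_signal(a, T1_true, TR) for a in nominal], nominal, TR)

# +10% B1 error: actual flip angles are 1.1x nominal, but the fit assumes nominal
t1_biased = vfa_t1([spgr_signal(1.1 * a, T1_true, TR) for a in nominal], nominal, TR)
print(t1_ideal, t1_biased)     # t1_biased overestimates, roughly by a factor B1**2
```

This is why an IR-referenced (B1-independent) T1 and a VFA (B1-dependent) T1 propagate transmit-field errors into the qMT fit differently.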

  15. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation is given.
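    For the right-skewed, nonlinear outputs described above, rank-based correlation is one of the standard numerical sensitivity measures such a comparison would cover, because it is robust to monotone but strongly nonlinear responses. A stdlib sketch on a toy model (the model and its parameters are invented for illustration, not taken from the report):

```python
import math
import random

def ranks(values):
    # rank positions 0..n-1 (assumes no ties, true for continuous random draws)
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank, idx in enumerate(order):
        r[idx] = rank
    return r

def spearman(x, y):
    # Spearman rank correlation; both rank vectors have equal variance (no ties)
    rx, ry = ranks(x), ranks(y)
    m = (len(x) - 1) / 2.0
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)
    return cov / var

random.seed(1)
n = 2000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
# toy model with a strongly right-skewed output, dominated by x1
out = [math.exp(4.0 * a) * (1.0 + 0.1 * b) for a, b in zip(x1, x2)]

rho1 = spearman(x1, out)   # close to 1: dominant parameter
rho2 = spearman(x2, out)   # near 0: minor parameter
print(round(rho1, 2), round(rho2, 2))
```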

  16. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation is given.

  17. Analysis of the traveltime sensitivity kernels for an acoustic transversely isotropic medium with a vertical axis of symmetry

    KAUST Repository

    Djebbi, Ramzi

    2016-02-05

    In anisotropic media, several parameters govern the propagation of the compressional waves. To correctly invert surface recorded seismic data in anisotropic media, a multi-parameter inversion is required. However, a tradeoff between parameters exists because several models can explain the same dataset. To understand these tradeoffs, diffraction/reflection and transmission-type sensitivity-kernel analyses are carried out. Such analyses can help us to choose the appropriate parameterization for inversion. In tomography, the sensitivity kernels represent the effect of a parameter along the wave path between a source and a receiver. At a given illumination angle, similarities between sensitivity kernels highlight the tradeoff between the parameters. To discuss the parameterization choice in the context of finite-frequency tomography, we compute the sensitivity kernels of the instantaneous traveltimes derived from the seismic data traces. We consider the transmission case with no encounter of an interface between a source and a receiver; with surface seismic data, this corresponds to a diving wave path. We also consider the diffraction/reflection case when the wave path is formed by two parts: one from the source to a sub-surface point and the other from the sub-surface point to the receiver. We illustrate the different parameter sensitivities for an acoustic transversely isotropic medium with a vertical axis of symmetry. The sensitivity kernels depend on the parameterization choice. By comparing different parameterizations, we explain why the parameterization with the normal moveout velocity, the anelliptic parameter η, and the δ parameter is attractive when we invert diving and reflected events recorded in an active surface seismic experiment. © 2016 European Association of Geoscientists & Engineers.

  18. Analysis of platelet eluate for the elucidation of sensitization to HLA in kidney transplant candidate

    Directory of Open Access Journals (Sweden)

    Hugo Mendonça Mundim

    2015-07-01

    Full Text Available While a 42-year-old male patient was being prepared for deceased-donor renal transplantation, anti-HLA-A2 antibodies were detected in the serum by enzyme-linked immunosorbent assay (ELISA method. The patient denied any transfusion history and previous transplant. Crossmatch by complement dependent cytotoxicity (CDC and CDC with anti-human globulin (CDC-AHG proved negative with a four-cell panel with positive typing for HLA-A2. Adsorption of antibodies with platelets and analysis of eluate were suggested to elucidate discrepancies in results by ELISA and by CDC-AHG. ELISA showed that adsorbed serum with platelets did not reveal antibodies for HLA-A2 specificity and suggested that they were removed by their specific binding with HLA-A2 antigens on the platelet surface. Eluate analysis by ELISA showed antibodies for HLA-A2 specificity. No antibodies for HLA-A2 specificity in the non-adsorbed serum were detected by CDC-AHG method. Revision of patient’s data showed that a previous transfusion had occurred, which may have been the source of HLA sensitization. The suggested method may be a contribution towards the evaluation of sensitivity between CDC-AHG and ELISA methods for characterizing antibodies in the patient’s serum.

  19. Meta-analysis of the relative sensitivity of semi-natural vegetation species to ozone

    International Nuclear Information System (INIS)

    Hayes, F.; Jones, M.L.M.; Mills, G.; Ashmore, M.

    2007-01-01

    This study identified 83 species from existing publications suitable for inclusion in a database of sensitivity of species to ozone (OZOVEG database). An index, the relative sensitivity to ozone, was calculated for each species based on changes in biomass in order to test for species traits associated with ozone sensitivity. Meta-analysis of the ozone sensitivity data showed a wide inter-specific range in response to ozone. Some relationships in comparison to plant physiological and ecological characteristics were identified. Plants of the therophyte lifeform were particularly sensitive to ozone. Species with higher mature leaf N concentration were more sensitive to ozone than those with lower leaf N concentration. Some relationships between relative sensitivity to ozone and Ellenberg habitat requirements were also identified. In contrast, no relationships between relative sensitivity to ozone and mature leaf P concentration, Grime's CSR strategy, leaf longevity, flowering season, stomatal density and maximum altitude were found. The relative sensitivity of species and relationships with plant characteristics identified in this study could be used to predict sensitivity to ozone of untested species and communities. - Meta-analysis of the relative sensitivity of semi-natural vegetation species to ozone showed some relationships with physiological and ecological characteristics

  20. Combined use of atomic force microscopy, X-ray photoelectron spectroscopy, and secondary ion mass spectrometry for cell surface analysis.

    Science.gov (United States)

    Dague, Etienne; Delcorte, Arnaud; Latgé, Jean-Paul; Dufrêne, Yves F

    2008-04-01

    Understanding the surface properties of microbial cells is a major challenge of current microbiological research and a key to efficiently exploit them in biotechnology. Here, we used three advanced surface analysis techniques with different sensitivity, probing depth, and lateral resolution, that is, in situ atomic force microscopy, X-ray photoelectron spectroscopy, and secondary ion mass spectrometry, to gain insight into the surface properties of the conidia of the human fungal pathogen Aspergillus fumigatus. We show that the native ultrastructure, surface protein and polysaccharide concentrations, and amino acid composition of three mutants affected in hydrophobin production are markedly different from those of the wild-type, thereby providing novel insight into the cell wall architecture of A. fumigatus. The results demonstrate the power of using multiple complementary techniques for probing microbial cell surfaces.

  1. Code development for eigenvalue total sensitivity analysis and total uncertainty analysis

    International Nuclear Information System (INIS)

    Wan, Chenghui; Cao, Liangzhi; Wu, Hongchun; Zu, Tiejun; Shen, Wei

    2015-01-01

    Highlights: • We develop a new code for total sensitivity and uncertainty analysis. • The implicit effects of cross sections can be considered. • The results of our code agree well with TSUNAMI-1D. • Detailed analysis of the origins of implicit effects is performed. - Abstract: The uncertainties of multigroup cross sections notably impact the eigenvalue of the neutron-transport equation. We report on a total sensitivity analysis and total uncertainty analysis code named UNICORN that has been developed by applying the direct numerical perturbation method and the statistical sampling method. In order to consider the contributions of various basic cross sections and the implicit effects, which are indirect results of multigroup cross sections through the resonance self-shielding calculation, an improved multigroup cross-section perturbation model is developed. The DRAGON 4.0 code, with application of the WIMSD-4 format library, is used by UNICORN to carry out the resonance self-shielding and neutron-transport calculations. In addition, the bootstrap technique has been applied to the statistical sampling method in UNICORN to obtain more stable and reliable uncertainty results. The UNICORN code has been verified against TSUNAMI-1D by analyzing the case of a TMI-1 pin cell. The numerical results show that the total uncertainty of the eigenvalue caused by cross sections can reach about 0.72%. Therefore the contributions of the basic cross sections and their implicit effects are not negligible
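    The direct numerical perturbation method mentioned above can be sketched as a central-difference relative sensitivity coefficient. The toy one-group k-effective expression below is an illustrative assumption, not the UNICORN/DRAGON model:

```python
def keff(p):
    # toy one-group estimate: k = nu*Sigma_f / (Sigma_a + leakage); illustrative only
    return p["nu_sigma_f"] / (p["sigma_a"] + p["leakage"])

def rel_sensitivity(model, params, name, eps=1e-5):
    # direct numerical perturbation: S = (x/k)(dk/dx) via central differences
    up, dn = dict(params), dict(params)
    up[name] *= (1.0 + eps)
    dn[name] *= (1.0 - eps)
    return (model(up) - model(dn)) / (2.0 * eps * model(params))

p = {"nu_sigma_f": 0.025, "sigma_a": 0.020, "leakage": 0.005}
s_fission = rel_sensitivity(keff, p, "nu_sigma_f")   # 1.0: k scales linearly
s_absorption = rel_sensitivity(keff, p, "sigma_a")   # -sigma_a/(sigma_a+leakage) = -0.8
print(round(s_fission, 4), round(s_absorption, 4))
```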

  2. Derivation of the reduced reaction mechanisms of ozone depletion events in the Arctic spring by using concentration sensitivity analysis and principal component analysis

    Directory of Open Access Journals (Sweden)

    L. Cao

    2016-12-01

    Full Text Available The ozone depletion events (ODEs) in the springtime Arctic have been investigated since the 1980s. It is found that the depletion of ozone is highly associated with an auto-catalytic reaction cycle, which mostly involves bromine-containing compounds. Moreover, bromide stored in various substrates in the Arctic, such as the underlying surface covered by ice and snow, can also be activated by absorbed HOBr. Subsequently, this leads to an explosive increase of the bromine amount in the troposphere, which is called the “bromine explosion mechanism”. In the present study, a reaction scheme representing the chemistry of ozone depletion and halogen release is processed with two different mechanism reduction approaches, namely concentration sensitivity analysis and principal component analysis. In the concentration sensitivity analysis, the dependence of the mixing ratios of ozone and principal bromine species on the rate of each reaction in the ODE mechanism is identified. Furthermore, the most influential reactions in different time periods of ODEs are also revealed. By removing 11 reactions with maximum absolute sensitivities lower than 10 %, a reduced reaction mechanism of ODEs is derived. The onsets of each time period of ODEs in simulations using the original and the reduced reaction mechanisms are identical, while the maximum deviation of the mixing ratio of principal bromine species between the mechanisms is found to be less than 1 %. By performing the principal component analysis on an array of the sensitivity matrices, the dependence of a particular species concentration on a combination of the reaction rates in the mechanism is revealed. Redundant reactions are indicated by principal components corresponding to small eigenvalues and by insignificant elements in principal components with large eigenvalues. Through this investigation, aside from the 11 reactions identified as
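    The principal-component step can be sketched in miniature: eigenvalues of F = S^T S for a mock two-reaction sensitivity matrix, where a near-zero eigenvalue flags a redundant reaction. The matrix values are invented for illustration (the second column is nearly a scaled copy of the first):

```python
import math

# mock normalized sensitivity matrix: rows = sampled times/species,
# columns = two reactions; reaction 2 is almost redundant with reaction 1
S = [[1.00, 0.50],
     [0.80, 0.41],
     [0.60, 0.29],
     [0.40, 0.21]]

# principal components of F = S^T S (2x2 symmetric, closed-form eigenvalues)
a = sum(row[0] * row[0] for row in S)
b = sum(row[0] * row[1] for row in S)
c = sum(row[1] * row[1] for row in S)
tr, det = a + c, a * c - b * b
disc = math.sqrt(tr * tr - 4.0 * det)
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0

share = lam2 / (lam1 + lam2)   # tiny: one principal direction explains nearly everything
print(f"{share:.2e}")
```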

  3. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM

  4. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Jaeseok; Kim, Kyung Doo [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM.

  5. Monte Carlo sensitivity analysis of an Eulerian large-scale air pollution model

    International Nuclear Information System (INIS)

    Dimov, I.; Georgieva, R.; Ostromsky, Tz.

    2012-01-01

    Variance-based approaches for global sensitivity analysis have been applied and analyzed to study the sensitivity of air pollutant concentrations to variations of the rates of chemical reactions. The Unified Danish Eulerian Model has been used as a mathematical model simulating remote transport of air pollutants. Various Monte Carlo algorithms for numerical integration have been applied to compute Sobol's global sensitivity indices. A newly developed Monte Carlo algorithm based on Sobol's quasi-random points, MCA-MSS, has been applied for numerical integration. It has been compared with some existing approaches, namely Sobol's ΛΠτ sequences, an adaptive Monte Carlo algorithm, the plain Monte Carlo algorithm, as well as the eFAST and Sobol sensitivity approaches, both implemented in the SIMLAB software. The analysis and numerical results show advantages of MCA-MSS for relatively small sensitivity indices in terms of accuracy and efficiency. Practical guidelines on the estimation of Sobol's global sensitivity indices in the presence of computational difficulties have been provided. - Highlights: ► Variance-based global sensitivity analysis is performed for the air pollution model UNI-DEM. ► The main effect of input parameters dominates over higher-order interactions. ► Ozone concentrations are influenced mostly by variability of three chemical reaction rates. ► The newly developed MCA-MSS for multidimensional integration is compared with other approaches. ► More precise approaches like MCA-MSS should be applied when the needed accuracy has not been achieved.
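    A first-order Sobol' index can be estimated with a pick-and-freeze Monte Carlo scheme of the kind compared in this record. A minimal sketch on an additive test function with known indices 0.8 and 0.2 (an illustrative stand-in, not UNI-DEM; plain pseudo-random sampling rather than quasi-random sequences):

```python
import random

def model(x):
    # additive test function: Var = 1/12 + 0.25/12, first-order indices 0.8 and 0.2
    return x[0] + 0.5 * x[1]

random.seed(42)
N, d = 20000, 2
A = [[random.random() for _ in range(d)] for _ in range(N)]
B = [[random.random() for _ in range(d)] for _ in range(N)]
fA = [model(r) for r in A]
fB = [model(r) for r in B]
mean = sum(fA) / N
var = sum(v * v for v in fA) / N - mean * mean

indices = []
for i in range(d):
    # rows of A with column i replaced from B (Saltelli-style estimator)
    fABi = [model(A[j][:i] + [B[j][i]] + A[j][i + 1:]) for j in range(N)]
    indices.append(sum(fB[j] * (fABi[j] - fA[j]) for j in range(N)) / (N * var))

print([round(s, 2) for s in indices])   # near the analytic values 0.8 and 0.2
```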

  6. Direct Surface and Droplet Microsampling for Electrospray Ionization Mass Spectrometry Analysis with an Integrated Dual-Probe Microfluidic Chip

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Cong-Min [Institute of Microanalytical; Zhu, Ying [Institute of Microanalytical; Jin, Di-Qiong [Institute of Microanalytical; Kelly, Ryan T. [Environmental; Fang, Qun [Institute of Microanalytical

    2017-08-15

    Ambient mass spectrometry (MS) has revolutionized the way MS analysis is performed and broadened its application in various fields. This paper describes the use of microfluidic techniques to simplify the setup and improve the functions of ambient MS by integrating the sampling probe, electrospray emitter probe, and online mixer on a single glass microchip. Two types of sampling probes, a parallel-channel probe and a U-shaped channel probe, were designed for dry-spot and liquid-phase droplet samples, respectively. We demonstrated that the microfabrication techniques not only enhanced the capability of ambient MS methods in the analysis of dry-spot samples on various surfaces, but also enabled new applications in the analysis of nanoliter-scale chemical reactions in an array of droplets. The versatility of the microchip-based ambient MS method was demonstrated in multiple applications, including evaluation of residual pesticide on fruit surfaces, sensitive analysis of low-ionizable analytes using postsampling derivatization, and high-throughput screening of Ugi-type multicomponent reactions.

  7. Nordic reference study on uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Hirschberg, S.; Jacobsson, P.; Pulkkinen, U.; Porn, K.

    1989-01-01

    This paper provides a review of the first phase of the Nordic reference study on uncertainty and sensitivity analysis. The main objective of this study is to use experiences from previous Nordic Benchmark Exercises and reference studies concerning critical modeling issues, such as common cause failures and human interactions, and to demonstrate the impact of associated uncertainties on the uncertainty of the investigated accident sequence. This has been done independently by three working groups which used different approaches to modeling and to uncertainty analysis. The estimated uncertainty interval for the analyzed accident sequence is large. The discrepancies between the groups are also substantial but can be explained. Sensitivity analyses which have been carried out concern, e.g., the use of different CCF quantification models, alternative handling of CCF data, time windows for operator actions, time dependences in phased-mission operation, the impact of state-of-knowledge dependences, and the ranking of dominating uncertainty contributors. Specific findings with respect to these issues are summarized in the paper

  8. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    OpenAIRE

    Zuidwijk, Rob

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an optimal solution are investigated, and the optimal solution is studied on a so-called critical range of the initial data, in which certain properties such as the optimal basis in linear programming are ...
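    The idea of perturbing an entry of the constraint coefficient matrix and tracking the optimal value can be sketched for a two-variable LP via brute-force vertex enumeration. The LP instance is invented for illustration; a real solver would be used at scale:

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    # brute force for max c.x  s.t.  A x <= b, x >= 0, with exactly 2 variables:
    # enumerate all constraint-pair intersections, keep feasible ones, take the best
    cons = [row[:] for row in A] + [[-1.0, 0.0], [0.0, -1.0]]
    rhs = list(b) + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(cons)), 2):
        (a1, b1), (a2, b2) = cons[i], cons[j]
        det = a1 * b2 - b1 * a2
        if abs(det) < 1e-12:
            continue  # parallel pair, no vertex
        x = (rhs[i] * b2 - b1 * rhs[j]) / det
        y = (a1 * rhs[j] - rhs[i] * a2) / det
        if all(cons[k][0] * x + cons[k][1] * y <= rhs[k] + 1e-9 for k in range(len(cons))):
            val = c[0] * x + c[1] * y
            best = val if best is None else max(best, val)
    return best

c = [3.0, 2.0]
A = [[1.0, 1.0], [1.0, 3.0]]
b = [4.0, 6.0]
base = solve_lp_2d(c, A, b)                              # 12.0, at vertex (4, 0)

# perturb one coefficient of the constraint matrix: the same basis stays optimal
# over a critical range, and the optimal value moves smoothly to 12/1.1
perturbed = solve_lp_2d(c, [[1.1, 1.0], [1.0, 3.0]], b)
print(base, round(perturbed, 3))
```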

  9. Global sensitivity analysis of Alkali-Surfactant-Polymer enhanced oil recovery processes

    Energy Technology Data Exchange (ETDEWEB)

    Carrero, Enrique; Queipo, Nestor V.; Pintos, Salvador; Zerpa, Luis E. [Applied Computing Institute, Faculty of Engineering, University of Zulia, Zulia (Venezuela)

    2007-08-15

    After conventional waterflooding processes the residual oil in the reservoir remains as a discontinuous phase in the form of oil drops trapped by capillary forces and is likely to be around 70% of the original oil in place (OOIP). The EOR method so-called Alkaline-Surfactant-Polymer (ASP) flooding has been proved to be effective in reducing the oil residual saturation in laboratory experiments and field projects through reduction of interfacial tension and mobility ratio between oil and water phases. A critical step for the optimal design and control of ASP recovery processes is to find the relative contributions of design variables such as, slug size and chemical concentrations, in the variability of given performance measures (e.g., net present value, cumulative oil recovery), considering a heterogeneous and multiphase petroleum reservoir (sensitivity analysis). Previously reported works using reservoir numerical simulation have been limited to local sensitivity analyses because a global sensitivity analysis may require hundreds or even thousands of computationally expensive evaluations (field scale numerical simulations). To overcome this issue, a surrogate-based approach is suggested. Surrogate-based analysis/optimization makes reference to the idea of constructing an alternative fast model (surrogate) from numerical simulation data and using it for analysis/optimization purposes. This paper presents an efficient global sensitivity approach based on Sobol's method and multiple surrogates (i.e., Polynomial Regression, Kriging, Radial Base Functions and a Weighed Adaptive Model), with the multiple surrogates used to address the uncertainty in the analysis derived from plausible alternative surrogate-modeling schemes. The proposed approach was evaluated in the context of the global sensitivity analysis of a field scale Alkali-Surfactant-Polymer flooding process. The design variables and the performance measure in the ASP process were selected as slug size
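    The surrogate idea of replacing the expensive simulator with a cheap fitted model can be sketched in one dimension with a quadratic polynomial surrogate. The "expensive model" below is a made-up stand-in for a reservoir simulation, not the paper's model:

```python
def expensive_model(x):
    # stand-in for a field-scale simulation: an NPV-like response to slug size
    # (a made-up quadratic; real responses come from the reservoir simulator)
    return 5.0 + 2.0 * x - 0.5 * x * x

# three "simulation runs" at design points
xs = [0.0, 2.0, 4.0]
ys = [expensive_model(x) for x in xs]

# quadratic surrogate y = a*x^2 + b*x + c via Newton divided differences
f01 = (ys[1] - ys[0]) / (xs[1] - xs[0])
f12 = (ys[2] - ys[1]) / (xs[2] - xs[1])
a = (f12 - f01) / (xs[2] - xs[0])
b = f01 - a * (xs[0] + xs[1])
c = ys[0] - b * xs[0] - a * xs[0] * xs[0]

surrogate = lambda x: a * x * x + b * x + c
x_opt = -b / (2.0 * a)   # surrogate-predicted optimum "slug size"
print(surrogate(3.0), expensive_model(3.0), x_opt)
```

Once fitted, the surrogate can be evaluated thousands of times for Sobol-style sensitivity estimates at negligible cost, which is the point of the approach.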

  10. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    Science.gov (United States)

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
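    The bootstrap step can be sketched with invented trial data; the success counts and costs below are illustrative assumptions, not figures from the study:

```python
import random

random.seed(7)
# hypothetical per-patient eradication outcomes (1 = success) for two regimens
regimen_a = [1] * 80 + [0] * 20   # 80% observed eradication
regimen_b = [1] * 70 + [0] * 30   # 70% observed eradication
cost_a, cost_b = 220.0, 150.0     # assumed cost per patient

# point-estimate incremental cost-effectiveness ratio (cost per extra cure)
diff_point = 0.80 - 0.70
icer = (cost_a - cost_b) / diff_point

# bootstrap the difference in eradication rates: resample patients with replacement,
# avoiding any assumption of a theoretical distribution for the rates
n_boot = 2000
diffs = []
for _ in range(n_boot):
    ra = sum(random.choice(regimen_a) for _ in regimen_a) / len(regimen_a)
    rb = sum(random.choice(regimen_b) for _ in regimen_b) / len(regimen_b)
    diffs.append(ra - rb)
diffs.sort()
lo, hi = diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot) - 1]
print(icer, lo, hi)   # ICER and a ~95% percentile interval around the 0.10 estimate
```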

  11. Surface structure analysis by means of Rutherford scattering: methods to study surface relaxation

    International Nuclear Information System (INIS)

    Turkenburg, W.C.; Soszka, W.; Saris, F.W.; Kersten, H.H.; Colenbrander, B.G.

    1976-01-01

    The use of Rutherford backscattering for structural analysis of single crystal surfaces is reviewed, and a new method is introduced. With this method, which makes use of the channeling and blocking phenomena of medium-energy light ions, surface atoms can be located with a precision of 0.02 Å. This is demonstrated in a measurement of surface relaxation for the Cu(110) surface. (Auth.)

  12. Sensitivity of LUCC on the Surface Temperature of Tibetan Plateau

    Science.gov (United States)

    Qi, W.; Deng, X.; Wu, F.

    2016-12-01

    The Tibetan Plateau has an important effect on ecological security in China, and even in Asia, which has made the region a hot spot of recent research. Under the joint influence of global change and human activities, ecosystem destabilization and increasing pressure on resources and the environment have emerged on the Tibetan Plateau, but the spatial sensitivity of surface temperature to land use and land cover change (LUCC) has not been quantitatively analyzed. This study analyzed the main types of LUCC (urbanization, grassland degradation, and deforestation) on the Tibetan Plateau under the Representative Concentration Pathways (RCPs) of the Intergovernmental Panel on Climate Change (IPCC). LUCC in recent decades was first quantitatively analyzed to establish the baseline, against a background of significantly increased temperatures, reduced precipitation, and increased evaporation. The study then focused on the future spatio-temporal heterogeneity of temperature and precipitation. Finally, the influence of LUCC on the Tibetan Plateau was simulated with the Weather Research and Forecasting (WRF) model, and the sensitivity of different land use types was spatially analyzed with Singular Value Decomposition (SVD). The results indicate that the large-area alpine grassland plays a more important role in alleviating global warming than other vegetation types do. Changes in landscape structure resulting from urban expansion play a significant role in intensifying regional temperature increases. In addition, the effects of LUCC on monthly average temperature change vary from month to month with obvious spatial heterogeneity.

  13. Improved performance of dye-sensitized solar cells with surface-treated TiO2 as a photoelectrode

    International Nuclear Information System (INIS)

    Park, Su Kyung; Chung, Chinkap; Kim, Dae-Hwan; Kim, Cham; Lee, Sang-Ju; Han, Yoon Soo

    2012-01-01

    We report on the effects of surface-modified TiO2 on the performance of dye-sensitized solar cells (DSSCs). The TiO2 surface was modified with Na2CO3 via a simple dip-coating process and the modified TiO2 was applied to photoelectrodes of DSSCs. By dipping the TiO2 layer into aqueous Na2CO3 solution, the DSSC showed a power conversion efficiency of 9.98%, compared to that (7.75%) of the reference device without surface treatment. The UV–vis absorption spectra, the impedance spectra and the dark-current studies revealed that the improvement in all parameters was attributed to enhanced dye adsorption, a prolonged electron lifetime and a reduced interfacial resistance.

  14. SECTION 6.2 SURFACE TOPOGRAPHY ANALYSIS

    DEFF Research Database (Denmark)

    Seah, M. P.; De Chiffre, Leonardo

    2005-01-01

    Surface physical analysis, i.e. topography characterisation, encompasses measurement, visualisation, and quantification. This is critical for both component form and for surface finish at macro-, micro- and nano-scales. The principal methods of surface topography measurement are stylus profilometry, optical scanning techniques, and scanning probe microscopy (SPM). These methods, based on acquisition of topography data from point by point scans, give quantitative information of heights with respect to position. Based on a different approach, the so-called integral methods produce parameters...

  15. Enhanced Sensitivity of Surface Acoustic Wave-Based Rate Sensors Incorporating Metallic Dot Arrays

    Directory of Open Access Journals (Sweden)

    Wen Wang

    2014-02-01

    Full Text Available A new surface acoustic wave (SAW)-based rate sensor pattern incorporating metallic dot arrays was developed in this paper. Two parallel SAW delay lines propagating in opposite directions, with an operation frequency of 80 MHz, were fabricated on the same X-112°Y LiTaO3 wafer as the feedback elements of two SAW oscillators, and the mixed oscillation frequency was used to characterize the external rotation. To enhance the Coriolis force effect acting on the SAW propagation, a copper (Cu) dot array was deposited along the SAW propagation path of the SAW devices. The approach of partial-wave analysis in layered media was used to analyze the response mechanism of the SAW-based rate sensor, resulting in determination of the optimal design parameters. To improve the frequency stability of the oscillators, single-phase unidirectional transducers (SPUDTs) and a combed transducer were used in the SAW devices to minimize the insertion loss and accomplish single-mode selection, respectively. Excellent long-term (measured in hours) frequency stability of 0.1 ppm/h was obtained. Using a high-precision rate table, the performance of the developed SAW rate sensor was evaluated experimentally; satisfactory detection sensitivity (16.7 Hz∙deg∙s−1) and good linearity were observed.

  16. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.

    Science.gov (United States)

    Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker

    2015-01-01

    The importance of preventing and treating incomplete data in effectiveness studies is now widely emphasized. However, most of the publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption that data are missing at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by data missing not at random (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's versions of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.
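The MI machinery this abstract builds on can be sketched in a few lines: impute the missing values several times, estimate the quantity of interest in each completed data set, and pool with Rubin's rules. The sketch below is a deliberately simplified version on synthetic data (a plain normal imputation model that ignores parameter uncertainty), not the authors' posterior-predictive procedure:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic outcome scores with roughly 20% of values missing at random.
scores = rng.normal(60.0, 10.0, 200)
missing = rng.random(200) < 0.2
observed = scores[~missing]

# Multiple imputation: draw each missing value from the predictive
# distribution of a normal model fitted to the observed data.
m = 20                                    # number of imputed data sets
estimates, within = [], []
for _ in range(m):
    draws = rng.normal(observed.mean(), observed.std(ddof=1), missing.sum())
    completed = np.concatenate([observed, draws])
    estimates.append(completed.mean())
    within.append(completed.var(ddof=1) / completed.size)

# Rubin's rules: pool the m point estimates and their variances.
qbar = np.mean(estimates)                 # pooled mean
b = np.var(estimates, ddof=1)             # between-imputation variance
total_var = np.mean(within) + (1 + 1 / m) * b
print(f"pooled mean {qbar:.1f}, total variance {total_var:.3f}")
```

A sensitivity analysis of the kind described above would repeat this pooling under MNAR scenarios, e.g. shifting the imputed draws by a clinically meaningful offset and checking how far the pooled estimate moves.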

  17. In Situ Mapping of the Molecular Arrangement of Amphiphilic Dye Molecules at the TiO 2 Surface of Dye-Sensitized Solar Cells

    KAUST Repository

    Voïtchovsky, Kislon; Ashari-Astani, Negar; Tavernelli, Ivano; Tétreault, Nicolas; Rothlisberger, Ursula; Stellacci, Francesco; Grätzel, Michael; Harms, Hauke A.

    2015-01-01

    © 2015 American Chemical Society. Amphiphilic sensitizers are central to the function of dye-sensitized solar cells. It is known that the cell's performance depends on the molecular arrangement and the density of the dye on the semiconductor surface

  18. Latex allergy: new insights to explain different sensitization profiles in different risk groups.

    Science.gov (United States)

    Peixinho, C; Tavares-Ratado, P; Tomás, M R; Taborda-Barata, L; Tomaz, C T

    2008-07-01

    Differences in latex allergen sensitization profiles have been described between children subjected to repetitive surgical interventions and health care workers (HCW). 'Major' allergens for patients with spina bifida are Hev b 1, 3 and 7, while for HCW, 'major' allergens are Hev b 2, 5, 6.01 and 13. The reason for these differential sensitization profiles is currently unknown. The aim of this study was to investigate latex allergen profiles on internal and external surfaces of natural rubber latex gloves. Eighty-two samples of commonly used surgical gloves (41 glove brands) were used for analysis. Specific allergen levels of Hev b 1, 3, 5 and 6.02 on both surfaces of the gloves were quantified using an enzyme immunometric assay, a FITkit (FIT Biotech, Tampere, Finland). Differences in allergen levels were observed between internal and external surfaces of all glove types. Concentrations of Hev b 1 and Hev b 3 were significantly higher on external surfaces, while internal surfaces had higher allergen levels of Hev b 5 and Hev b 6.02. Analysis of surgical and examination gloves, powdered and nonpowdered gloves also showed that the content of Hev b 5 and Hev b 6.02 was significantly higher on internal surfaces while that of Hev b 1 and Hev b 3 was higher on external surfaces. Our study showed different allergen profiles on internal and external surfaces of natural rubber latex gloves. These results may suggest a relationship between latex allergen localization and sensitization routes in different risk groups.

  19. Optimizing human activity patterns using global sensitivity analysis.

    Science.gov (United States)

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
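The sample entropy statistic central to this abstract can be computed directly from its definition: count template pairs of length m and length m + 1 that match within a tolerance r, and take the negative log of the ratio. A minimal sketch follows; the parameter choices (m = 2 and a fixed tolerance r = 0.2, where r is more commonly taken as 0.2 times the series' standard deviation) are illustrative, not the paper's settings:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts template pairs of length m
    and A counts pairs of length m + 1 matching within Chebyshev
    tolerance r (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_pairs(length):
        # All overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length)])
        pairs = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to every later template.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            pairs += int(np.sum(dist < r))
        return pairs

    return -np.log(count_pairs(m + 1) / count_pairs(m))

# A periodic series is far more regular (lower SampEn) than white noise.
rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0.0, 20.0 * np.pi, 500))
noisy = rng.standard_normal(500)
print(sample_entropy(regular), sample_entropy(noisy))
```

Tuning an activity's regularity, as described above, then amounts to searching for schedule parameters whose generated series hits a target SampEn value.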

  20. Sensitivity analysis of overlaps of friction elements in cartridge seals

    Directory of Open Access Journals (Sweden)

    Žmindák Milan

    2018-01-01

    Full Text Available Cartridge seals are self-contained units consisting of a shaft sleeve, seals, and a gland plate. Mechanical seals have numerous applications; the most common example is in bearing production for the automobile industry. This paper deals with the sensitivity analysis of the overlaps of friction elements in a cartridge seal and their influence on the sealing friction torque and compressive force. Furthermore, it describes materials used to manufacture seals, approaches usually used to solve hyperelastic material models by FEM, and gives a short introduction to wheel bearings. The practical part presents one approach for measuring friction torque, whose results were used to refine the methodology and assess the precision of FEM calculations realized in ANSYS WORKBENCH. This part also contains the sensitivity analysis of the overlaps of the friction elements.