WorldWideScience

Sample records for preliminary sensitivity analysis

  1. Development of the GOSAT-2 FTS-2 Simulator and Preliminary Sensitivity Analysis for CO2 Retrieval

    Science.gov (United States)

    Kamei, A.; Yoshida, Y.; Dupuy, E.; Hiraki, K.; Yokota, Y.; Oishi, Y.; Murakami, K.; Morino, I.; Matsunaga, T.

    2013-12-01

    The Greenhouse Gases Observing Satellite-2 (GOSAT-2), the successor mission to GOSAT, is planned for launch in FY 2017. The Fourier Transform Spectrometer-2 (FTS-2) onboard GOSAT-2 is the primary sensor and observes infrared light reflected and emitted from the Earth's surface and atmosphere. The FTS-2 obtains high-spectral-resolution spectra in four bands spanning the near-infrared to short-wavelength infrared (SWIR) region and one band in the thermal infrared (TIR) region. The column amounts of carbon dioxide (CO2) and methane (CH4) are retrieved from the radiance spectra obtained with the SWIR bands. Compared with the FTS onboard GOSAT, the FTS-2 includes an additional SWIR band to allow carbon monoxide (CO) measurement. We have been developing a tool, named the GOSAT-2 FTS-2 simulator, which is capable of simulating the spectral radiance data observed by the FTS-2 using the Pstar2 radiative transfer code. The purpose of the simulator is to provide data to be exploited in the sensor specification, the optimization of parameters for Level 1 processing, and the improvement of Level 2 algorithms. The GOSAT-2 FTS-2 simulator, composed of six components: 1) Overall control, 2) Onboard platform, 3) Spectral radiance calculation, 4) Fourier transform, 5) L1B processing, and 6) L1B data output, has been installed on the GOSAT Research Computation Facility (GOSAT RCF), a large-scale, high-performance, and energy-efficient computer. We present the progress in the development of the GOSAT-2 FTS-2 simulator and a preliminary sensitivity analysis of the Level 1 processing for CO2 retrieval, addressing engineering parameters, aerosols and clouds, and related factors, based on data obtained by simulating the FTS-2 SWIR observation with the simulator.

  2. A Sensitivity Study for an Evaluation of Input Parameters Effect on a Preliminary Probabilistic Tsunami Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rhee, Hyun-Me; Kim, Min Kyu; Choi, In-Kil [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Sheen, Dong-Hoon [Chonnam National University, Gwangju (Korea, Republic of)

    2014-10-15

    Tsunami hazard analysis has been based on seismic hazard analysis, which is performed using either a deterministic or a probabilistic method. To account for uncertainties, the probabilistic method has been regarded as the more attractive approach, in which the various parameters and their weights are treated with a logic-tree approach. Because many parameters enter the hazard analysis, their uncertainties should be characterized by sensitivity analysis. To apply probabilistic tsunami hazard analysis, a preliminary study for the Ulchin NPP site had been performed using the information on fault sources published by the Atomic Energy Society of Japan (AESJ). The tsunami propagation was simulated with TSUNAMI 1.0, developed by the Japan Nuclear Energy Safety Organization (JNES), and the wave parameters were estimated from the simulation results. In this study, a sensitivity analysis was performed for the fault sources selected in the previous studies. To analyze the effect of the parameters, the sensitivity analysis was carried out for the E3 fault source published by AESJ. The effects of the recurrence interval, the potential maximum magnitude, and the beta value were identified from the sensitivity analysis results: the level of annual exceedance probability is affected by the recurrence interval, while the wave heights are influenced by the potential maximum magnitude and the beta value. In the future, a sensitivity analysis will be performed for all the fault sources in the western part of Japan published by AESJ.

  3. Sensitivity analysis

    Science.gov (United States)

    //medlineplus.gov/ency/article/003741.htm: Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...

  4. Analysis of clonogenic human brain tumour cells: preliminary results of tumour sensitivity testing with BCNU

    Energy Technology Data Exchange (ETDEWEB)

    Rosenblum, M L; Dougherty, D A; Deen, D F; Hoshino, T; Wilson, C B [California Univ., San Francisco (USA). Dept. of Neurology

    1980-04-01

    Biopsies from 6 patients with glioblastoma multiforme were disaggregated and single cells were treated in vitro with various concentrations of 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) and plated for cell survival. One patient's cells were sensitive to BCNU in vitro; after a single dose of BCNU her brain scan reverted to normal and she was clinically well. Five tumours demonstrated resistance in vitro. Three of these tumours progressed during the first course of chemotherapy with a nitrosourea and the patients died at 2½, 4 and 8½ months after operation. Two patients who showed dramatic responses to radiation therapy were considered unchanged after the first course of nitrosourea therapy (although one demonstrated tumour enlargement on brain scan). The correlation of in vitro testing of tumour cell sensitivity with actual patient response is encouraging enough to warrant further work to determine whether such tests should weigh in decisions on patient therapy.

  5. Quantitative wound healing studies using a portable, low cost, handheld near-infrared optical scanner: preliminary sensitivity and specificity analysis

    Science.gov (United States)

    Lei, Jiali; Rodriguez, Suset; Jayachandran, Maanasa; Solis, Elizabeth; Gonzalez, Stephanie; Perez-Clavijo, Francesco; Wigley, Stephen; Godavarty, Anuradha

    2016-03-01

    Lower extremity ulcers are devastating complications that remain under-recognized. To date, clinicians rely on visual inspection of the wound site over the standard 4-week healing period, monitoring surface granulation. A novel ultra-portable near-infrared optical scanner (NIROS), developed at the Optical Imaging Laboratory, can perform non-contact 2D area imaging of the wound site. Preliminary studies showed that non-healing wounds had a greater absorption contrast with respect to the normal site, unlike healing wounds. In the present work, non-contact near-infrared (NIR) imaging studies were carried out on 22 lower extremity wounds at two podiatric clinics, and the sensitivity and specificity of the scanner were evaluated. A quantitative optical biometric that differentiates healing from non-healing wounds was developed, based on threshold values obtained from ROC analysis. In addition, optical images of the wound from weekly imaging studies were assessed to determine the ability of the device to predict wound healing consistently on a periodic basis. An objective and quantitative approach to wound-healing assessment could enable early intervention in the treatment of lower extremity ulcers. Lastly, a MATLAB graphical user interface (GUI) that automates image acquisition, processing, and analysis realizes the potential of NIROS to perform non-contact, real-time imaging of lower extremity wounds.
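
    The sensitivity and specificity reported above come from thresholding an optical contrast measure via ROC analysis. As a rough illustration of that step only (not the NIROS processing chain), the following sketch sweeps a threshold over hypothetical contrast values and outcome labels and reports the resulting sensitivity and specificity; the data, the exceedance rule and the Youden-J threshold criterion are all assumptions made for the example.

```python
import numpy as np

# Hypothetical optical contrast values for 22 wounds (higher = more contrast
# between wound and surrounding tissue) and their clinical healing outcome.
contrast = np.array([0.12, 0.35, 0.08, 0.41, 0.29, 0.15, 0.38, 0.22,
                     0.44, 0.10, 0.33, 0.27, 0.09, 0.40, 0.18, 0.31,
                     0.06, 0.37, 0.25, 0.43, 0.14, 0.30])
non_healing = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1,
                        0, 1, 0, 1, 0, 1, 0, 1], dtype=bool)

def sens_spec(threshold):
    """Classify 'non-healing' when the contrast exceeds the threshold."""
    predicted = contrast > threshold
    tp = np.sum(predicted & non_healing)
    tn = np.sum(~predicted & ~non_healing)
    fp = np.sum(predicted & ~non_healing)
    fn = np.sum(~predicted & non_healing)
    return tp / (tp + fn), tn / (tn + fp)

# Crude ROC sweep: keep the threshold maximizing Youden's J = sens + spec - 1.
thresholds = np.linspace(contrast.min(), contrast.max(), 50)
best = max(thresholds, key=lambda t: sum(sens_spec(t)) - 1)
print("threshold %.3f -> sensitivity %.2f, specificity %.2f"
      % (best, *sens_spec(best)))
```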

  6. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    The uncertainty evaluation with the statistical method is performed by repeating transport calculations with directly sampled (perturbed) nuclear data; a reliable uncertainty result is then obtained by analyzing the results of the numerous transport calculations. One known problem in the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, they can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data based on the lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method with the lognormal distribution was proposed to increase the sampling accuracy without negative-sampling errors, and a stochastic cross-section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross-section sampling was performed with both the normal and the lognormal distributions. The uncertainties caused by the covariance of the (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative-sampling problem noted in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.

  7. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2013-01-01

    The uncertainty evaluation with the statistical method is performed by repeating transport calculations with directly sampled (perturbed) nuclear data; a reliable uncertainty result is then obtained by analyzing the results of the numerous transport calculations. One known problem in the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, they can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data based on the lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method with the lognormal distribution was proposed to increase the sampling accuracy without negative-sampling errors, and a stochastic cross-section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross-section sampling was performed with both the normal and the lognormal distributions. The uncertainties caused by the covariance of the (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative-sampling problem noted in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
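
    The negative-sampling issue and the lognormal remedy described in the two records above can be illustrated with a minimal sketch (not the authors' sampling and writing program): a cross section is drawn from a normal distribution with a large relative standard deviation, which occasionally yields unphysical negative values, and from a lognormal distribution matched to the same mean and standard deviation, which cannot. The 2-barn nominal cross section and the 50% relative standard deviation are illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

sigma_nominal = 2.0      # barns, illustrative nominal cross section
rel_std = 0.50           # illustrative 50% relative standard deviation
std = rel_std * sigma_nominal
n_samples = 100_000

# Direct normal sampling: some samples fall below zero when the relative
# standard deviation is large, which is unphysical for a cross section.
normal_samples = rng.normal(sigma_nominal, std, n_samples)
print("negative fraction (normal):", np.mean(normal_samples < 0.0))

# Lognormal sampling matched to the same mean and variance: if X ~ LogNormal(mu, s),
# then E[X] = exp(mu + s^2/2) and Var[X] = (exp(s^2) - 1) * exp(2*mu + s^2).
s2 = np.log(1.0 + (std / sigma_nominal) ** 2)
mu = np.log(sigma_nominal) - 0.5 * s2
lognormal_samples = rng.lognormal(mu, np.sqrt(s2), n_samples)
print("negative fraction (lognormal):", np.mean(lognormal_samples < 0.0))
print("mean/std (lognormal): %.3f / %.3f"
      % (lognormal_samples.mean(), lognormal_samples.std()))
```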

  8. UVISS preliminary visibility analysis

    DEFF Research Database (Denmark)

    Betto, Maurizio

    1998-01-01

    The goal of this work is to obtain a preliminary assessment of the sky visibility for an astronomical telescope located on the express pallet of the International Space Station (ISS), taking into account the major constraints imposed on the instrument by the ISS attitude and structure. Part of the work is also to set up the kernel of a software tool for the visibility analysis that should be easily expandable to more complex structures in future activities. This analysis is part of the UVISS assessment study and is meant to provide elements for the definition and the selection...

  9. Purification, crystallization and preliminary crystallographic analysis of Est25: a ketoprofen-specific hormone-sensitive lipase

    International Nuclear Information System (INIS)

    Kim, SeungBum; Joo, Sangbum; Yoon, Hyun C.; Ryu, Yeonwoo; Kim, Kyeong Kyu; Kim, T. Doohun

    2007-01-01

    Est25, a ketoprofen-specific hormone-sensitive lipase from a metagenomic library, was crystallized and diffraction data were collected to 1.49 Å resolution. Ketoprofen, a nonsteroidal anti-inflammatory drug, inhibits the synthesis of prostaglandin. A novel hydrolase (Est25) with high ketoprofen specificity has previously been identified using a metagenomic library from environmental samples. Recombinant Est25 protein with a histidine tag at the N-terminus was expressed in Escherichia coli and purified in a homogenous form. Est25 was crystallized from 2.4 M sodium malonate pH 7.0 and X-ray diffraction data were collected to 1.49 Å using synchrotron radiation. The crystals belong to the monoclinic space group C2, with unit-cell parameters a = 197.8, b = 95.2, c = 99.4 Å, β = 97.1°

  10. Pickering safeguards: a preliminary analysis

    International Nuclear Information System (INIS)

    Todd, J.L.; Hodgkinson, J.G.

    1977-05-01

    A summary is presented of thoughts relative to a systems approach for implementing international safeguards. Included is a preliminary analysis of the Pickering Generating Station followed by a suggested safeguards system for the facility

  11. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  12. Sensitivity of permanent meadows areas. Preliminary report

    International Nuclear Information System (INIS)

    Pourcelot, L.

    2006-01-01

    The objective of this work, proposed in the framework of the S.E.N.S.I.B. project on permanent meadow areas, is to compare the sensitivity of these surfaces, and of the cheese-making sectors associated with them, and to identify the processes that determine this sensitivity. The work will rely on activity data recently acquired by the I.R.S.N. (compartments: soil, grass, milk and cheese), as well as on new measurements, at three sites: Saint-Laurent-de-Geris (West), Beaune-Le-Froid (Center), and the Massif du Jura (East). Two scales of observation of transfers and sensitivity are retained. First, the sensitivity of meadow areas will be studied at the regional scale, from data acquired in the Massif du Jura (between 300 and 1200 metres of altitude). Second, a study will compare the sensitivity of the three retained areas (West, Center, East) at the national scale. The expertise will focus on the sensitivity factors of the soil-grass transfer and on the sensitivity factors linked to the cheese-making sectors. Data already available for the study zones show strong variability in 137Cs transfer rates at the soil/grass and grass/milk interfaces, suggesting that the nature of the soils and the quantity and quality of the feed ingested by the cattle constitute dominant sensitivity factors. In addition, the size of the milk collection basin, which varies greatly from one study site to another, is expected to influence the contamination of cheeses and, as such, constitutes an important sensitivity factor for the cheese-making sector. (N.C.)

  13. Sensitivity analysis of EQ3

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.

    1990-01-01

    A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs
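
    The quoted run-time factors rest on the standard property of the adjoint technique that one additional (adjoint) calculation yields the sensitivities of a single response to all inputs at once. The toy example below, unrelated to the EQ3 chemistry itself, shows this for a linear model A u = b with response R = cᵀu: a single adjoint solve Aᵀλ = c returns dR/db_i for every input component, and agrees with a direct perturbation run.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                              # number of input parameters b_i
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)           # inputs (stand-in for model input data)
c = rng.standard_normal(n)           # defines the response R = c^T u

u = np.linalg.solve(A, b)            # one forward solve
response = c @ u

# Adjoint technique: a single extra solve gives dR/db_i for ALL i,
# because dR/db = lambda where A^T lambda = c.
lam = np.linalg.solve(A.T, c)

# Direct perturbation for comparison: one additional forward solve PER input.
i = 7
eps = 1e-6
b_pert = b.copy()
b_pert[i] += eps
fd = (c @ np.linalg.solve(A, b_pert) - response) / eps

print("adjoint sensitivity dR/db_7 :", lam[i])
print("finite-difference estimate :", fd)
```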

  14. Preliminary HECTOR analysis by Dragon

    Energy Technology Data Exchange (ETDEWEB)

    Presser, W; Woloch, F

    1972-06-02

    From the different cores measured in HECTOR, only ACH 4/B-B was selected for the Dragon analysis, since it presented the largest amount of uniform fuel loading in the central test region and is therefore nearest to an infinite lattice. Preliminary results are discussed.

  15. WHAT IF (Sensitivity Analysis)

    Directory of Open Access Journals (Sweden)

    Iulian N. BUJOREANU

    2011-01-01

    Sensitivity analysis is such a well-known and deeply analyzed subject that anyone entering the field feels unable to add anything new. Still, there are many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons for which it has to be implemented in most analyses in decision-making processes. Risk analysis, of utmost importance in dealing with resource allocation, is presented at the beginning of the paper as the initial motivation for implementing sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on the use and utility of sensitivity analysis. Finally, a round-up conclusion brings us to the question of whether the future can be generated and analyzed before it unfolds so that, when it happens, it brings less uncertainty.

  16. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
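
    For orientation, the bound described in the abstract can be written in the following form, where RR_EU denotes the maximum relative risk relating the exposure to the unmeasured confounder and RR_UD the maximum relative risk relating the confounder to the outcome (a sketch of the published result, not a re-derivation):

    \[
    \mathrm{BF}_U \;=\; \frac{\mathrm{RR}_{EU}\,\mathrm{RR}_{UD}}{\mathrm{RR}_{EU} + \mathrm{RR}_{UD} - 1},
    \qquad
    \mathrm{RR}_{ED}^{\mathrm{true}} \;\ge\; \frac{\mathrm{RR}_{ED}^{\mathrm{obs}}}{\mathrm{BF}_U},
    \]

    so an unmeasured confounder can fully explain away the observed association only if the bounding factor BF_U is at least as large as the observed risk ratio.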

  17. Concept Overview & Preliminary Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, Mark

    2017-07-12

    'H2@Scale' is an opportunity for wide-scale use of hydrogen as an intermediate that carries energy from various production options to multiple uses. It is based on identifying and developing opportunities for low-cost hydrogen production and investigating opportunities for using that hydrogen across the electricity, industrial, and transportation sectors. One of the key production opportunities is use of low-cost electricity that may be generated under high penetrations of variable renewable generators such as wind and solar photovoltaics. The technical potential demand for hydrogen across the sectors is 60 million metric tons per year. The U.S. has sufficient domestic renewable resources so that each could meet that demand and could readily meet the demand using a portfolio of generation options. This presentation provides an overview of the concept and the technical potential demand and resources. It also motivates analysis and research on H2@Scale.

  18. Interference and Sensitivity Analysis.

    Science.gov (United States)

    VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth

    2014-11-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of the vaccine of one person on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.

  19. Preliminary hazards analysis -- vitrification process

    International Nuclear Information System (INIS)

    Coordes, D.; Ruggieri, M.; Russell, J.; TenBrook, W.; Yimbo, P.

    1994-06-01

    This paper presents a Preliminary Hazards Analysis (PHA) for mixed waste vitrification by joule heating. The purpose of performing a PHA is to establish an initial hazard categorization for a DOE nuclear facility and to identify those processes and structures which may have an impact on or be important to safety. The PHA is typically performed during and provides input to project conceptual design. The PHA is then followed by a Preliminary Safety Analysis Report (PSAR) performed during Title 1 and 2 design. The PSAR then leads to performance of the Final Safety Analysis Report performed during the facility's construction and testing. It should be completed before routine operation of the facility commences. This PHA addresses the first four chapters of the safety analysis process, in accordance with the requirements of DOE Safety Guidelines in SG 830.110. The hazards associated with vitrification processes are evaluated using standard safety analysis methods which include: identification of credible potential hazardous energy sources; identification of preventative features of the facility or system; identification of mitigative features; and analyses of credible hazards. Maximal facility inventories of radioactive and hazardous materials are postulated to evaluate worst case accident consequences. These inventories were based on DOE-STD-1027-92 guidance and the surrogate waste streams defined by Mayberry, et al. Radiological assessments indicate that a facility, depending on the radioactive material inventory, may be an exempt, Category 3, or Category 2 facility. The calculated impacts would result in no significant impact to offsite personnel or the environment. Hazardous materials assessment indicates that a Mixed Waste Vitrification facility will be a Low Hazard facility having minimal impacts to offsite personnel and the environment

  20. Preliminary hazards analysis -- vitrification process

    Energy Technology Data Exchange (ETDEWEB)

    Coordes, D.; Ruggieri, M.; Russell, J.; TenBrook, W.; Yimbo, P. [Science Applications International Corp., Pleasanton, CA (United States)

    1994-06-01

    This paper presents a Preliminary Hazards Analysis (PHA) for mixed waste vitrification by joule heating. The purpose of performing a PHA is to establish an initial hazard categorization for a DOE nuclear facility and to identify those processes and structures which may have an impact on or be important to safety. The PHA is typically performed during and provides input to project conceptual design. The PHA is then followed by a Preliminary Safety Analysis Report (PSAR) performed during Title 1 and 2 design. The PSAR then leads to performance of the Final Safety Analysis Report performed during the facility's construction and testing. It should be completed before routine operation of the facility commences. This PHA addresses the first four chapters of the safety analysis process, in accordance with the requirements of DOE Safety Guidelines in SG 830.110. The hazards associated with vitrification processes are evaluated using standard safety analysis methods which include: identification of credible potential hazardous energy sources; identification of preventative features of the facility or system; identification of mitigative features; and analyses of credible hazards. Maximal facility inventories of radioactive and hazardous materials are postulated to evaluate worst case accident consequences. These inventories were based on DOE-STD-1027-92 guidance and the surrogate waste streams defined by Mayberry, et al. Radiological assessments indicate that a facility, depending on the radioactive material inventory, may be an exempt, Category 3, or Category 2 facility. The calculated impacts would result in no significant impact to offsite personnel or the environment. Hazardous materials assessment indicates that a Mixed Waste Vitrification facility will be a Low Hazard facility having minimal impacts to offsite personnel and the environment.

  1. Beyond sensitivity analysis

    DEFF Research Database (Denmark)

    Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad

    2018-01-01

    of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it on the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel...

  2. Chemical kinetic functional sensitivity analysis: Elementary sensitivities

    International Nuclear Information System (INIS)

    Demiralp, M.; Rabitz, H.

    1981-01-01

    Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the ith species concentration and α_j is the jth system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research.

  3. MOVES regional level sensitivity analysis

    Science.gov (United States)

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  4. EV range sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ostafew, C. [Azure Dynamics Corp., Toronto, ON (Canada)

    2010-07-01

    This presentation included a sensitivity analysis of electric vehicle components on overall efficiency. The presentation provided an overview of drive cycles and discussed the major contributors to range in terms of rolling resistance, aerodynamic drag, motor efficiency, and vehicle mass. Drive cycles presented included the New York City Cycle (NYCC), the urban dynamometer drive cycle, and US06. A summary of the findings was presented for each of the major contributors. Rolling resistance was found to have a balanced effect on each drive cycle, proportional to range. Aerodynamic drag had a large effect on US06 range, while motor efficiency and vehicle mass had a large effect on NYCC range. figs.
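
    As an illustration of the kind of component-level study summarized above (not the Azure Dynamics model or a drive-cycle simulation), the sketch below perturbs rolling resistance, drag area, mass and drivetrain efficiency by 1% in a constant-speed range model; every vehicle parameter is a made-up value chosen only to show how normalized sensitivities are obtained.

```python
import numpy as np

# Illustrative (hypothetical) EV parameters
params = dict(Crr=0.010,      # rolling resistance coefficient
              CdA=0.60,       # drag area, m^2
              mass=1500.0,    # kg
              eta=0.85)       # battery-to-wheel efficiency
battery_kwh = 24.0
rho, g, v = 1.2, 9.81, 60 / 3.6   # air density, gravity, steady 60 km/h

def range_km(Crr, CdA, mass, eta):
    """Range at constant speed: battery energy / energy needed per km."""
    force = Crr * mass * g + 0.5 * rho * CdA * v**2   # N (rolling + aero drag)
    wh_per_km = force * 1000.0 / 3600.0 / eta          # Wh/km drawn from battery
    return battery_kwh * 1000.0 / wh_per_km

base = range_km(**params)
for name in params:
    perturbed = dict(params)
    perturbed[name] *= 1.01                            # +1% in each parameter
    s = (range_km(**perturbed) - base) / base / 0.01   # normalized sensitivity
    print(f"{name:5s}: d(range)/range per d(param)/param = {s:+.2f}")
print(f"baseline range: {base:.0f} km")
```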

  5. Preliminary Context Analysis of Community Informatics Social ...

    African Journals Online (AJOL)

    Preliminary context analysis is always part of the feasibility study phase in the development of information system for Community Development (CD) purposes. In this paper, a context model and a preliminary context analysis are presented for Social Network Web Application (SNWA) for CD in the Niger Delta region of ...

  6. Yucca Mountain transportation routes: Preliminary characterization and risk analysis

    International Nuclear Information System (INIS)

    Souleyrette, R.R. II; Sathisan, S.K.; di Bartolo, R.

    1991-01-01

    This report presents appendices related to the preliminary assessment and risk analysis for high-level radioactive waste transportation routes to the proposed Yucca Mountain Project repository. Information includes data on population density, traffic volume, ecologically sensitive areas, and accident history

  7. Preliminary sensitivity analyses of corrosion models for BWIP [Basalt Waste Isolation Project] container materials

    International Nuclear Information System (INIS)

    Anantatmula, R.P.

    1984-01-01

    A preliminary sensitivity analysis was performed for the corrosion models developed for Basalt Waste Isolation Project container materials. The models describe corrosion behavior of the candidate container materials (low carbon steel and Fe9Cr1Mo), in various environments that are expected in the vicinity of the waste package, by separate equations. The present sensitivity analysis yields an uncertainty in total uniform corrosion on the basis of assumed uncertainties in the parameters comprising the corrosion equations. Based on the sample scenario and the preliminary corrosion models, the uncertainty in total uniform corrosion of low carbon steel and Fe9Cr1Mo for the 1000 yr containment period are 20% and 15%, respectively. For containment periods ≥ 1000 yr, the uncertainty in corrosion during the post-closure aqueous periods controls the uncertainty in total uniform corrosion for both low carbon steel and Fe9Cr1Mo. The key parameters controlling the corrosion behavior of candidate container materials are temperature, radiation, groundwater species, etc. Tests are planned in the Basalt Waste Isolation Project containment materials test program to determine in detail the sensitivity of corrosion to these parameters. We also plan to expand the sensitivity analysis to include sensitivity coefficients and other parameters in future studies. 6 refs., 3 figs., 9 tabs

  8. PRELIMINARY BIOAUTOGRAPHIC ANALYSIS OF ...

    African Journals Online (AJOL)

    PRELIMINARY BIOAUTOGRAPHIC ANALYSIS OF THE SEEDS OF GLYPHAEA BREVIS. (SPRENG) MONACHINO FOR ANTIOXIDANT AND ANTIBACTERIAL PRINCIPLES. Michael Lahai1, Tiwalade Adewale Olugbade2. 1Department of Pharmaceutical Chemistry, Faculty of Pharmaceutical Sciences, College of Medicine ...

  9. Preliminary Analysis of Reinforced Concrete Waffle Walls

    National Research Council Canada - National Science Library

    Shugar, Theodore

    1997-01-01

    A preliminary analytical method based upon modified plate bending theory is offered for structural analysis of a promising new construction method for walls of small buildings and residential housing...

  10. Hydrothermal Liquefaction Treatment Preliminary Hazard Analysis Report

    Energy Technology Data Exchange (ETDEWEB)

    Lowry, Peter P.; Wagner, Katie A.

    2015-08-31

    A preliminary hazard assessment was completed during February 2015 to evaluate the conceptual design of the modular hydrothermal liquefaction treatment system. The hazard assessment was performed in 2 stages. An initial assessment utilizing Hazard Identification and Preliminary Hazards Analysis (PHA) techniques identified areas with significant or unique hazards (process safety-related hazards) that fall outside of the normal operating envelope of PNNL and warranted additional analysis. The subsequent assessment was based on a qualitative What-If analysis. This analysis was augmented, as necessary, by additional quantitative analysis for scenarios involving a release of hazardous material or energy with the potential for affecting the public.

  11. Maternal sensitivity: a concept analysis.

    Science.gov (United States)

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  12. Preliminary comparison with 40 CFR Part 191, Subpart B for the Waste Isolation Pilot Plant, December 1991. Vol. 4: Uncertainty and sensitivity analysis results

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J C [Arizona State University, Department of Mathematics, Tempe, AZ (United States); Garner, J W [Applied Physics Inc., Albuquerque, NM (United States); Rechard, R P; Rudeen, D K [New Mexico Engineering Research Institute, Albuquerque, NM (United States); Swift, P N [Tech Reps Inc., Albuquerque, NM (United States)

    1992-04-15

    The most appropriate conceptual model for performance assessment at the Waste Isolation Pilot Plant (WIPP) is believed to include gas generation due to corrosion and microbial action in the repository and a dual-porosity (matrix and fracture porosity) representation for solute transport in the Culebra Dolomite Member of the Rustler Formation. Under these assumptions, complementary cumulative distribution functions (CCDFs) summarizing radionuclide releases to the accessible environment due to both cuttings removal and groundwater transport fall substantially below the release limits promulgated by the Environmental Protection Agency (EPA). This is the case even when the current estimates of the uncertainty in analysis inputs are incorporated into the performance assessment. The best-estimate performance-assessment results are dominated by cuttings removal. The releases to the accessible environment due to groundwater transport make very small contributions to the total release. The variability in the distribution of CCDFs that must be considered in comparisons with the EPA release limits is dominated by the variable LAMBDA (rate constant in Poisson model for drilling intrusions). The variability in releases to the accessible environment due to individual drilling intrusions is dominated by DBDIAM (drill bit diameter). Most of the imprecisely known variables considered in the 1991 WIPP performance assessment relate to radionuclide releases to the accessible environment due to groundwater transport. For a single borehole (i.e., an E2-type scenario), whether or not a release from the repository to the Culebra even occurs is controlled by the variable SALPERM (Salado permeability), with no releases for small values (i.e., < 5 x 10⁻²¹ m²) of this variable. When SALPERM is small, the repository never fills with brine and so there is no flow up an intruding borehole that can transport radionuclides to the Culebra. Further, releases that do reach the Culebra for

  13. Preliminary comparison with 40 CFR Part 191, Subpart B for the Waste Isolation Pilot Plant, December 1991. Vol. 4: Uncertainty and sensitivity analysis results

    International Nuclear Information System (INIS)

    Helton, J.C.; Garner, J.W.; Rechard, R.P.; Rudeen, D.K.; Swift, P.N.

    1992-04-01

    The most appropriate conceptual model for performance assessment at the Waste Isolation Pilot Plant (WIPP) is believed to include gas generation due to corrosion and microbial action in the repository and a dual-porosity (matrix and fracture porosity) representation for solute transport in the Culebra Dolomite Member of the Rustler Formation. Under these assumptions, complementary cumulative distribution functions (CCDFs) summarizing radionuclide releases to the accessible environment due to both cuttings removal and groundwater transport fall substantially below the release limits promulgated by the Environmental Protection Agency (EPA). This is the case even when the current estimates of the uncertainty in analysis inputs are incorporated into the performance assessment. The best-estimate performance-assessment results are dominated by cuttings removal. The releases to the accessible environment due to groundwater transport make very small contributions to the total release. The variability in the distribution of CCDFs that must be considered in comparisons with the EPA release limits is dominated by the variable LAMBDA (rate constant in Poisson model for drilling intrusions). The variability in releases to the accessible environment due to individual drilling intrusions is dominated by DBDIAM (drill bit diameter). Most of the imprecisely known variables considered in the 1991 WIPP performance assessment relate to radionuclide releases to the accessible environment due to groundwater transport. For a single borehole (i.e., an E2-type scenario), whether or not a release from the repository to the Culebra even occurs is controlled by the variable SALPERM (Salado permeability), with no releases for small values (i.e., < 5 x 10⁻²¹ m²) of this variable. When SALPERM is small, the repository never fills with brine and so there is no flow up an intruding borehole that can transport radionuclides to the Culebra. Further, releases that do reach the Culebra for larger values of
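
    Both records above compare CCDFs of normalized releases against the EPA limits. As a generic sketch of how such a curve is evaluated (not the WIPP performance-assessment codes), the snippet below builds an empirical CCDF from hypothetical Monte Carlo samples of a summed normalized release; the lognormal sample distribution and the seed are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical normalized releases from a Monte Carlo performance assessment
# (EPA-normalized units, so the regulatory points of comparison are 1 and 10).
releases = rng.lognormal(mean=-4.0, sigma=1.5, size=10_000)

def ccdf(samples, r):
    """Empirical complementary CDF: probability that the release exceeds r."""
    return np.mean(samples > r)

for r in (0.001, 0.01, 0.1, 1.0, 10.0):
    print(f"P(normalized release > {r:g}) = {ccdf(releases, r):.4f}")

# The 40 CFR 191 containment requirement is usually summarized as
# P(release > 1) < 0.1 and P(release > 10) < 0.001 for the summed,
# normalized releases; the CCDF above is what gets compared to those points.
print("meets (0.1 @ 1, 0.001 @ 10)?",
      ccdf(releases, 1.0) < 0.1 and ccdf(releases, 10.0) < 0.001)
```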

  14. Global optimization and sensitivity analysis

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1990-01-01

    A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints

  15. Preliminary safety analysis methodology for the SMART

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Kyoo Hwan; Chung, Y. J.; Kim, H. C.; Sim, S. K.; Lee, W. J.; Chung, B. D.; Song, J. H. [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-03-01

    This technical report was prepared for a preliminary safety analysis methodology of the 330MWt SMART (System-integrated Modular Advanced ReacTor) which has been developed by Korea Atomic Energy Research Institute (KAERI) and funded by the Ministry of Science and Technology (MOST) since July 1996. This preliminary safety analysis methodology has been used to identify an envelope for the safety of the SMART conceptual design. As the SMART design evolves, further validated final safety analysis methodology will be developed. Current licensing safety analysis methodology of the Westinghouse and KSNPP PWRs operating and under development in Korea as well as the Russian licensing safety analysis methodology for the integral reactors have been reviewed and compared to develop the preliminary SMART safety analysis methodology. SMART design characteristics and safety systems have been reviewed against licensing practices of the PWRs operating or KNGR (Korean Next Generation Reactor) under construction in Korea. Detailed safety analysis methodology has been developed for the potential SMART limiting events of main steam line break, main feedwater pipe break, loss of reactor coolant flow, CEA withdrawal, primary to secondary pipe break and the small break loss of coolant accident. SMART preliminary safety analysis methodology will be further developed and validated in parallel with the safety analysis codes as the SMART design further evolves. Validated safety analysis methodology will be submitted to MOST as a Topical Report for a review of the SMART licensing safety analysis methodology. Thus, it is recommended for the nuclear regulatory authority to establish regulatory guides and criteria for the integral reactor. 22 refs., 18 figs., 16 tabs. (Author)

  16. Preliminary failure mode and effect analysis

    International Nuclear Information System (INIS)

    Addison, J.V.

    1972-01-01

    A preliminary Failure Mode and Effect Analysis (FMEA) was made on the overall 5 Kwe system. A general discussion of the system and failure effect is given in addition to the tabulated FMEA and a primary block diagram of the system. (U.S.)

  17. Preliminary safety analysis report for the TFTR

    International Nuclear Information System (INIS)

    Lind, K.E.; Levine, J.D.; Howe, H.J.

    A Preliminary Safety Analysis Report has been prepared for the Tokamak Fusion Test Reactor. No accident scenarios have been identified which would result in exposures to on-site personnel or the general public in excess of the guidelines defined for the project by DOE

  18. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high-order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for the sensitivity analysis. The results show that, in the case of the EK-10 fuel (low burn-up), the first-order sensitivity is enough to achieve an accuracy of 1%, while for the MTR-20 fuel (high burn-up) the fifth order was found to provide 3% accuracy. A computer code, SENS, was developed to provide the required calculations.
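
    A toy depletion problem shows why higher Taylor orders matter at high burn-up (this is not the SENS code or the actual EK-10/MTR-20 models): for a single nuclide depleting as N = N0·exp(-σφt), the derivatives with respect to σ are known analytically, so truncated expansions in a cross-section perturbation can be compared directly with the exact response. The cross section, fluences and 30% perturbation below are assumed values.

```python
import math

N0 = 1.0
sigma = 50.0e-24                     # cm^2, illustrative absorption cross section
fluences = {"low burn-up": 5.8e21,   # n/cm^2, illustrative fluence values
            "high burn-up": 2.1e22}

def exact(ds, fluence):
    return N0 * math.exp(-(sigma + ds) * fluence)

def taylor(ds, fluence, order):
    """Truncated Taylor expansion of N(sigma + ds) about sigma.
    Since d^k N / d sigma^k = (-fluence)^k * N, the series is N * sum_k (-fluence*ds)^k / k!."""
    N = N0 * math.exp(-sigma * fluence)
    return N * sum((-fluence * ds) ** k / math.factorial(k) for k in range(order + 1))

ds = 0.30 * sigma                    # a 30% cross-section perturbation
for label, fluence in fluences.items():
    ref = exact(ds, fluence)
    for order in (1, 2, 5):
        err = 100.0 * abs(taylor(ds, fluence, order) - ref) / ref
        print(f"{label:12s}  order {order}: error {err:6.3f} %")
```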

  19. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  20. Sensitivity analysis using probability bounding

    International Nuclear Information System (INIS)

    Ferson, Scott; Troy Tucker, W.

    2006-01-01

    Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
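
    A full p-box implementation is outside the scope of a short note, but the interval-analysis core of PBA and the 'pinching' sensitivity measure mentioned above can be sketched as follows (a simplified stand-in, not the authors' method): each uncertain input is an interval, the output interval is propagated through a monotone model, and the reduction of the output width when one input is pinched to a point value ranks that input's contribution to the overall uncertainty. The dose model and all bounds are invented for illustration.

```python
# Simplified interval version of a PBA-style pinching study.
# Model: dose = (concentration * intake) / body_mass  (monotone in each input)

inputs = {                         # illustrative interval bounds (low, high)
    "concentration": (0.5, 2.0),   # mg/L
    "intake":        (1.0, 3.0),   # L/day
    "body_mass":     (60.0, 90.0), # kg
}

def dose(c, i, m):
    return c * i / m

def output_interval(bounds):
    """For this monotone model, the extremes occur at corner combinations."""
    c_lo, c_hi = bounds["concentration"]
    i_lo, i_hi = bounds["intake"]
    m_lo, m_hi = bounds["body_mass"]
    return dose(c_lo, i_lo, m_hi), dose(c_hi, i_hi, m_lo)

base_lo, base_hi = output_interval(inputs)
base_width = base_hi - base_lo
print(f"baseline dose interval: [{base_lo:.4f}, {base_hi:.4f}] mg/kg/day")

# Pinch each input to its midpoint in turn; the fractional reduction in the
# output interval's width is a crude sensitivity ranking.
for name, (lo, hi) in inputs.items():
    pinched = dict(inputs)
    mid = 0.5 * (lo + hi)
    pinched[name] = (mid, mid)
    p_lo, p_hi = output_interval(pinched)
    reduction = 1.0 - (p_hi - p_lo) / base_width
    print(f"pinching {name:13s} -> width reduced by {reduction:5.1%}")
```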

  1. Licensing Support System: Preliminary data scope analysis

    International Nuclear Information System (INIS)

    1989-01-01

    The purpose of this analysis is to determine the content and scope of the Licensing Support System (LSS) data base. Both user needs and currently available data bases that, at least in part, address those needs have been analyzed. This analysis, together with the Preliminary Needs Analysis (DOE, 1988d) is a first effort under the LSS Design and Implementation Contract toward developing a sound requirements foundation for subsequent design work. These reports are preliminary. Further refinements must be made before requirements can be specified in sufficient detail to provide a basis for suitably specific system specifications. This document provides a baseline for what is known at this time. Additional analyses, currently being conducted, will provide more precise information on the content and scope of the LSS data base. 23 refs., 4 figs., 8 tabs

  2. Sensitivity analysis in remote sensing

    CERN Document Server

    Ustinov, Eugene A

    2015-01-01

    This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves for quantitative models of physical objects the same purpose, as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing SA provides computer-efficient means to compute the jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The jacobians are used to solve corresponding inver...
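
    As a generic illustration of the Jacobian's role in retrieval (finite differences here, not the adjoint or matrix-operator formulations the book emphasizes), the sketch below differentiates a toy forward model numerically and uses the resulting Jacobian in one Gauss-Newton step of an unregularized inverse problem; the forward model and all parameter values are invented.

```python
import numpy as np

def forward(x):
    """Toy forward model: maps 2 geophysical parameters to 4 'observables'."""
    t, q = x                       # e.g. a temperature-like and a humidity-like parameter
    return np.array([np.exp(-0.1 * t) + 0.5 * q,
                     0.3 * t + q**2,
                     np.sin(0.05 * t) * (1.0 + q),
                     2.0 * q + 0.01 * t * q])

def jacobian(x, eps=1e-6):
    """Finite-difference Jacobian K[i, j] = d(observable_i) / d(parameter_j)."""
    f0 = forward(x)
    K = np.zeros((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        K[:, j] = (forward(xp) - f0) / eps
    return K

x_true = np.array([10.0, 0.8])
y_obs = forward(x_true)

# One Gauss-Newton step of the inverse problem from a rough first guess.
x = np.array([8.0, 0.5])
K = jacobian(x)
x = x + np.linalg.lstsq(K, y_obs - forward(x), rcond=None)[0]
print("retrieved parameters after one step:", x)
```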

  3. Preliminary Analysis of Google+'s Privacy

    OpenAIRE

    Mahmood, Shah; Desmedt, Yvo

    2011-01-01

    In this paper we provide a preliminary analysis of Google+ privacy. We identified that Google+ shares photo metadata with users who can access the photograph and discuss its potential impact on privacy. We also identified that Google+ encourages the provision of other names including maiden name, which may help criminals performing identity theft. We show that Facebook lists are a superset of Google+ circles, both functionally and logically, even though Google+ provides a better user interfac...

  4. Sensitivity Analysis of Viscoelastic Structures

    Directory of Open Access Journals (Sweden)

    A.M.G. de Lima

    2006-01-01

    In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constraining damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.

  5. UMTS Common Channel Sensitivity Analysis

    DEFF Research Database (Denmark)

    Pratas, Nuno; Rodrigues, António; Santos, Frederico

    2006-01-01

    and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message to preamble offset, the initial SIR target and the preamble power step while on FACH it is the transmission power offset...

  6. TEMAC, Top Event Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.

    1988-01-01

    1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement
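
    The cut-set arithmetic behind such a code can be sketched in a few lines (a simplified stand-in using the rare-event approximation, not TEMAC's general matrix approach): the top-event probability is approximated by the sum over minimal cut sets of the products of basic-event probabilities, its partial derivatives give Birnbaum-style sensitivities, and Monte Carlo sampling of the basic-event probabilities gives an uncertainty distribution. The cut sets, probabilities and lognormal error factor below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal cut sets of a hypothetical top event, given as tuples of basic-event ids.
cut_sets = [(0, 1), (0, 2), (1, 2, 3), (4,)]

# Point estimates of the basic-event probabilities (illustrative values).
p = np.array([1e-2, 3e-2, 5e-3, 2e-2, 1e-4])

def top_event_prob(p):
    """Rare-event approximation: sum over cut sets of the product of event probabilities."""
    return sum(np.prod(p[list(cs)]) for cs in cut_sets)

print("point estimate of top-event probability:", top_event_prob(p))

# Sensitivity: partial derivative of the top-event probability w.r.t. each basic event
# (a Birnbaum-style importance under the rare-event approximation).
for i in range(p.size):
    dp = sum(np.prod(p[[j for j in cs if j != i]]) for cs in cut_sets if i in cs)
    print(f"d(top)/d(p[{i}]) = {dp:.3e}")

# Uncertainty: propagate lognormal uncertainty (error factor ~3) on each basic event.
samples = np.array([top_event_prob(p * rng.lognormal(0.0, np.log(3.0) / 1.645, p.size))
                    for _ in range(5000)])
print("mean, 5th and 95th percentiles:",
      samples.mean(), *np.percentile(samples, [5, 95]))
```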

  7. Preliminary Hazards Analysis Plasma Hearth Process

    International Nuclear Information System (INIS)

    Aycock, M.; Coordes, D.; Russell, J.; TenBrook, W.; Yimbo, P.

    1993-11-01

    This Preliminary Hazards Analysis (PHA) for the Plasma Hearth Process (PHP) follows the requirements of United States Department of Energy (DOE) Order 5480.23 (DOE, 1992a), DOE Order 5480.21 (DOE, 1991d), DOE Order 5480.22 (DOE, 1992c), DOE Order 5481.1B (DOE, 1986), and the guidance provided in DOE Standards DOE-STD-1027-92 (DOE, 1992b). Consideration is given to the proposed regulations published as 10 CFR 830 (DOE, 1993) and DOE Safety Guide SG 830.110 (DOE, 1992b). The purpose of performing a PHA is to establish an initial hazard categorization for a DOE nuclear facility and to identify those processes and structures which may have an impact on or be important to safety. The PHA is typically performed during and provides input to project conceptual design. The PHA is then followed by a Preliminary Safety Analysis Report (PSAR) performed during Title I and II design. This PSAR then leads to performance of the Final Safety Analysis Report performed during construction, testing, and acceptance and completed before routine operation. Radiological assessments indicate that a PHP facility, depending on the radioactive material inventory, may be an exempt, Category 3, or Category 2 facility. The calculated impacts would result in no significant impact to offsite personnel or the environment. Hazardous material assessments indicate that a PHP facility will be a Low Hazard facility having no significant impacts either onsite or offsite to personnel and the environment

  8. Carbon dioxide capture processes: Simulation, design and sensitivity analysis

    DEFF Research Database (Denmark)

    Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul

    2012-01-01

    Carbon dioxide is the main greenhouse gas and its major source is combustion of fossil fuels for power generation. The objective of this study is to carry out the steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First...... equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to change in the design variables, including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis on the steady state...... performance of the process to the L/G ratio to the absorber, CO2 lean solvent loadings, and stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made.

  9. Repository Subsurface Preliminary Fire Hazard Analysis

    International Nuclear Information System (INIS)

    Logan, Richard C.

    2001-01-01

    This fire hazard analysis identifies preliminary design and operations features, fire, and explosion hazards, and provides a reasonable basis to establish the design requirements of fire protection systems during development and emplacement phases of the subsurface repository. This document follows the Technical Work Plan (TWP) (CRWMS M and O 2001c) which was prepared in accordance with AP-2.21Q, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities''; Attachment 4 of AP-ESH-008, ''Hazards Analysis System''; and AP-3.11Q, ''Technical Reports''. The objective of this report is to establish the requirements that provide for facility nuclear safety and a proper level of personnel safety and property protection from the effects of fire and the adverse effects of fire-extinguishing agents

  10. Plasma brake model for preliminary mission analysis

    Science.gov (United States)

    Orsini, Leonardo; Niccolai, Lorenzo; Mengali, Giovanni; Quarta, Alessandro A.

    2018-03-01

    Plasma brake is an innovative propellantless propulsion system concept that exploits the Coulomb collisions between a charged tether and the ions in the surrounding environment (typically, the ionosphere) to generate an electrostatic force orthogonal to the tether direction. Previous studies on the plasma brake effect have emphasized the existence of a number of different parameters necessary to obtain an accurate description of the propulsive acceleration from a physical viewpoint. The aim of this work is to discuss an analytical model capable of estimating, with the accuracy required by a preliminary mission analysis, the performance of a spacecraft equipped with a plasma brake in a (near-circular) low Earth orbit. The simplified mathematical model is first validated through numerical simulations, and is then used to evaluate the plasma brake performance in some typical mission scenarios, in order to quantify the influence of the system parameters on the mission performance index.
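    For a back-of-the-envelope feel of this kind of preliminary mission analysis (not the authors' model), a constant along-track deceleration on a near-circular orbit can be propagated with the circular-orbit Gauss variational equation da/dt = -2 a_T / n. The sketch below assumes an invented 1 mN braking force on a 50 kg spacecraft; all numbers are purely illustrative.

```python
import math

MU_EARTH = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3               # mean Earth radius, m

def deorbit_time(h0_km, hf_km, force_N, mass_kg, dt=60.0):
    """Integrate da/dt = -2*a_T/n (circular-orbit Gauss equation) for a constant
    anti-velocity acceleration a_T until altitude drops from h0 to hf."""
    a = R_EARTH + h0_km * 1e3
    a_final = R_EARTH + hf_km * 1e3
    a_T = force_N / mass_kg            # assumed constant drag-like deceleration
    t = 0.0
    while a > a_final:
        n = math.sqrt(MU_EARTH / a**3)     # mean motion, rad/s
        a -= 2.0 * a_T / n * dt            # decay of the semi-major axis
        t += dt
    return t / 86400.0                     # days

# Illustrative case: 1 mN plasma-brake force, 50 kg spacecraft, 700 km -> 300 km.
print(f"{deorbit_time(700.0, 300.0, 1e-3, 50.0):.0f} days")
```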

  11. Preliminary Shielding Analysis for HCCB TBM Transport

    Science.gov (United States)

    Miao, Peng; Zhao, Fengchao; Cao, Qixiang; Zhang, Guoshu; Feng, Kaiming

    2015-09-01

    A preliminary shielding analysis on the transport of the Chinese helium cooled ceramic breeder test blanket module (HCCB TBM) from France back to China after being irradiated in ITER is presented in this contribution. Emphasis was placed on irradiation safety during transport. The dose rate calculated by MCNP/4C for the conceptual package design satisfies the relevant dose limits from IAEA that the dose rate 3 m away from the surface of the package containing low specific activity III materials should be less than 10 mSv/h. The change with location and the time evolution of dose rates after shutdown have also been studied. This will be helpful for devising the detailed transport plan of HCCB TBM back to China in the near future. supported by the Major State Basic Research Development Program of China (973 Program) (No. 2013GB108000)

  12. Data fusion qualitative sensitivity analysis

    International Nuclear Information System (INIS)

    Clayton, E.A.; Lewis, R.E.

    1995-09-01

    Pacific Northwest Laboratory was tasked with testing, debugging, and refining the Hanford Site data fusion workstation (DFW), with the assistance of Coleman Research Corporation (CRC), before delivering the DFW to the environmental restoration client at the Hanford Site. Data fusion is the mathematical combination (or fusion) of disparate data sets into a single interpretation. The data fusion software used in this study was developed by CRC. The data fusion software developed by CRC was initially demonstrated on a data set collected at the Hanford Site where three types of data were combined. These data were (1) seismic reflection, (2) seismic refraction, and (3) depth to geologic horizons. The fused results included a contour map of the top of a low-permeability horizon. This report discusses the results of a sensitivity analysis of data fusion software to variations in its input parameters. The data fusion software developed by CRC has a large number of input parameters that can be varied by the user and that influence the results of data fusion. Many of these parameters are defined as part of the earth model. The earth model is a series of 3-dimensional polynomials with horizontal spatial coordinates as the independent variables and either subsurface layer depth or values of various properties within these layers (e.g., compression wave velocity, resistivity) as the dependent variables

  13. Sensitivity Measurement of Transmission Computer Tomography: thePreliminary Experimental Study

    International Nuclear Information System (INIS)

    Widodo, Chomsin-S; Sudjatmoko; Kusminarto; Agung-BS Utomo; Suparta, Gede B

    2000-01-01

    This paper reports the result of a preliminary experimental study on a measurement method for the sensitivity of a computed tomography (CT) scanner. A CT scanner has been built at the Department of Physics, FMIPA UGM, and its performance, based on its sensitivity, was measured. The results showed that the measurement method for sensitivity is confirmed and may be developed further as a measurement standard. Although the CT scanner developed has a number of shortcomings, the analytical results from the sensitivity measurement suggest a number of repairs and improvements for the system so that improved reconstructed CT images can be obtained. (author)

  14. Risk and sensitivity analysis in relation to external events

    International Nuclear Information System (INIS)

    Alzbutas, R.; Urbonas, R.; Augutis, J.

    2001-01-01

    This paper presents a risk and sensitivity analysis of the impact of external events on safe operation in general and on the Ignalina Nuclear Power Plant safety systems in particular. The analysis is based on deterministic and probabilistic assumptions and on assessment of the external hazards. Real statistical data are used, as well as initial external event simulations. Preliminary screening criteria are applied. For the investigated external hazards, the analysis of the external event impact on safe NPP operation, the assessment of event occurrence, the sensitivity analysis, and recommendations for safety improvements are presented. Events such as aircraft crash, extreme rains and winds, forest fire and flying parts of the turbine are analysed. The models are developed and probabilities are calculated. As an example of sensitivity analysis, the model of aircraft impact is presented. The sensitivity analysis takes into account the uncertainty introduced by the external event and its model. Even when the external events analysis shows rather limited danger, the sensitivity analysis can identify the causes with the highest influence. Such possible variations can be significant in the future for the safety level and for risk-based decisions. Calculations show that external events cannot significantly influence the safety level of Ignalina NPP operation; however, the occurrence and propagation of the events can be substantially uncertain. (author)

  15. Probabilistic sensitivity analysis of biochemical reaction systems.

    Science.gov (United States)

    Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John

    2009-09-07

    Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.
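    The variance-based approach cited in the record (Saltelli et al.) can be illustrated with a minimal pick-freeze estimator of first-order Sobol' indices. The toy response below merely stands in for a biochemical reaction model; the function, sample size and uniform parameter ranges are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy stand-in for a biochemical response; columns are normalized rate parameters."""
    k1, k2, k3 = x[:, 0], x[:, 1], x[:, 2]
    return k1 * k2 + 0.3 * np.sin(2 * np.pi * k3) + 0.1 * k1**2

def sobol_first_order(model, d, n):
    """Saltelli-style pick-freeze estimate of first-order Sobol' indices."""
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]), ddof=1)
    S = np.empty(d)
    for i in range(d):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]              # replace column i of A by column i of B
        yABi = model(AB_i)
        S[i] = np.mean(yB * (yABi - yA)) / var_y   # Saltelli (2010) estimator
    return S

print(sobol_first_order(model, d=3, n=50_000))
```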

  16. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

    In this analysis, we will study the sensitivity due to a variation of the initial condition and of the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.

  17. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    Science.gov (United States)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
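    The complex-variable approach used for the DYMORE structural sensitivities is the classical complex-step derivative, f'(x) ≈ Im[f(x + ih)]/h, which avoids subtractive cancellation. A minimal sketch on an arbitrary analytic toy function (not DYMORE) is shown below.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step approximation: accurate to machine precision, no cancellation error."""
    return np.imag(f(x + 1j * h)) / h

# Toy analytic function standing in for a blade-response functional (not DYMORE).
f = lambda x: np.exp(x) * np.sin(3.0 * x) / (1.0 + x**2)

x0 = 0.7
exact = (np.exp(x0) * (np.sin(3*x0) + 3*np.cos(3*x0)) * (1 + x0**2)
         - np.exp(x0) * np.sin(3*x0) * 2*x0) / (1 + x0**2)**2
print(complex_step_derivative(f, x0), exact)   # the two values agree to ~16 digits
```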

  18. Activation analysis by filtered neutrons. Preliminary investigation

    International Nuclear Information System (INIS)

    Skarnemark, G.; Rodinson, T.; Skaalberg, M.; Tokay, R.K.

    1986-01-01

    In order to investigate whether the measurement sensitivity and precision of epithermal neutron activation analysis may be improved, different types of geological and biological test samples were irradiated. The test samples were enclosed in an extra filter of tungsten or sodium in order to reduce the flux of those neutrons that would otherwise induce interfering activity in the sample. The geological test samples consist of granites containing lanthanides which had been crushed in a tungsten carbide grinder. Normally such test samples show an interfering 187W activity. By use of a tungsten filter the activity was reduced by up to 60%, which resulted in a considerable improvement of the sensitivity and precision of the measurement. The biological test samples consisted of evaporated urine from patients treated with the cytotoxic drug cis-platinol. A reliable method to measure the platinum content has not existed so far. This method, however, enables platinum contents as low as about 0.1 ppm to be determined, which is quite adequate. To sum up, this preliminary study has demonstrated that activation analysis using filtered neutrons, correctly applied, is a satisfactory method of reducing interferences without complicated and time-consuming chemical separation procedures. (O.S.)

  19. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU time which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
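    The joint mean/dispersion idea can be illustrated with a crude numpy-only stand-in: an ordinary least-squares 'mean model' on the scalar inputs and an identity-link regression of squared residuals as the 'dispersion model', from which the total index of the functional input is estimated as E[Var(Y|X)]/Var(Y). This is a deliberate simplification of the GLM/GAM machinery described in the record; the toy code, basis functions and noise structure are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "computer code": two controllable scalar inputs x1, x2 plus a functional
# input whose effect only shows up as input-dependent scatter around the mean.
n = 5_000
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
eps = rng.normal(size=n)                       # stands in for the functional input
y = np.sin(2*np.pi*x1) + 0.5*x2**2 + (0.2 + 0.6*x1) * eps

# "Mean model": least squares on a small basis of the scalar inputs.
X = np.column_stack([np.ones(n), x1, x2, x1**2, x2**2, np.sin(2*np.pi*x1)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# "Dispersion model": squared residuals regressed on the scalar inputs
# (a crude identity-link stand-in for the GLM/GAM of the joint approach).
Z = np.column_stack([np.ones(n), x1, x2])
gamma, *_ = np.linalg.lstsq(Z, resid**2, rcond=None)
cond_var = Z @ gamma                           # estimate of Var(Y | x1, x2)

# Total index of the functional input: E[Var(Y|X)] / Var(Y).
print("S_T(functional input) ~", cond_var.mean() / y.var(ddof=1))
print("dispersion trend vs x1:", gamma[1])     # positive -> scatter grows with x1
```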

  20. Preliminary hazard analysis using sequence tree method

    International Nuclear Information System (INIS)

    Huang Huiwen; Shih Chunkuan; Hung Hungchih; Chen Minghuei; Yih Swu; Lin Jiinming

    2007-01-01

    A system-level PHA using the sequence tree method was developed to perform the SSA of safety-related digital I&C systems. The conventional PHA is a brainstorming session among experts on various portions of the system to identify hazards through discussions. However, this conventional PHA is not a systematic technique: the analysis results strongly depend on the experts' subjective opinions, and the analysis quality cannot be appropriately controlled. This research therefore developed a system-level, sequence-tree-based PHA, which can clarify the relationships among the major digital I&C systems. Two major phases are included in this sequence-tree-based technique. The first phase uses a table to analyze each event in SAR Chapter 15 for a specific safety-related I&C system, such as the RPS. The second phase uses a sequence tree to recognize which I&C systems are involved in the event, how the safety-related systems work, and how the backup systems can be activated to mitigate the consequences if the primary safety systems fail. In the sequence tree, the defense-in-depth echelons, including the Control echelon, Reactor trip echelon, ESFAS echelon, and Indication and display echelon, are arranged to construct the sequence tree structure. All the related I&C systems, including the digital systems and the analog back-up systems, are allocated to their specific echelon. With this system-centric, sequence-tree-based analysis, not only can preliminary hazards be identified systematically, but the vulnerability of the nuclear power plant can also be recognized. Therefore, an effective simplified D3 evaluation can be performed as well. (author)

  1. Risk Characterization uncertainties associated description, sensitivity analysis

    International Nuclear Information System (INIS)

    Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.

    2013-01-01

    This PowerPoint presentation covers risks at the estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, the formulation of guidelines for carcinogenic and genotoxic compounds, and risks to subpopulations.

  2. Object-sensitive Type Analysis of PHP

    NARCIS (Netherlands)

    Van der Hoek, Henk Erik; Hage, J

    2015-01-01

    In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the

  3. Preliminary Analysis and Selection of Mooring Solution Candidates

    DEFF Research Database (Denmark)

    Thomsen, Jonas Bjerg; Delaney, Martin

    This report covers a preliminary analysis of mooring solutions candidates for four large floating wave energy converters. The work is part of the EUDP project “Mooring Solutions for Large Wave Energy Converters” and is the outcome of "Work Package 3: Preliminary Analysis". The report further...... compose the "Milestone 4: Report on results of preliminary analysis and selection of final candidates. The report is produced by Aalborg University with input from the partner WECs Floating Power Plant, KNSwing, LEANCON and Wave Dragon. Tension Technology International (TTI) has provided a significant...

  4. Preliminary safety analysis of the Gorleben site

    International Nuclear Information System (INIS)

    Bracke, G.; Fischer-Appelt, K.

    2014-01-01

    The safety requirements governing the final disposal of heat-generating radioactive waste in Germany were implemented by the Federal Ministry of Environment, Natural Conservation and Nuclear Safety (BMU) in 2010. The Ministry considers as a fundamental objective the protection of man and the environment against the hazards of radioactive waste. Unreasonable burdens and obligation for future generations shall be avoided. The main safety principles are concentration and inclusion of radioactive and other pollutants in a containment-providing rock zone. Any release of radioactive nuclides may increase the risk for men and the environment only negligibly compared to natural radiation exposure. No intervention or maintenance work shall be necessary in the post-closure phase. Retrieval/recovery of the waste shall be possible up to 500 years after closure. The Gorleben salt dome has been discussed since the 1970's as a possible repository site for heat-generating radioactive waste in Germany. The objective of the project preliminary safety analysis of the Gorleben site (VSG) was to assess if repository concepts at the Gorleben site or other sites with a comparable geology could comply with these requirements based on currently available knowledge (Fischer-Appelt, 2013; Bracke, 2013). In addition to this it was assessed if methodological approaches can be used for a future site selection procedure and which technological and conceptual considerations can be transferred to other geological situations. The objective included the compilation and review of the available exploration data of the Gorleben site and on disposal in salt rock, the development of repository designs, and the identification of the needs for future R and D work and further site investigations. (authors)

  5. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    Science.gov (United States)

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more

  6. A hybrid approach for global sensitivity analysis

    International Nuclear Information System (INIS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-01-01

    Distribution based sensitivity analysis (DSA) computes sensitivity of the input random variables with respect to the change in distribution of output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, the computational issue associated with this method prohibits its use for complex structures involving costly finite element analysis. For addressing this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model which integrates the concepts of analysis of variance decomposition, extended bases and homotopy algorithm. By integrating PCFE into DSA, it is possible to considerably alleviate the computational burden. Three examples are presented to demonstrate the performance of the proposed approach for sensitivity analysis. For all the problems, proposed approach yields excellent results with significantly reduced computational effort. The results obtained, to some extent, indicate that proposed approach can be utilized for sensitivity analysis of large scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • Proposed approach integrates PCFE within distribution based sensitivity analysis. • Proposed approach is highly efficient.
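    Distribution-based (moment-independent) sensitivity can be illustrated without the PCFE surrogate by estimating Borgonovo-type delta indices directly by Monte Carlo: the shift between the unconditional output density and the density conditional on fixing one input, averaged over that input. The toy model, sample sizes and histogram binning below are assumptions for the example; the surrogate step that the paper uses to cut the computational cost is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    return x[:, 0]**2 + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]

def delta_index(model, d, i, n=20_000, n_cond=30, bins=50):
    """Borgonovo delta: 0.5 * E_Xi[ integral |f_Y(y) - f_{Y|Xi}(y)| dy ], via histograms."""
    X = rng.uniform(size=(n, d))
    y = model(X)
    edges = np.histogram_bin_edges(y, bins=bins)
    f_y, _ = np.histogram(y, bins=edges, density=True)
    widths = np.diff(edges)
    shifts = []
    for xi in rng.uniform(size=n_cond):        # outer loop over values of X_i
        Xc = rng.uniform(size=(n, d))
        Xc[:, i] = xi                          # condition on X_i = xi
        f_cond, _ = np.histogram(model(Xc), bins=edges, density=True)
        shifts.append(0.5 * np.sum(np.abs(f_y - f_cond) * widths))
    return float(np.mean(shifts))

for i in range(3):
    print(f"delta_{i} ~ {delta_index(model, d=3, i=i):.3f}")
```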

  7. Sensitivity analysis of a PWR pressurizer

    International Nuclear Information System (INIS)

    Bruel, Renata Nunes

    1997-01-01

    A sensitivity analysis relative to the parameters and to the modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by implementing the key parameters and theoretical modellings, which generated a comprehensive matrix of influences of each change analysed. The major influences that have been observed were the flashing phenomenon and the steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)

  8. The Scrap Tire Problem: A Preliminary Economic Analysis (1985)

    Science.gov (United States)

    The purpose of the study was to conduct a preliminary economic analysis of the social benefits of EPA action to require more appropriate disposal of scrap tires versus the social costs of such an action.

  9. Original Article PRELIMINARY BIOAUTOGRAPHIC ANALYSIS OF ...

    African Journals Online (AJOL)

    Sierra Leone 2Department of Pharmaceutical Chemistry, Faculty of Pharmacy, ... the seeds are used in the treatment of skin infections. ... Screening with DPPH showed prominent antioxidant spots on silica at Rf 0.8, 0.5, 0.4 .... underpins conditions like rheumatoid arthritis, ..... As a follow-up to the preliminary TLC studies.

  10. Preliminary analysis of public dose from CFETR gaseous tritium release

    Energy Technology Data Exchange (ETDEWEB)

    Nie, Baojie [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui 230031 (China); University of Science and Technology of China, Hefei, Anhui 230027 (China); Ni, Muyi, E-mail: muyi.ni@fds.org.cn [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui 230031 (China); Lian, Chao; Jiang, Jieqiong [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui 230031 (China)

    2015-02-15

    Highlights: • Present the amounts and dose limits of tritium release to the environment for CFETR. • Perform a preliminary simulation of radiation dose for gaseous tritium release. • Key parameters, including soil type, wind speed, stability class, effective release height and age, were analyzed for sensitivity. • Tritium release amount is recalculated consistently with the dose limit in Chinese regulation for CFETR. - Abstract: To demonstrate tritium self-sufficiency and other engineering issues, the scientific conception of the Chinese Fusion Engineering Test Reactor (CFETR) has been proposed in China, in parallel with ITER and ahead of a DEMO reactor. Tritium environmental safety for CFETR is an important issue and must be evaluated because of the huge amount of tritium cycling in the reactor. In this work, different tritium release scenarios of CFETR and dose limit regulations in China are introduced, and the public dose is preliminarily analyzed under normal and accidental events. Furthermore, after a sensitivity analysis of the key input parameters, the public dose is reevaluated based on extreme parameters. Finally, the tritium release amount is recalculated consistently with the dose limit in Chinese regulation for CFETR, which provides a reference for the tritium system design of CFETR.

  11. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  12. Sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.

    2014-01-01

    Life cycle assessments require many input parameters and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case studies. Two

  13. Ethical sensitivity in professional practice: concept analysis.

    Science.gov (United States)

    Weaver, Kathryn; Morse, Janice; Mitcham, Carl

    2008-06-01

    This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offers an inclusive view of ethical sensitivity that addresses some of the limitations with prior conceptualizations.

  14. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to perform the sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices and makes use of a metamodel. The paper also presents an application to a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)

  15. Preliminary Sensitivity Study on Gas-Cooled Reactor for NHDD System Using MARS-GCR

    International Nuclear Information System (INIS)

    Lee, Seung Wook; Jeong, Jae Jun; Lee, Won Jae

    2005-01-01

    A Gas-Cooled Reactor (GCR) is considered one of the most outstanding tools for massive hydrogen production without CO2 emission. To date, two types of GCR are regarded as viable nuclear reactors for hydrogen production: the Prismatic Modular Reactor (PMR) and the Pebble Bed Reactor (PBR). In this paper, a preliminary sensitivity study on the two types of GCR is carried out using MARS-GCR to find out the effect on the peak fuel and reactor pressure vessel (RPV) temperatures of varying the reactor inlet temperature, outlet temperature, and system pressure for both the PMR and the PBR.

  16. Sensitivity analysis in optimization and reliability problems

    International Nuclear Information System (INIS)

    Castillo, Enrique; Minguez, Roberto; Castillo, Carmen

    2008-01-01

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods

  17. Sensitivity analysis in optimization and reliability problems

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es

    2008-12-15

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.

  18. Techniques for sensitivity analysis of SYVAC results

    International Nuclear Information System (INIS)

    Prust, J.O.

    1985-05-01

    Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their application to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends the development now of a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)
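    The Kruskal-Wallis idea mentioned in the recommendations can be sketched as a simple screening test: split each input into quantile classes and test whether the dose samples differ significantly across classes. The dose model, number of classes and scipy-based test below are illustrative assumptions, not the SYVAC implementation.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(3)

# Hypothetical dose model: strongly driven by x0, weakly by x1, not at all by x2.
n = 2_000
X = rng.uniform(size=(n, 3))
dose = np.exp(3.0 * X[:, 0]) + 0.3 * X[:, 1] + rng.normal(scale=0.2, size=n)

# Kruskal-Wallis screening: split each input into quantile classes and test
# whether the dose distributions differ across the classes.
n_classes = 5
for i in range(X.shape[1]):
    edges = np.quantile(X[:, i], np.linspace(0, 1, n_classes + 1))
    labels = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1, 0, n_classes - 1)
    groups = [dose[labels == c] for c in range(n_classes)]
    stat, p = kruskal(*groups)
    print(f"x{i}: H = {stat:8.1f},  p = {p:.3g}")
```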

  19. Life cycle analysis in preliminary design stages

    OpenAIRE

    Agudelo , Lina-Maria; Mejía-Gutiérrez , Ricardo; Nadeau , Jean-Pierre; PAILHES , Jérôme

    2014-01-01

    International audience; In a design process the product is decomposed into systems along the disciplinary lines. Each stage has its own goals and constraints that must be satisfied and has control over a subset of design variables that describe the overall system. When using different tools to initiate a product life cycle, including the environment and impacts, it is noticeable that there is a gap in tools that link the stages of preliminary design and the stages of materialization. Differen...

  20. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
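    A minimal numpy-only illustration of why smoothing-based procedures can outperform linear regression: a local-linear (LOESS-like) smoother of the output on each input recovers a non-monotonic effect that a straight-line fit misses. The tri-cube weights, fixed span and toy model below are assumptions for the example, not the stepwise procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def loess_r2(x, y, span=0.4):
    """Variance of y explained by a local-linear (LOESS-like) smooth of y on x."""
    n = len(x)
    k = max(int(span * n), 3)
    yhat = np.empty(n)
    for j in range(n):
        d = np.abs(x - x[j])
        idx = np.argsort(d)[:k]                     # k nearest neighbours
        w = (1 - (d[idx] / d[idx].max())**3)**3     # tri-cube weights
        A = np.column_stack([np.ones(k), x[idx] - x[j]])
        W = np.diag(w)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
        yhat[j] = beta[0]                           # local fit evaluated at x[j]
    return 1.0 - np.var(y - yhat) / np.var(y)

def linear_r2(x, y):
    return np.corrcoef(x, y)[0, 1]**2

n = 400
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = x1**2 + 0.3 * x2 + rng.normal(scale=0.05, size=n)   # x1 acts non-monotonically

for name, x in [("x1", x1), ("x2", x2)]:
    print(f"{name}: linear R2 = {linear_r2(x, y):.2f},  LOESS R2 = {loess_r2(x, y):.2f}")
```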

  1. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  2. Review of Preliminary Analysis Techniques for Tension Structures.

    Science.gov (United States)

    1984-02-01

    however, a linear dynamic analysis can be conducted for purposes of preliminary design, relative to the static configuration. It is noted that the amount of... [Table-of-contents fragment: Chapter 3, Preliminary Design of Tension Structures; 3.1 Cable Systems; 3.1.1 Singly-Connected Segments; 3.1.2 Multiply-Connected Segments; 3.1.3 Linearized Dynamics of Cable Systems]

  3. Dynamic Resonance Sensitivity Analysis in Wind Farms

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei

    2017-01-01

    (PFs) are calculated by critical eigenvalue sensitivity analysis versus the entries of the MIMO matrix. The PF analysis locates the bus that most excites the resonances, which can be the best location to install passive or active filters to reduce harmonic resonance problems. Time...

  4. Sensitivity functions for uncertainty analysis: Sensitivity and uncertainty analysis of reactor performance parameters

    International Nuclear Information System (INIS)

    Greenspan, E.

    1982-01-01

    This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to ''as-built'' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory
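    The flavour of sensitivities for ratios of linear flux functionals can be shown on a toy linear-system analog: with A(p) φ = s and R = ⟨w1, φ⟩/⟨w2, φ⟩, the derivative dR/dp follows from dφ/dp = -A⁻¹ (∂A/∂p) φ. The sketch below uses the direct (forward) formulation rather than the adjoint form discussed in the chapter, and the matrices, source and weight vectors are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "transport" problem: A(p) phi = s, with one cross-section-like parameter p
# entering the diagonal of A.  R = <w1, phi> / <w2, phi> mimics a reaction-rate ratio.
n = 6
B = rng.uniform(0.0, 0.2, size=(n, n))
s = rng.uniform(1.0, 2.0, size=n)
w1, w2 = rng.uniform(size=n), rng.uniform(size=n)
D = np.diag(rng.uniform(1.0, 2.0, size=n))     # dA/dp (p multiplies this diagonal)

def A(p):
    return np.eye(n) + B + p * D

def ratio(p):
    phi = np.linalg.solve(A(p), s)
    return (w1 @ phi) / (w2 @ phi)

def ratio_sensitivity(p):
    """Direct (forward) sensitivity dR/dp, with dphi/dp from A dphi/dp = -(dA/dp) phi."""
    phi = np.linalg.solve(A(p), s)
    dphi = -np.linalg.solve(A(p), D @ phi)
    R = (w1 @ phi) / (w2 @ phi)
    return (w1 @ dphi) / (w2 @ phi) - R * (w2 @ dphi) / (w2 @ phi)

p0, h = 0.5, 1e-6
print("direct     :", ratio_sensitivity(p0))
print("finite diff:", (ratio(p0 + h) - ratio(p0 - h)) / (2 * h))
```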

  5. Preliminary evidence for a change in spectral sensitivity of the circadian system at night

    Directory of Open Access Journals (Sweden)

    Parsons Robert H

    2005-12-01

    Full Text Available Abstract Background It is well established that the absolute sensitivity of the suprachiasmatic nucleus to photic stimulation received through the retino-hypothalamic tract changes throughout the 24-hour day. It is also believed that a combination of classical photoreceptors (rods and cones) and melanopsin-containing retinal ganglion cells participates in circadian phototransduction, with a spectral sensitivity peaking between 440 and 500 nm. It is still unknown, however, whether the spectral sensitivity of the circadian system also changes throughout the solar day. Reported here is a new study that was designed to determine whether the spectral sensitivity of the circadian retinal phototransduction mechanism, measured through melatonin suppression and iris constriction, varies at night. Methods Human adult males were exposed to a high-pressure mercury lamp [450 lux (170 μW/cm2) at the cornea] and an array of blue light emitting diodes [18 lux (29 μW/cm2) at the cornea] during two nighttime experimental sessions. Both melatonin suppression and iris constriction were measured during and after a one-hour light exposure just after midnight and just before dawn. Results An increase in the percentage of melatonin suppression and an increase in pupil constriction for the mercury source relative to the blue light source at night were found, suggesting a temporal change in the contribution of photoreceptor mechanisms leading to melatonin suppression and, possibly, iris constriction by light in humans. Conclusion The preliminary data presented here suggest a change in the spectral sensitivity of circadian phototransduction mechanisms at two different times of the night. These findings are hypothesized to be the result of a change in the sensitivity of the melanopsin-expressing retinal ganglion cells to light during the night.

  6. Probabilistic sensitivity analysis in health economics.

    Science.gov (United States)

    Baio, Gianluca; Dawid, A Philip

    2015-12-01

    Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.

  7. TOLERANCE SENSITIVITY ANALYSIS: THIRTY YEARS LATER

    Directory of Open Access Journals (Sweden)

    Richard E. Wendell

    2010-12-01

    Full Text Available Tolerance sensitivity analysis was conceived in 1980 as a pragmatic approach to effectively characterize a parametric region over which objective function coefficients and right-hand-side terms in linear programming could vary simultaneously and independently while maintaining the same optimal basis. As originally proposed, the tolerance region corresponds to the maximum percentage by which coefficients or terms could vary from their estimated values. Over the last thirty years the original results have been extended in a number of ways and applied in a variety of applications. This paper is a critical review of tolerance sensitivity analysis, including extensions and applications.

  8. Preliminary spatial analysis of combined BATSE/Ulysses gamma-ray burst locations

    International Nuclear Information System (INIS)

    Kippen, R. Marc; Hurley, Kevin; Pendleton, Geoffrey N.

    1998-01-01

    We present the preliminary spatial analysis of 278 bursts that have been localized by BATSE and the two-spacecraft Compton/Ulysses Interplanetary Network. The large number and superior accuracy of the combined BATSE/Ulysses locations provides improved sensitivity to small-angle source properties. We find that the locations are consistent with large- and small-scale isotropy, with no significant small-angle clustering. We constrain the fraction of sources in clusters and discuss the implications for burst repetition

  9. Preliminary ATWS analysis for the IRIS PRA

    International Nuclear Information System (INIS)

    Maddalena Barra; Marco S Ghisu; David J Finnicum; Luca Oriani

    2005-01-01

    Full text of publication follows: The pressurized light water cooled, medium power (1000 MWt) IRIS (International Reactor Innovative and Secure) has been under development for four years by an international consortium of over 21 organizations from ten countries. The plant conceptual design was completed in 2001 and the preliminary design is nearing completion. The pre-application licensing process with NRC started in October, 2002. IRIS has been primarily focused on establishing a design with innovative safety characteristics. The first line of defense in IRIS is to eliminate event initiators that could potentially lead to core damage. In IRIS, this concept is implemented through the 'safety by design' approach, which makes it possible to minimize the number and complexity of the safety systems and required operator actions. The end result is a design with significantly reduced complexity and improved operability, and extensive plant simplifications to enhance construction. To support the optimization of the plant design and confirm the effectiveness of the safety by design approach in mitigating or eliminating events, and thus providing a significant reduction in the probability of severe accidents, the PRA is being used as an integral part of the design process. A preliminary but extensive Level 1 PRA model has been developed to support the pre-application licensing of the IRIS design. As a result of the Preliminary IRIS PRA, an optimization of the design from a reliability point of view was completed, and an extremely low (about 1.2E-8) core damage frequency (CDF) was assessed to confirm the impact of the safety by design approach. This first assessment is a result of a PRA model including internal initiating events. During this assessment, several assumptions were necessary to complete the CDF evaluation. In particular, Anticipated Transients Without Scram (ATWS) were not included in this initial assessment, because their contribution to core damage frequency was assumed

  10. 242Pu: Preliminary evaluation with consideration of 240Pu, and some sensitivity results

    International Nuclear Information System (INIS)

    Jary, J.; Lagrange, C.; Philis, C.

    1978-01-01

    A preliminary evaluation of 242 Pu nuclear data is presented for the neutron energy range from 10 keV to 20 MeV. The fission cross section is based upon recent experimental measurements on 242 Pu. The remaining cross sections have been calculated using various nuclear models, with parameters obtained mainly from both fits to 240 Pu experimental data and general considerations on the actinides. Particular care has been taken with the direct interactions. The laws of the secondary neutron energy spectra and the average number of neutrons produced per fission have been evaluated. The results have been placed in ENDF/B-IV format and combined with the low energy region of the ENDF/B-IV MAT = 1161 data to complete the evaluation over the whole energy range 10 -5 eV - 20 MeV. Finally, the sensitivities of some of these nuclear data available for reactor calculations are given in terms of the variation of the calculated critical masses.

  11. Effects of sensitive electrical stimulation based cueing in Parkinson's disease: a preliminary study

    Directory of Open Access Journals (Sweden)

    Benoît Sijobert

    2016-06-01

    Full Text Available This study aims to investigate the effect of a sensitive cueing on Freezing of Gait (FOG) and gait disorders in subjects suffering from Parkinson's disease (PD). 13 participants with Parkinson's disease were equipped with an electrical stimulator and a foot-mounted inertial measurement unit (IMU). An IMU-based algorithm triggered in real time an electrical stimulus applied to the arch of the foot at heel-off detection. Starting from standing, subjects were asked to walk at their preferred speed on a path comprising 5 m straight, u-turn and walk-around tasks. Cueing globally decreased the time to achieve the different tasks in all the subjects. In “freezer” subjects, the time to complete the entire path was reduced by 19%. FOG events occurrence was lowered by 12% compared to baseline before and after cueing. This preliminary work showed a positive global effect of an electrical stimulation based cueing on gait and FOG in PD.

  12. Sensitivity Analysis of Centralized Dynamic Cell Selection

    DEFF Research Database (Denmark)

    Lopez, Victor Fernandez; Alvarez, Beatriz Soret; Pedersen, Klaus I.

    2016-01-01

    and a suboptimal optimization algorithm that nearly achieves the performance of the optimal Hungarian assignment. Moreover, an exhaustive sensitivity analysis with different network and traffic configurations is carried out in order to understand what conditions are more appropriate for the use of the proposed...

  13. Sensitivity analysis in a structural reliability context

    International Nuclear Information System (INIS)

    Lemaitre, Paul

    2014-01-01

    This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that allows a complex physical phenomenon to be reproduced. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output might be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not in the reliability scope. This thesis' aim is to test existing sensitivity analysis methods and to propose more efficient original methods. A bibliographical review of sensitivity analysis on the one hand, and of the estimation of small failure probabilities on the other hand, is first proposed. This step raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one proposes to make use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input's original density and the density given the subset reached. A more general and original methodology, reflecting the impact of the input density modification on the failure probability, is then explored. The proposed methods are then applied on the CWNR case, which motivates this thesis. (author)
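    The first ranking method (binary classifiers based on random forests) can be sketched with scikit-learn: label Monte Carlo samples as failed or safe with a toy limit-state function and rank the inputs by impurity-based feature importances. The limit-state function and forest settings below are assumptions for the example, not the thesis' methodology in detail.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)

# Toy limit-state function: failure when g(x) < 0.  The third input is inert by construction.
n = 20_000
X = rng.normal(size=(n, 3))
g = 2.0 - X[:, 0] - 0.5 * X[:, 1]**2 + 0.0 * X[:, 2]
failed = (g < 0).astype(int)
print("failure probability ~", failed.mean())

# Random-forest classifier of failure; impurity-based importances rank the inputs.
clf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20, random_state=0)
clf.fit(X, failed)
for i, imp in enumerate(clf.feature_importances_):
    print(f"x{i}: importance = {imp:.3f}")
```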

  14. Applications of advances in nonlinear sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Werbos, P J

    1982-01-01

    The following paper summarizes the major properties and applications of a collection of algorithms involving differentiation and optimization at minimum cost. The areas of application include the sensitivity analysis of models, new work in statistical or econometric estimation, optimization, artificial intelligence and neuron modelling.
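    The 'differentiation at minimum cost' referred to here is what is now called reverse-mode automatic differentiation (backpropagation). A deliberately minimal sketch, supporting only addition, multiplication and sine, is given below; it is illustrative and not an implementation from the paper.

```python
import math

class Var:
    """Minimal reverse-mode autodiff node: a value plus local backward rules."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value, [(self, other.value), (other, self.value)])

    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])

    def backward(self, seed=1.0):
        # Accumulate the seed and push it to parents weighted by local derivatives.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# y = x1 * x2 + sin(x1):  dy/dx1 = x2 + cos(x1),  dy/dx2 = x1
x1, x2 = Var(2.0), Var(3.0)
y = x1 * x2 + x1.sin()
y.backward()
print(y.value, x1.grad, x2.grad)   # expect grads 3 + cos(2) ~ 2.584 and 2.0
```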

  15. *Corresponding Author Sensitivity Analysis of a Physiochemical ...

    African Journals Online (AJOL)

    Michael Horsfall

    The numerical method of sensitivity or the principle of parsimony ... analysis is a widely applied numerical method often being used in the .... Chemical Engineering Journal 128(2-3), 85-93. Amod S ... coupled 3-PG and soil organic matter.

  16. Preliminary safety design analysis of KALIMER

    Energy Technology Data Exchange (ETDEWEB)

    Suk, Soo Dong; Kwon, Y. M.; Kim, K. D. [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-03-01

    The national long-term R and D program updated in 1997 requires the Korea Atomic Energy Research Institute (KAERI) to complete by the year 2006 the basic design of the Korea Advanced Liquid Metal Reactor (KALIMER), along with supporting R and D work, with the capability of resolving the issue of spent fuel storage as well as with significantly enhanced safety. KALIMER is a 150 MWe pool-type sodium cooled prototype reactor that uses metallic fuel. The conceptual design is currently under way to establish a self-consistent design meeting a set of the major safety design requirements for accident prevention. Current emphases include inherent and passive means of negative reactivity insertion and decay heat removal, high shutdown reliability, prevention of and protection from sodium chemical reaction, and high seismic margin, among others. All of these requirements affect the reactor design significantly and involve supporting R and D programs of substance. This document first introduces the set of safety design requirements and accident evaluation criteria established for the conceptual design of KALIMER and then summarizes some of the preliminary results of the engineering and design analyses performed for the safety of KALIMER. 19 refs., 19 figs., 6 tabs. (Author)

  17. Preliminary shielding analysis of VHTR reactors

    International Nuclear Information System (INIS)

    Flaspoehler, Timothy M.; Petrovic, Bojan

    2011-01-01

    Over the last 20 years a number of methods have been established for automated variance reduction in Monte Carlo shielding simulations. Hybrid methods rely on deterministic adjoint and/or forward calculations to generate these parameters. In the present study, we use the FWCADIS method implemented in MAVRIC sequence of the SCALE6 package to perform preliminary shielding analyses of a VHTR reactor. MAVRIC has been successfully used by a number of researchers for a range of shielding applications, including modeling of LWRs, spent fuel storage, radiation field throughout a nuclear power plant, study of irradiation facilities, and others. However, experience in using MAVRIC for shielding studies of VHTRs is more limited. Thus, the objective of this work is to contribute toward validating MAVRIC for such applications, and identify areas for potential improvement. A simplified model of a prismatic VHTR has been devised, based on general features of the 600 MWt reactor considered as one of the NGNP options. Fuel elements have been homogenized, and the core region is represented as an annulus. However, the overall mix of materials and the relatively large dimensions of the spatial domain challenging the shielding simulations have been preserved. Simulations are performed to evaluate fast neutron fluence, dpa, and other parameters of interest at relevant positions. The paper will investigate and discuss both the effectiveness of the automated variance reduction, as well as applicability of physics model from the standpoint of specific VHTR features. (author)

  18. Reproduction of the Yucca Mountain Project TSPA-LA Uncertainty and Sensitivity Analyses and Preliminary Upgrade of Models

    Energy Technology Data Exchange (ETDEWEB)

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis; Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis

    2016-09-01

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest version 11.1. All of the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, as documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and of the distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  19. Preliminary analysis of B. E. C. I

    Energy Technology Data Exchange (ETDEWEB)

    Sugimoto, H; Sato, S [Waseda Univ., Tokyo (Japan). Science and Engineering Research Lab.; Saito, T; Noma, M; Matsubayashi, T

    1974-10-01

    An emulsion chamber (B.E.C.I.) with a generating layer, mounted on a balloon, was flown as a preliminary experiment in May 1973. The objectives of this experiment were (1) the observation of high energy cosmic rays, (2) the study of ultra-high energy multiple generation phenomena, and (3) the study of ultra-high energy heavy ion nuclear reactions. The emulsion chamber comprises three portions. The upper portion is 130 sheets arranged vertically at 3 mm intervals; each sheet is a 1,500 μm methacrylic base coated on one side with 200 μm emulsion. The middle portion comprises horizontally arranged 800 μm methacrylic bases coated on both sides with 50 μm emulsion, with a 1 mm methacrylic sheet inserted every five bases. The lower portion comprises first five layers of a sandwich of 1 mm lead sheet and 800 μm methacrylic base coated on both sides with 50 μm emulsion, and then ten layers of a sandwich of 2 mm lead sheet, 800 μm methacrylic base coated on one side with 50 μm emulsion, and X-ray film of N type. Cascades having energy E_0 > 400 GeV (according to the scanning efficiency of the lower E.C.C.), events having incident energy E_0 > 3 TeV among the jets occurring in the lower E.C.C., and events having incident energy E_0 > 10 TeV among the jets occurring in the generating layer have been observed. The angular distribution of the secondary charged particles of jets produced in the generating layer can be obtained accurately.

  20. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce the final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  1. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-12-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce the final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.
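    The two records above index the same article. As a companion illustration of the linear program that DEA sensitivity analysis perturbs, the sketch below solves the basic input-oriented CCR multiplier model for each DMU with scipy; it is a single-stage sketch, not the authors' combined two-stage model or their efficiency-preservation conditions, and the DMU data are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal single-stage CCR efficiency sketch (multiplier form) to illustrate the
# LP whose solution a DEA sensitivity analysis perturbs; the two-stage combined
# model of the paper is more elaborate, and the data below are hypothetical.

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k. X: (n_dmu, n_in), Y: (n_dmu, n_out)."""
    n, m = X.shape
    _, s = Y.shape
    # decision variables: [u (output weights), v (input weights)]
    c = np.concatenate([-Y[k], np.zeros(m)])           # maximize u.y_k
    A_ub = np.hstack([Y, -X])                          # u.Y_j - v.X_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[k]])[None]   # normalization v.x_k = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return -res.fun

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 4.0]])     # two inputs per DMU (made up)
Y = np.array([[1.0], [1.0], [1.5]])                    # one output per DMU (made up)
for k in range(3):
    print(f"DMU {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
```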

  2. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  3. Practical Recommendations for the Preliminary Design Analysis of ...

    African Journals Online (AJOL)

    Interior-to-exterior shear ratios for equal and unequal bay frames, as well as column inflection points were obtained to serve as practical aids for preliminary analysis/design of fixed-feet multistory sway frames. Equal and unequal bay five story frames were analysed to show the validity of the recommended design ...

  4. Preliminary analysis of patent trends for magnetic fusion technology

    International Nuclear Information System (INIS)

    Levine, L.O.; Ashton, W.B.; Campbell, R.S.

    1984-02-01

    This study presents a preliminary analysis of development trends in magnetic fusion technology based on data from US patents. The research is limited to identification and description of general patent activity and ownership characteristics for 373 patents. The results suggest that more detailed studies of fusion patents could provide useful R and D planning information

  5. Licensing support system preliminary needs analysis: Volume 1

    International Nuclear Information System (INIS)

    1989-01-01

    This Preliminary Needs Analysis, together with the Preliminary Data Scope Analysis (next in this series of reports), is a first effort under the LSS Design and Implementation Contract toward developing a sound requirements foundation for subsequent design work. Further refinements must be made before requirements can be specified in sufficient detail to provide a basis for suitably specific system specifications. This preliminary analysis of the LSS requirements has been divided into a "needs" and a "data scope" portion only for project management and scheduling reasons. The Preliminary Data Scope Analysis will address all issues concerning the content and size of the LSS data base, providing the requirements basis for data acquisition, cataloging and storage sizing specifications. This report addresses all other requirements for the LSS. The LSS consists of both computer subsystems and non-computer archives. This study addresses only the computer subsystems, focusing on the Access Subsystems. After providing background on previous LSS-related work, this report summarizes the findings from previous examinations of needs and describes a number of other requirements that have an impact on the LSS. The results of interviews conducted for this report are then described and analyzed. The final section of the report brings all of the key findings together and describes how these needs analyses will continue to be refined and utilized in on-going design activities. 14 refs., 2 figs., 1 tab

  6. Preliminary thermal and stress analysis of the SINQ window

    International Nuclear Information System (INIS)

    Heidenreich, G.

    1991-01-01

    Preliminary results of a finite element analysis for the SINQ proton beam window are presented. Temperatures and stresses are calculated in an axisymmetric model. As a result of these calculations, the H2O-cooled window (safety window) could be redesigned in such a way that plastic deformation resulting from excessive stress in some areas is avoided. (author)

  7. Preliminary Integrated Safety Analysis Status Report

    International Nuclear Information System (INIS)

    Gwyn, D.

    2001-01-01

    This report provides the status of the potential Monitored Geologic Repository (MGR) Integrated Safety Analysis (ISA) by identifying the initial work scope scheduled for completion during the ISA development period, the schedules associated with the tasks identified, safety analysis issues encountered, and a summary of accomplishments during the reporting period. This status covers the period from October 1, 2000 through March 30, 2001

  8. Preliminary analysis of alternative fuel cycles for proliferation evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Steindler, M. J.; Ripfel, H. C.F.; Rainey, R. H.

    1977-01-01

    The ERDA Division of Nuclear Research and Applications proposed 67 nuclear fuel cycles for assessment as to their nonproliferation potential. The object of the assessment was to determine which fuel cycles pose inherently low risk for nuclear weapon proliferation while retaining the major benefits of nuclear energy. This report is a preliminary analysis of these fuel cycles to develop the fuel-recycle data that will complement reactor data, environmental data, and political considerations, which must be included in the overall evaluation. This report presents the preliminary evaluations from ANL, HEDL, ORNL, and SRL and is the basis for a continuing in-depth study. (DLC)

  9. antibacterial properties and preliminary phytochemical analysis

    African Journals Online (AJOL)

    DR. AMINU

    2Department of Pharmaceutical Chemistry, Faculty of Pharmacy, University of Benin, Benin City. *Correspondence ... phytochemical analysis of the dried leaves extracts revealed the presence of alkaloids, ... for the synthesis of useful drugs.

  10. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
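    A minimal sketch of the probabilistic multivariate idea described above, under strong simplifying assumptions: sample an uncertain parameter of a small hypothetical two-state, two-action MDP, re-solve the MDP for each draw by value iteration, and report the fraction of draws in which the base-case optimal policy remains optimal (one point of a policy acceptability curve). The MDP, rewards and Beta prior are illustrative, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(0)

def value_iteration(P, R, gamma=0.97, tol=1e-8):
    """P[a, s, s'] transition probs, R[a, s] rewards; returns the optimal policy."""
    nA, nS, _ = P.shape
    V = np.zeros(nS)
    while True:
        Q = R + gamma * np.einsum('ast,t->as', P, V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=0)
        V = V_new

def build_mdp(p_progress):
    """Toy 2-state model (healthy, sick); actions: 0 = wait, 1 = treat."""
    P = np.array([[[1 - p_progress, p_progress], [0.0, 1.0]],   # wait
                  [[0.95, 0.05],                 [0.6, 0.4]]])  # treat
    R = np.array([[1.0, 0.2],      # rewards (e.g., QALYs) if waiting
                  [0.9, 0.4]])     # rewards if treating (treatment disutility)
    return P, R

base_policy = value_iteration(*build_mdp(p_progress=0.2))
agree, n_draws = 0, 2000
for _ in range(n_draws):
    p = rng.beta(2, 8)                        # uncertain progression probability
    if np.array_equal(value_iteration(*build_mdp(p)), base_policy):
        agree += 1
print("base-case policy:", base_policy)
print("policy acceptability:", agree / n_draws)
```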

  11. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.

  12. Preliminary conceptual design and analysis on KALIMER reactor structures

    International Nuclear Information System (INIS)

    Kim, Jong Bum

    1996-10-01

    The objectives of this study are to perform preliminary conceptual design and structural analyses for the KALIMER (Korea Advanced Liquid Metal Reactor) reactor structures to assess the design feasibility and to identify detailed analysis requirements. Because KALIMER thermal hydraulic system analysis results and neutronic analysis results are not available at present, only limited preliminary structural analyses have been performed with assumptions on the thermal loads. The responses of the reactor vessel and reactor internal structures were based on the temperature difference between core inlet and outlet and on engineering judgment. Thermal stresses from the assumed temperatures were calculated using the ANSYS code through parametric finite element heat transfer and elastic stress analyses. While, based on the results of the preliminary conceptual design and structural analyses, the ASME Code limits for the reactor structures were satisfied for the pressure boundary, the need for inelastic analyses was indicated for evaluating the design adequacy of the support barrel and the thermal liner. To reduce thermal striping effects in the bottom area of the UIS due to sodium flowing up from the reactor core, installation of an Inconel-718 liner in the bottom area was proposed, and to mitigate thermal shock loads, an additional stainless steel liner was also suggested. The design feasibility of these measures was validated through simplified preliminary analyses. In the conceptual design phase, these results will be implemented in the design of the reactor structures and the reactor internal structures in conjunction with the thermal hydraulic, neutronic, and seismic analysis results. 4 tabs., 24 figs., 4 refs. (Author)

  13. Experience with PET FDG - Preliminary analysis

    International Nuclear Information System (INIS)

    Massardo, Teresa; Jofre, Josefina; Canessa, Jose; Gonzalez, Patricio; Humeres, Pamela; Sierralta, Paulina; Galaz, Rodrigo; Miranda, Karina

    2004-01-01

    Full text: The objective of this preliminary communication was to analyse the indications and data for the initial group of patients studied with the first dedicated PET scanner in the country, at Hospital Militar in Santiago, Chile. The main application of positron emission tomography (PET) with 18-Fluoro deoxyglucose (FDG) is related to the management of oncological patients. We studied 136 patients, 131 (97%) with known or suspected malignant disease and the remaining 5 with cardiological or neuropsychiatric disease. Ten patients were controlled diabetics (1 insulin dependent). Their mean age was 51.6±18 years, ranging from 6 to 84 years, and 65% were female. A total of 177 scans were acquired using a dedicated PET system (Siemens HR+ with 4 mm resolution). The mean F18-FDG injected dose was 477±107 MBq (12.9±2.9 mCi). Mean blood glucose levels, measured prior to the injection, were 94±17 mg/dl (range 62-161). F18-FDG was obtained from the IBA Cyclone 18/9 cyclotron installed at the Chilean Agency of Nuclear Energy, about 15 miles from the clinical PET facility. PET studies were analyzed visually by at least 4 independent observers. The standardized uptake value (SUV) was calculated in some cases. Image fusion of FDG images with recent anatomical (CT, MRI) studies was performed where available. The data acquisition protocol consisted of 7-8 beds/study from head to mid-thighs, with 6-7-min/bed acquisitions and 36% transmission with germanium-68 rods. Data were reconstructed with a standard OSEM protocol. The main indications included pulmonary lesions in 31%, gastrointestinal cancers in 21%, melanoma in 13% and lymphoma in 9% of patients. The remaining were of breast, thyroid, testes, ovary, musculoskeletal (soft tissue and bone), brain tumour etc. Abnormal focal tracer uptake was observed in 83/131 oncological patients, 54% corroborating the clinical diagnosis of primary tumor or recurrence while 46% showed new metastatic localization. FDG scans were normal in 36/131 patients. In 9 patients

  14. Experience with PET FDG - Preliminary analysis

    Energy Technology Data Exchange (ETDEWEB)

    Massardo, Teresa; Jofre, Josefina; Canessa, Jose; Gonzalez, Patricio; Humeres, Pamela; Sierralta, Paulina; Galaz, Rodrigo; Miranda, Karina [Centro PET de Imagenes Moleculares, Hospital Militar de Santiago, Santiago (Chile)

    2004-01-01

    Full text: The objective of this preliminary communication was to analyse the indications and data for the initial group of patients studied with the first dedicated PET scanner in the country, at Hospital Militar in Santiago, Chile. The main application of positron emission tomography (PET) with 18-Fluoro deoxyglucose (FDG) is related to the management of oncological patients. We studied 136 patients, 131 (97%) with known or suspected malignant disease and the remaining 5 with cardiological or neuropsychiatric disease. Ten patients were controlled diabetics (1 insulin dependent). Their mean age was 51.6±18 years, ranging from 6 to 84 years, and 65% were female. A total of 177 scans were acquired using a dedicated PET system (Siemens HR+ with 4 mm resolution). The mean F18-FDG injected dose was 477±107 MBq (12.9±2.9 mCi). Mean blood glucose levels, measured prior to the injection, were 94±17 mg/dl (range 62-161). F18-FDG was obtained from the IBA Cyclone 18/9 cyclotron installed at the Chilean Agency of Nuclear Energy, about 15 miles from the clinical PET facility. PET studies were analyzed visually by at least 4 independent observers. The standardized uptake value (SUV) was calculated in some cases. Image fusion of FDG images with recent anatomical (CT, MRI) studies was performed where available. The data acquisition protocol consisted of 7-8 beds/study from head to mid-thighs, with 6-7-min/bed acquisitions and 36% transmission with germanium-68 rods. Data were reconstructed with a standard OSEM protocol. The main indications included pulmonary lesions in 31%, gastrointestinal cancers in 21%, melanoma in 13% and lymphoma in 9% of patients. The remaining were of breast, thyroid, testes, ovary, musculoskeletal (soft tissue and bone), brain tumour etc. Abnormal focal tracer uptake was observed in 83/131 oncological patients, 54% corroborating the clinical diagnosis of primary tumor or recurrence while 46% showed new metastatic localization. FDG scans were normal in 36/131 patients. In 9

  15. Preliminary engineering analysis for clothes washers

    Energy Technology Data Exchange (ETDEWEB)

    Biermayer, Peter J.

    1996-10-01

    The Engineering Analysis provides information on efficiencies, manufacturer costs, and other characteristics of the appliance class being analyzed. For clothes washers, there are two classes: standard and compact. Since data were not available to analyze the compact class, only standard clothes washers were analyzed in this report. For this analysis, individual design options were combined and ordered in a manner that resulted in the lowest cumulative cost/savings ratio. The cost/savings ratio is the increase in manufacturer cost for a design option divided by the reduction in operating costs due to fuel and water savings.
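    A minimal sketch of the ordering rule described above: each design option gets a cost/savings ratio (added manufacturer cost divided by annual operating-cost savings) and options are accumulated in order of increasing ratio. The option names and numbers are hypothetical, and sorting by individual ratio is only one simple way to approximate the lowest cumulative ratio; the report's exact combination procedure may differ.

```python
# Hypothetical design options: (name, added manufacturer cost [$],
# annual fuel-and-water operating-cost savings [$/yr]).
options = [
    ("improved spin speed",        8.0, 4.0),
    ("thermostatic fill",          5.0, 1.0),
    ("larger tub / fewer loads",  12.0, 3.0),
]

# Rank options by individual cost/savings ratio, lowest first, then accumulate.
ranked = sorted(options, key=lambda o: o[1] / o[2])
cum_cost = cum_savings = 0.0
for name, cost, savings in ranked:
    cum_cost += cost
    cum_savings += savings
    print(f"{name:28s} ratio={cost/savings:4.1f}  "
          f"cumulative ratio={cum_cost/cum_savings:4.1f}")
```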

  16. Effect of preliminary annealing of silicon substrates on the spectral sensitivity of photodetectors in bipolar integrated circuits

    International Nuclear Information System (INIS)

    Blynskij, V.I.; Bozhatkin, O.A.; Golub, E.S.; Lemeshevskaya, A.M.; Shvedov, S.V.

    2010-01-01

    We examine the effect of preliminary annealing on the spectral sensitivity of photodetectors in bipolar integrated circuits formed in silicon grown by the Czochralski method. We demonstrate the possibility of substantially improving the sensitivity of the photodetectors in the infrared region of the spectrum by two-step annealing. The observed effect is explained by the participation of oxidation in the gettering process, where oxidation precedes formation of a buried n+ layer in the substrate. (authors)

  17. Demonstration sensitivity analysis for RADTRAN III

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Reardon, P.C.

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose for combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability-consequence curves
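    A minimal sketch of the kind of elasticity estimate mentioned above: perturb one input of a dose model by a small relative step and form the elasticity (dD/D)/(dx/x). The toy incident-free dose function and its inputs are hypothetical stand-ins, not the RADTRAN III model or its 37 variables.

```python
# Toy surrogate for an incident-free dose model (hypothetical, not RADTRAN III):
# dose grows with exposure time and nearby population along the route.
def incident_free_dose(traffic_density, dose_rate, speed, stop_time):
    return dose_rate * (traffic_density / speed + stop_time)

def elasticity(f, kwargs, name, rel_step=0.01):
    """Estimate e = (dD/D)/(dx/x) by a one-sided 1% perturbation of one input."""
    base = f(**kwargs)
    bumped = dict(kwargs, **{name: kwargs[name] * (1 + rel_step)})
    return (f(**bumped) - base) / base / rel_step

params = dict(traffic_density=500.0, dose_rate=1e-2, speed=80.0, stop_time=0.5)
for name in params:
    print(f"{name:16s} elasticity = {elasticity(incident_free_dose, params, name):+.2f}")
```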

  18. Systemization of burnup sensitivity analysis code. 2

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2005-02-01

    Towards the practical use of fast reactors, it is a very important subject to improve the prediction accuracy of neutronic properties in LMFBR cores, from the viewpoint of improving plant efficiency with rationally high-performance cores as well as reliability and safety margins. A distinct improvement in the accuracy of nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments of JUPITER and so on are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example, reaction rate distribution and control rod worth, but also burnup characteristics, for example, burnup reactivity loss, breeding ratio and so on. For this purpose, it is desired to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, there is a problem that the analysis sequence becomes inefficient because of the heavy burden on users due to the complexity of the theory of burnup sensitivity and the limitations of the system. It is also desired to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, for the following reasons: the computational sequence may be changed for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore, it is needed to systemize the current code for burnup sensitivity analysis with component blocks of functionality that can be divided or constructed on occasion. For

  19. Systemization of burnup sensitivity analysis code

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2004-02-01

    Towards the practical use of fast reactors, it is a very important subject to improve the prediction accuracy of neutronic properties in LMFBR cores, from the viewpoint of improving plant efficiency with rationally high-performance cores as well as reliability and safety margins. A distinct improvement in the accuracy of nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments of JUPITER and so on are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example, reaction rate distribution and control rod worth, but also burnup characteristics, for example, burnup reactivity loss, breeding ratio and so on. For this purpose, it is desired to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor core 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from the actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, there is a problem that the analysis sequence becomes inefficient because of the heavy burden on users due to the complexity of the theory of burnup sensitivity and the limitations of the system. It is also desired to rearrange the system for future revision, since it is becoming difficult to implement new functionalities in the existing large system. It is not sufficient to unify each computational component, for the following reasons: the computational sequence may be changed for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore it is needed to systemize the current code for burnup sensitivity analysis with component blocks of functionality that can be divided or constructed on occasion. For this

  20. A Monte Carlo/response surface strategy for sensitivity analysis: application to a dynamic model of vegetative plant growth

    Science.gov (United States)

    Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D. Jr (Principal Investigator)

    1989-01-01

    We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.
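    A minimal sketch of the two-step strategy described above, applied to a toy logistic growth model rather than the soybean model: first screen parameters with normalized finite-difference sensitivity coefficients, then Monte Carlo sample the parameters and fit a quadratic response surface by least squares. The model, parameter ranges and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def growth(theta):
    """Toy response: final biomass of a discrete logistic crop after 100 days."""
    r, K, b0 = theta
    b = b0
    for _ in range(100):
        b += r * b * (1 - b / K)
    return b

theta0 = np.array([0.08, 10.0, 0.1])

# Step 1: preliminary screening with normalized central-difference coefficients.
for i, name in enumerate(["r", "K", "b0"]):
    h = 0.01 * theta0[i]
    up, dn = theta0.copy(), theta0.copy()
    up[i] += h; dn[i] -= h
    S = (growth(up) - growth(dn)) / (2 * h) * theta0[i] / growth(theta0)
    print(f"screening: S_{name} = {S:+.3f}")

# Step 2: Monte Carlo sampling (here +/-20% uniform) and a quadratic response surface.
n = 500
samples = theta0 * (1 + 0.2 * (2 * rng.random((n, 3)) - 1))
y = np.array([growth(t) for t in samples])
Z = (samples - theta0) / theta0                      # centred, scaled inputs
X = np.column_stack([np.ones(n), Z, Z**2, Z[:, [0]] * Z[:, [1]],
                     Z[:, [0]] * Z[:, [2]], Z[:, [1]] * Z[:, [2]]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("response-surface coefficients:", np.round(coef, 3))
```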

  1. Preliminary analysis of the KAERI RCCS Experiment Using GAMMA+

    Energy Technology Data Exchange (ETDEWEB)

    Khoza, Samukelisiwe; Tak, Nam-il; Lim, Hong-Sik; Lee, Sung-Nam; Cho, Bong-Hyun; Kim, Jong-Hwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    This paper describes the analysis of the KAERI RCCS experiment. The GAMMA+ code was used for the analysis of the RCCS 1/4-scale natural cooling experimental facility designed and built at KAERI to verify the performance of the natural circulation phenomenon. The results obtained from the GAMMA+ analysis, showing the temperature profiles and flow rates at steady state, were compared with the results from the preliminary experiments conducted in this facility. The GAMMA+ analysis of the KAERI RCCS experimental setup was carried out to understand its natural circulation behavior. The air flow rate at the chimney exit obtained in the experiments was found to be almost the same as that of GAMMA+.

  2. Yucca Mountain transportation routes: Preliminary characterization and risk analysis; Volume 2, Figures [and] Volume 3, Technical Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Souleyrette, R.R. II; Sathisan, S.K.; di Bartolo, R. [Nevada Univ., Las Vegas, NV (United States). Transportation Research Center

    1991-05-31

    This report presents appendices related to the preliminary assessment and risk analysis for high-level radioactive waste transportation routes to the proposed Yucca Mountain Project repository. Information includes data on population density, traffic volume, ecologically sensitive areas, and accident history.

  3. Gravity field of Venus - A preliminary analysis

    Science.gov (United States)

    Phillips, R. J.; Sjogren, W. L.; Abbott, E. A.; Smith, J. C.; Wimberly, R. N.; Wagner, C. A.

    1979-01-01

    The gravitational field of Venus obtained by tracking the Pioneer Venus Orbiter is examined. For each spacecraft orbit, two hours of Doppler data centered around periapsis were used to estimate spacecraft position and velocity and the velocity residuals obtained were spline fit and differentiated to produce line of sight gravitational accelerations. Consistent variations in line of sight accelerations from orbit to orbit reveal the presence of gravitational anomalies. A simulation of isostatic compensation for an elevated region on the surface of Venus indicates that the mean depth of compensation is no greater than about 100 km. Gravitational spectra obtained from a Fourier analysis of line of sight accelerations from selected Venus orbits are compared to the earth's gravitational spectrum and spherical harmonic gravitational potential power spectra of the earth, the moon and Mars. The Venus power spectrum is found to be remarkably similar to that of the earth, however systematic variations in the harmonics suggest differences in dynamic processes or lithospheric behavior.

  4. Sensitivity analysis of critical experiment with direct perturbation compared to TSUNAMI-3D sensitivity analysis

    International Nuclear Information System (INIS)

    Barber, A. D.; Busch, R.

    2009-01-01

    The goal of this work is to obtain sensitivities from direct uncertainty analysis calculations and correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment. Small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results from a critical experiment are only known as well as its geometric and material properties are known. The goal of this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)
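    A minimal sketch of a direct-perturbation sensitivity coefficient of the kind compared against TSUNAMI-3D above: S = (dk/k)/(dp/p), estimated from calculations with a parameter perturbed by plus and minus 1%. The k-eff surrogate and the 0.2% density uncertainty are purely illustrative; in practice each perturbation would be a full transport calculation.

```python
# Hypothetical stand-in for a criticality calculation (k_eff vs. fuel density);
# a real analysis would run a transport code for each perturbed input.
def k_eff(fuel_density):
    return 1.0 + 0.35 * (fuel_density - 18.7) / 18.7

def direct_perturbation_sensitivity(model, p0, rel_step=0.01):
    """Central-difference estimate of S = (dk/k) / (dp/p)."""
    k0 = model(p0)
    k_plus, k_minus = model(p0 * (1 + rel_step)), model(p0 * (1 - rel_step))
    return ((k_plus - k_minus) / k0) / (2 * rel_step)

S = direct_perturbation_sensitivity(k_eff, 18.7)
print(f"sensitivity of k_eff to fuel density: {S:.3f}")
# combining with an assumed parameter uncertainty gives its uncertainty contribution:
rel_uncertainty_p = 0.002                       # assumed 0.2% density uncertainty
print(f"resulting dk/k contribution: {S * rel_uncertainty_p:.2e}")
```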

  5. Sensitivity analysis of the Two Geometry Method

    International Nuclear Information System (INIS)

    Wichers, V.A.

    1993-09-01

    The Two Geometry Method (TGM) was designed specifically for the verification of the uranium enrichment of low enriched UF6 gas in the presence of uranium deposits on the pipe walls. Complications can arise if the TGM is applied under extreme conditions, such as deposits larger than several times the gas activity, small pipe diameters (less than 40 mm) and low pressures (less than 150 Pa). This report presents a comprehensive sensitivity analysis of the TGM. The impact of the various sources of uncertainty on the performance of the method is discussed. The application to a practical case is based on worst-case conditions with regard to the measurement conditions, and on realistic conditions with respect to the false alarm probability and the non-detection probability. Monte Carlo calculations were used to evaluate the sensitivity to sources of uncertainty which are experimentally inaccessible. (orig.)

  6. Sensitivity analysis of numerical model of prestressed concrete containment

    Energy Technology Data Exchange (ETDEWEB)

    Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz

    2015-12-15

    Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long-term analysis. • Prestressing schedule is a key factor in the early stages. - Abstract: Safety is always the main consideration in the design of the containment of a nuclear power plant. However, the efficiency of the design process should also be taken into consideration. Despite the advances in computational abilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In the paper, a study on the sensitivity of a finite element model of a prestressed concrete containment to changes in geometry, loads and other factors is presented. The importance of the steel liner, reinforcement, prestressing process, temperature changes, nonlinearity of materials as well as the density of the finite element mesh is assessed in the main stages of the life cycle of the containment. Although the modeling adjustments have not produced any significant changes in computation time, it was found that in some cases a simplified modeling process can lead to a significant reduction of work time without degradation of the results.

  7. Advanced high conversion PWR: preliminary analysis

    International Nuclear Information System (INIS)

    Golfier, H.; Bellanger, V.; Bergeron, A.; Dolci, F.; Gastaldi, B.; Koberl, O.; Mignot, G.; Thevenot, C.

    2007-01-01

    In this paper, physical aspects of a HCPWR (High Conversion Light Water Reactor), an innovative PWR fuelled with mixed oxide fuel and having a higher conversion ratio due to a lower moderation ratio, are presented. Moderation ratios lower than unity are considered, which has led to low-moderation PWR fuel assembly designs. The objectives of this parametric study are to define a feasibility area with regard to the following neutronic aspects: moderation ratio, Pu loading, reactor spectrum, irradiation time, and neutronic coefficients. Important thermohydraulic parameters are the pressure drop, the critical heat flux, the maximum temperature in the fuel rod and the pumping power. The thermohydraulic analysis shows that a range of moderation ratios from 0.8 to 1.2 is technically possible. A compromise between improved fuel utilization and research and development effort has been found for a moderation ratio of about 1. The parametric study shows that there are 2 ranges of interest for the moderation ratio: -) moderation ratio between 0.8 and 1.2 with reduced fissile heights (> 3 m), where both hexagonal and square fuel assembly arrangements are possible; and -) moderation ratio between 0.6 and 0.7 with a modification of the reactor operating conditions (reduction of the primary flow and of the thermal power), where the fuel rods could be arranged in a hexagonal fuel rod assembly. (A.C.)

  8. A preliminary analysis of bidayuh Jagoi patun

    Directory of Open Access Journals (Sweden)

    Dayang Sariah Abang Suhai

    2013-12-01

    Full Text Available Bidayuh Pantun or Patun remains an under-researched topic in Borneo studies and language research due to the difficulties associated with obtaining critical, poetic information in an oral culture, language variations and societal mobility. Existing data from anthologies, however, provide little detail about the intrinsic and extrinsic features ascribed to the poems by the people who produce and use them. This paper attempts to explore patun from the Jagoi community. In this study, the structural aspects, themes and moral values of 47 patun from the Jagoi community were analysed. The initial explanations suggested by the poet were further analysed to determine the various structural features to place it alongside existing mainstream lyric poetry. The analysis of the intrinsic features showed that a good rhythmic patun has four to six words per line and eight to 12 syllables per line, and that the final syllables of each line have assonance and consonance patterns of a-a-a-a and a-b-a-b. The themes of the patun include love, advice, forgiveness, beliefs, hopelessness and happiness, and the moral values take the form of subtle advice and admonishments. The Bidayuh patun is indeed a projection of the knowledge, experiences, beliefs, values, and emotions of the community.

  9. Surface Properties of TNOs: Preliminary Statistical Analysis

    Science.gov (United States)

    Antonieta Barucci, Maria; Fornasier, S.; Alvarez-Cantal, A.; de Bergh, C.; Merlin, F.; DeMeo, F.; Dumas, C.

    2009-09-01

    An overview of the surface properties based on the latest results obtained during the Large Program performed at ESO-VLT (2007-2008) will be presented. Simultaneous high-quality visible and near-infrared spectroscopy and photometry have been carried out on 40 objects with various dynamical properties, using FORS1 (V), ISAAC (J) and SINFONI (H+K bands) mounted respectively at the UT2, UT1 and UT4 VLT-ESO telescopes (Cerro Paranal, Chile). For spectroscopy we computed the spectral slope for each object and searched for possible rotational inhomogeneities. A few objects show features in their visible spectra, such as Eris, whose spectral bands are displaced with respect to pure methane ice. We identify new faint absorption features on 10199 Chariklo and 42355 Typhon, possibly due to the presence of aqueous altered materials. The H+K band spectroscopy was performed with the new instrument SINFONI, which is a 3D integral field spectrometer. While some objects show no diagnostic spectral bands, others reveal surface deposits of ices of H2O, CH3OH, CH4, and N2. To investigate the surface properties of these bodies, a radiative transfer model has been applied to interpret the entire 0.4-2.4 micron spectral region. The diversity of the spectra suggests that these objects represent a substantial range of bulk compositions. These different surface compositions can be diagnostic of original compositional diversity, interior sources and/or different evolution with different physical processes affecting the surfaces. A statistical analysis is in progress to investigate the correlation of the TNOs' surface properties with size and dynamical properties.

  10. Preliminary Seismic Response and Fragility Analysis for DACS Cabinet

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jinho; Kwag, Shinyoung; Lee, Jongmin; Kim, Youngki [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-05-15

    A DACS cabinet is installed in the main control room. The objective of this paper is to perform seismic analyses and evaluate the preliminary structural integrity and seismic capacity of the DACS cabinet. For this purpose, a 3-D finite element model of the DACS cabinet was developed, and modal analyses were carried out to analyze its dynamic characteristics. Response spectrum analyses and the related safety evaluation were then performed for the DACS cabinet subject to seismic loads. Finally, the seismic margin and seismic fragility of the DACS cabinet were investigated. The seismic response and preliminary structural integrity of the DACS cabinet under self-weight and SSE loads were evaluated through a modal analysis, a response spectrum analysis, and a seismic fragility analysis. From the structural analysis results, the DACS cabinet is below the structural design limit under SSE 0.3g and can structurally withstand seismic loads up to nearly SSE 3g based on an evaluation of the maximum effective stresses. The HCLPF capacity for the DGRS of the SSE 0.3g is 0.55g. Therefore, it is concluded that the DACS cabinet was safely designed, in that no damage to its structural integrity is expected and a sufficient seismic margin is available.

  11. Preliminary Seismic Response and Fragility Analysis for DACS Cabinet

    International Nuclear Information System (INIS)

    Oh, Jinho; Kwag, Shinyoung; Lee, Jongmin; Kim, Youngki

    2013-01-01

    A DACS cabinet is installed in the main control room. The objective of this paper is to perform seismic analyses and evaluate the preliminary structural integrity and seismic capacity of the DACS cabinet. For this purpose, a 3-D finite element model of the DACS cabinet was developed, and modal analyses were carried out to analyze its dynamic characteristics. Response spectrum analyses and the related safety evaluation were then performed for the DACS cabinet subject to seismic loads. Finally, the seismic margin and seismic fragility of the DACS cabinet were investigated. The seismic response and preliminary structural integrity of the DACS cabinet under self-weight and SSE loads were evaluated through a modal analysis, a response spectrum analysis, and a seismic fragility analysis. From the structural analysis results, the DACS cabinet is below the structural design limit under SSE 0.3g and can structurally withstand seismic loads up to nearly SSE 3g based on an evaluation of the maximum effective stresses. The HCLPF capacity for the DGRS of the SSE 0.3g is 0.55g. Therefore, it is concluded that the DACS cabinet was safely designed, in that no damage to its structural integrity is expected and a sufficient seismic margin is available.
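    The two records above quote an HCLPF capacity of 0.55 g. Below is a minimal sketch of the standard lognormal fragility relation commonly used to report HCLPF values; the paper does not state its fragility parameters, so the median capacity and the beta values are assumptions chosen only to land near 0.55 g.

```python
import math

def hclpf(a_median, beta_r, beta_u):
    """High Confidence of Low Probability of Failure capacity,
    HCLPF = A_m * exp(-1.65 * (beta_R + beta_U))  (standard lognormal fragility form)."""
    return a_median * math.exp(-1.65 * (beta_r + beta_u))

# Assumed (not reported) fragility parameters, chosen to illustrate the relation.
print(f"HCLPF = {hclpf(a_median=1.2, beta_r=0.30, beta_u=0.17):.2f} g")
```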

  12. Sensitivity analysis of reactive ecological dynamics.

    Science.gov (United States)

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
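    A minimal sketch of the transient indices whose parameter sensitivities are analysed above, for a hypothetical two-species community matrix: reactivity as the largest eigenvalue of the symmetric part of A, and the amplification envelope as the largest singular value of exp(At). The matrix below is illustrative, not either of the paper's case studies.

```python
import numpy as np
from scipy.linalg import expm, eigvals, eigvalsh

# Hypothetical continuous-time community matrix (e.g., a predator-prey linearization).
A = np.array([[-0.2, -1.0],
              [ 0.5, -0.1]])
print("asymptotically stable:", np.all(eigvals(A).real < 0))

# Reactivity: largest eigenvalue of the symmetric part of A.
reactivity = eigvalsh((A + A.T) / 2).max()
print(f"reactivity = {reactivity:.3f}  (> 0 means perturbations can grow initially)")

# Amplification envelope rho(t): largest singular value of the matrix exponential.
for t in (0.5, 1.0, 2.0, 5.0):
    rho = np.linalg.norm(expm(A * t), ord=2)
    print(f"amplification envelope rho({t}) = {rho:.3f}")
```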

  13. Global sensitivity analysis using polynomial chaos expansions

    International Nuclear Information System (INIS)

    Sudret, Bruno

    2008-01-01

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices

  14. Global sensitivity analysis using polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Sudret, Bruno [Electricite de France, R and D Division, Site des Renardieres, F 77818 Moret-sur-Loing Cedex (France)], E-mail: bruno.sudret@edf.fr

    2008-07-15

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.
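    The two records above index the same article. As a companion illustration of the post-processing idea, the sketch below fits a low-order Legendre PCE by ordinary least squares and reads Sobol' index estimates off the squared coefficients; the test function, polynomial degree and sample size are illustrative assumptions rather than the paper's benchmark setup or its minimal-size experimental design.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(2)

def model(x):                      # toy Ishigami-like function, inputs uniform on [-1, 1]^3
    return np.sin(np.pi * x[:, 0]) + 5.0 * np.sin(np.pi * x[:, 1]) ** 2 \
           + 0.5 * x[:, 2] ** 4 * np.sin(np.pi * x[:, 0])

d, deg, n = 3, 6, 2000
X = 2 * rng.random((n, d)) - 1
y = model(X)

# Tensor-product Legendre basis with total degree <= deg, uniformly normalized.
V = [legvander(X[:, j], deg) for j in range(d)]
norms = np.sqrt(2.0 / (2 * np.arange(deg + 1) + 1))    # L2 norms of P_k on [-1, 1]
multi = [(i, j, k) for i in range(deg + 1) for j in range(deg + 1)
         for k in range(deg + 1) if i + j + k <= deg]
Phi = np.column_stack([V[0][:, a] / norms[a] * V[1][:, b] / norms[b]
                       * V[2][:, c] / norms[c] for a, b, c in multi])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # regression-based, non-intrusive fit

# Sobol' indices as ratios of sums of squared coefficients (constant term excluded).
var = np.sum(coef[1:] ** 2)
for j in range(d):
    first = sum(c ** 2 for c, m in zip(coef, multi)
                if m[j] > 0 and sum(m) == m[j])        # terms involving x_j only
    total = sum(c ** 2 for c, m in zip(coef, multi) if m[j] > 0)
    print(f"x{j + 1}: S = {first / var:.3f}, S_total = {total / var:.3f}")
```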

  15. Contributions to sensitivity analysis and generalized discriminant analysis

    International Nuclear Information System (INIS)

    Jacques, J.

    2005-12-01

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how the output variables of the model react to variations of its inputs. Variance-based methods quantify the part of the variance of the model response due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Indeed, since classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups by using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods in a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)

  16. Simple Sensitivity Analysis for Orion GNC

    Science.gov (United States)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch are significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.

  17. Sensitivity analysis of floating offshore wind farms

    International Nuclear Information System (INIS)

    Castro-Santos, Laura; Diaz-Casas, Vicente

    2015-01-01

    Highlights: • Develop a sensitivity analysis of a floating offshore wind farm. • Influence on the life-cycle costs involved in a floating offshore wind farm. • Influence on IRR, NPV, pay-back period, LCOE and cost of power. • Important variables: distance, wind resource, electric tariff, etc. • It helps to investors to take decisions in the future. - Abstract: The future of offshore wind energy will be in deep waters. In this context, the main objective of the present paper is to develop a sensitivity analysis of a floating offshore wind farm. It will show how much the output variables can vary when the input variables are changing. For this purpose two different scenarios will be taken into account: the life-cycle costs involved in a floating offshore wind farm (cost of conception and definition, cost of design and development, cost of manufacturing, cost of installation, cost of exploitation and cost of dismantling) and the most important economic indexes in terms of economic feasibility of a floating offshore wind farm (internal rate of return, net present value, discounted pay-back period, levelized cost of energy and cost of power). Results indicate that the most important variables in economic terms are the number of wind turbines and the distance from farm to shore in the costs’ scenario, and the wind scale parameter and the electric tariff for the economic indexes. This study will help investors to take into account these variables in the development of floating offshore wind farms in the future

  18. Preliminary Mass Spectrometric Analysis of Uranium on Environmental Swipe Materials

    International Nuclear Information System (INIS)

    Cheong, Chang-Sik; Jeong, Youn-Joong; Ryu, Jong-Sik; Shin, Hyung-Seon; Cha, Hyun-Ju; Ahn, Gil-Hoon; Park, Il-Jin; Min, Gyung-Sik

    2006-01-01

    It is well-known that the uranium and plutonium isotopic compositions of safeguards samples are very useful for investigating the history of nuclear activities. To strengthen the capabilities of environmental sampling analysis in the ROK through MOST/DOE collaboration, a round robin test for uranium and plutonium was designed in 2003. As the first round robin test, a set of dried uranium-containing solutions (∼35 ng and ∼300 ng) was distributed to the participating laboratories in November of 2003, with results reported in April of 2004. The KBSI (Korea Basic Science Institute) and ORNL (Oak Ridge National Laboratory) are currently in the process of analyzing uranium on cotton swipes for the second round robin test. As a preliminary test for the second round, KBSI intends to analyze home-made swipe samples into which international uranium standards are added. Here we describe the technical steps of sample preparation and mass spectrometry at KBSI, and report some results of the preliminary test

  19. Active cooling for downhole instrumentation: Preliminary analysis and system selection

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, G.A.

    1988-03-01

    A feasibility study and a series of preliminary designs and analyses were done to identify candidate processes or cycles for use in active cooling systems for downhole electronic instruments. A matrix of energy types and their possible combinations was developed and the energy conversion process for each pair was identified. The feasibility study revealed conventional as well as unconventional processes and possible refrigerants and identified parameters needing further clarification. A conceptual design or series of designs for each system was formulated and a preliminary analysis of each design was completed. The resulting coefficient of performance (COP) for each system was compared with the Carnot COP and all systems were ranked by decreasing COP. The system showing the best combination of COP, exchangeability to other operating conditions, failure mode, and system serviceability was chosen for use as a downhole refrigerator. 85 refs., 48 figs., 33 tabs.
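    A minimal sketch of the Carnot comparison mentioned above: the ideal refrigerator COP is T_cold/(T_hot - T_cold), and each candidate cycle can be judged by the fraction of this limit it achieves. The downhole temperatures and the candidate COP values below are hypothetical, not figures from the report.

```python
def carnot_cop(t_cold_c, t_hot_c):
    """Ideal refrigerator COP = T_cold / (T_hot - T_cold), temperatures in Celsius."""
    t_cold, t_hot = t_cold_c + 273.15, t_hot_c + 273.15   # convert to kelvin
    return t_cold / (t_hot - t_cold)

# Hypothetical candidate cycles and their assumed COPs at the same conditions.
candidates = {"vapor compression": 0.9, "thermoelectric": 0.15, "Vuilleumier": 0.4}
cop_ideal = carnot_cop(t_cold_c=125.0, t_hot_c=300.0)     # keep electronics at 125 C in a 300 C well
print(f"Carnot COP = {cop_ideal:.2f}")
for name, cop in candidates.items():
    print(f"{name:18s} COP = {cop:.2f}  fraction of Carnot = {cop / cop_ideal:.2f}")
```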

  20. Preliminary study of soil permeability properties using principal component analysis

    Science.gov (United States)

    Yulianti, M.; Sudriani, Y.; Rustini, H. A.

    2018-02-01

    Soil permeability measurement is undoubtedly important in carrying out soil-water research such as rainfall-runoff modelling, irrigation water distribution systems, etc. It is also known that acquiring reliable soil permeability data is rather laborious, time-consuming, and costly. Therefore, it is desirable to develop a prediction model. Several studies of empirical equations for predicting permeability have been undertaken by many researchers. These studies derived their models from areas whose soil characteristics differ from those of Indonesian soils, which suggests the possibility that these permeability models are site-specific. The purpose of this study is to identify which soil parameters correspond most strongly to soil permeability and to propose a preliminary model for permeability prediction. Principal component analysis (PCA) was applied to 16 parameters analysed for 91 samples obtained from 37 sites in the Batanghari Watershed. The findings indicated five variables that have a strong correlation with soil permeability, and we recommend a preliminary permeability model, which has potential for further development.
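    A minimal sketch of the screening step described above: standardize the soil variables, run a PCA, and inspect which variables load on the leading components together with (log) permeability. The synthetic data use only a handful of made-up variables as stand-ins for the 16 measured parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic soil data (hypothetical): 91 samples with a few correlated properties.
n = 91
sand = rng.normal(40, 10, n)
clay = 60 - sand + rng.normal(0, 5, n)
porosity = 0.3 + 0.002 * sand + rng.normal(0, 0.02, n)
bulk_density = 1.8 - porosity + rng.normal(0, 0.05, n)
organic = rng.normal(2, 0.5, n)
perm = 10 ** (-6 + 0.03 * sand - 0.02 * clay + rng.normal(0, 0.2, n))

names = ["sand", "clay", "porosity", "bulk_density", "organic", "log10(k)"]
X = np.column_stack([sand, clay, porosity, bulk_density, organic, np.log10(perm)])

# Standardize, then eigendecompose the correlation matrix (PCA).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

print("explained variance ratio:", np.round(eigval / eigval.sum(), 2))
print("loadings on PC1 (sign is arbitrary):")
for name, loading in zip(names, eigvec[:, 0]):
    print(f"  {name:12s} {loading:+.2f}")
```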

  1. Preliminary Safety Analysis Report for the Tokamak Physics Experiment

    International Nuclear Information System (INIS)

    Motloch, C.G.; Bonney, R.F.; Levine, J.D.; Masson, L.S.; Commander, J.C.

    1995-04-01

    This Preliminary Safety Analysis Report (PSAR) includes an indication of the magnitude of facility hazards, the complexity of facility operations, and the stage of the facility life-cycle. It presents the results of safety analyses, safety assurance programs, identified vulnerabilities, compensatory measures, and, in general, the rationale describing why the Tokamak Physics Experiment (TPX) can be safely operated. It discusses application of the graded approach to the TPX safety analysis, including the basis for using Department of Energy (DOE) Order 5480.23 and DOE-STD-3009-94 in the development of the PSAR.

  2. Sensitivity analysis of a modified energy model

    International Nuclear Information System (INIS)

    Suganthi, L.; Jagadeesan, T.R.

    1997-01-01

    Sensitivity analysis is carried out to validate model formulation. A modified model has been developed to predict the future energy requirement of coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model, as compared to a conventional time series model, indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. However, in the case of the modified model the percentage variation in error is greatly reduced - for coal it ranges from -0.25% to +0.15%, for oil from -0.6% to +0.6% and for electricity from -10% to +10%. The upper and lower limit consumption levels at 95% confidence are determined. The consumption at varying percentage changes in price and population is analysed. The gap between the modified model predictions at varying percentage changes in price and population over the years from 1990 to 2001 is found to be increasing. This is because of the increasing rate of energy consumption over the years and also because the confidence level decreases as the projection is made farther into the future. (author)

  3. Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...

  4. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
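
    To make the distinction between the two families concrete, the sketch below computes local (finite-difference) sensitivities of a toy two-species kinetic model, assuming SciPy is available. The model, parameter values, and the 1% perturbation size are illustrative choices and are not taken from the review.

        # Sketch: local (finite-difference) sensitivity of a toy signalling model output
        # to its kinetic parameters. The model and numbers are illustrative only.
        import numpy as np
        from scipy.integrate import solve_ivp

        def model(t, y, k):
            s, p = y                      # substrate and product concentrations
            return [-k[0] * s, k[0] * s - k[1] * p]

        def output(k):
            sol = solve_ivp(model, (0.0, 10.0), [1.0, 0.0], args=(k,), rtol=1e-8)
            return sol.y[1, -1]           # product concentration at t = 10

        k0 = np.array([0.5, 0.2])
        base = output(k0)
        for i, name in enumerate(["k_conversion", "k_degradation"]):
            dk = 0.01 * k0[i]             # 1% perturbation
            kp = k0.copy()
            kp[i] += dk
            rel_sens = (output(kp) - base) / base / (dk / k0[i])   # normalised sensitivity
            print(f"{name}: S = {rel_sens:+.3f}")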

  5. A new importance measure for sensitivity analysis

    International Nuclear Information System (INIS)

    Liu, Qiao; Homma, Toshimitsu

    2010-01-01

    Uncertainty is an integral part of the risk assessment of complex engineering systems, such as nuclear power plants and spacecraft. The aim of sensitivity analysis is to identify the contribution of the uncertainty in model inputs to the uncertainty in the model output. In this study, a new importance measure that characterizes the influence of the entire input distribution on the entire output distribution was proposed. It represents the expected deviation of the cumulative distribution function (CDF) of the model output that would be obtained if one input parameter of interest were known. The applicability of this importance measure was tested with two models, a nonlinear nonmonotonic mathematical model and a risk model. In addition, a comparison of this new importance measure with several other importance measures was carried out and the differences between these measures were explained. (author)
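
    A Monte Carlo sketch in the spirit of such a measure is given below: for each input it averages, over fixed values of that input, the deviation between the conditional and unconditional empirical CDFs of the output. The test function, the distributions, the sample sizes, and the use of the mean absolute CDF difference as the deviation metric are all illustrative assumptions, not the estimator of the cited paper.

        # Sketch: moment-independent importance measure based on the expected deviation
        # between the unconditional output CDF and the CDF with one input fixed.
        import numpy as np

        def model(x):                                  # toy nonlinear, nonmonotonic model
            return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + x[:, 0] * x[:, 2]

        rng = np.random.default_rng(1)
        n, n_fix = 5000, 50
        X = rng.normal(size=(n, 3))
        y = np.sort(model(X))
        grid = np.linspace(y[0], y[-1], 200)
        F = np.searchsorted(y, grid) / n               # unconditional empirical CDF

        for i in range(3):
            devs = []
            for xi in rng.normal(size=n_fix):          # average over fixed values of X_i
                Xc = rng.normal(size=(n, 3))
                Xc[:, i] = xi
                Fc = np.searchsorted(np.sort(model(Xc)), grid) / n
                devs.append(np.mean(np.abs(F - Fc)))   # mean absolute CDF deviation
            print(f"importance of X{i + 1}: {np.mean(devs):.3f}")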

  6. DEA Sensitivity Analysis for Parallel Production Systems

    Directory of Open Access Journals (Sweden)

    J. Gerami

    2011-06-01

    Full Text Available In this paper, we introduce systems consisting of several production units, each of which includes several subunits working in parallel. Meanwhile, each subunit works independently. The input and output of each production unit are the sums of the inputs and outputs of its subunits, respectively. We consider each of these subunits as an independent decision making unit (DMU) and create the production possibility set (PPS) produced by these DMUs, in which the frontier points are considered as efficient DMUs. Then we introduce models for obtaining the efficiency of the production subunits. Using super-efficiency models, we categorize all efficient subunits into different efficiency classes. We then present the sensitivity analysis and stability problem for efficient subunits, including extreme efficient and non-extreme efficient subunits, assuming simultaneous perturbations in all inputs and outputs of the subunits such that the efficiency of the subunit under evaluation declines while the efficiencies of the other subunits improve.

  7. Sensitivity of SBLOCA analysis to model nodalization

    International Nuclear Information System (INIS)

    Lee, C.; Ito, T.; Abramson, P.B.

    1983-01-01

    The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In analysis of Small Break Loss of Coolant Accidents with RELAP5, it is found that resultant transient behavior is quite sensitive to the selection of nodalization for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, it is found that the predicted distribution of inventory around the primary is significantly affected by nodalization. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery

  8. Subset simulation for structural reliability sensitivity analysis

    International Nuclear Information System (INIS)

    Song Shufang; Lu Zhenzhou; Qiao Hongwei

    2009-01-01

    Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. On the basis of the reliability analysis of Subset simulation (Subsim), the RS of the failure probability with respect to the distribution parameter of the basic variable is transformed into a set of RS of conditional failure probabilities with respect to the distribution parameter of the basic variable. By use of the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae of the RS estimator, its variance and its coefficient of variation are derived in detail. The results of the illustrations show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and structural systems with single and multiple failure modes.

  9. Waste Feed Delivery System Phase 1 Preliminary RAM Analysis

    International Nuclear Information System (INIS)

    DYKES, A.A.

    2000-01-01

    This report presents the updated results of the preliminary reliability, availability, and maintainability (RAM) analysis of selected waste feed delivery (WFD) operations to be performed by the Tank Farm Contractor (TFC) during Phase I activities in support of the Waste Treatment and Immobilization Plant (WTP). For planning purposes, waste feed tanks are being divided into five classes in accordance with the type of waste in each tank and the activities required to retrieve, qualify, and transfer waste feed. This report reflects the baseline design and operating concept, as of the beginning of Fiscal Year 2000, for the delivery of feed from three of these classes, represented by source tanks 241-AN-102, 241-AZ-101 and 241-AN-105. The preliminary RAM analysis quantifies the potential schedule delay associated with operations and maintenance (O&M) field activities needed to accomplish these operations. The RAM analysis is preliminary because the system design, process definition, and activity planning are in a state of evolution. The results are being used to support the continuing development of an O and M Concept tailored to the unique requirements of the WFD Program, which is being documented in various volumes of the Waste Feed Delivery Technical Basis (Carlson 1999, Rasmussen 1999, and Orme 2000). The waste feed provided to the WTP must: (1) meet limits for chemical and radioactive constituents based on pre-established compositional envelopes (i.e., feed quality); (2) be delivered in acceptable quantities within a prescribed sequence (i.e., feed quantity); and (3) meet schedule requirements (i.e., feed timing). In the absence of new criteria related to acceptable schedule performance due to the termination of the TWRS Privatization Contract, the original criteria from the Tank Waste Remediation System (TWRS) Privatization Contract (DOE 1998) will continue to be used for this analysis.

  10. Preliminary analysis of a target factory for laser fusion

    International Nuclear Information System (INIS)

    Sherohman, J.W.; Hendricks, C.D.

    1980-01-01

    An analysis of a target factory leading to the determination of production expressions has provided the basis for a parametric study. Parameters involving the input and output rates of a process system, processing yield factors, and multiple processing steps and production lines have been used to develop an understanding of their dependence on the rate of target injection for laser fusion. Preliminary results have indicated that a parametric study of this type will be important in the selection of the processing methods to be used in the final production scheme of a target factory.

  11. Determinants of Trade Credit: A Preliminary Analysis on Construction Sector

    Directory of Open Access Journals (Sweden)

    Nicoleta Barbuta-Misu

    2016-07-01

    Full Text Available This paper introduces a preliminary analysis of the correlations between trade credit and some selected measures of financial performance for a sample of 958 firms acting in the construction sector. The examined period covers 2004-2013. The sample, derived from the Amadeus database, contains firms that have sold and bought on credit. Results showed that larger firms offered and used more credit than their counterparties. Firms offered and used credit at the same time, but not at the same level. Firms with higher return on assets and profit margin used less credit from suppliers and offered less credit to clients. Moreover, more liquid firms used less trade payables.

  12. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty

  13. Global sensitivity analysis in wind energy assessment

    Science.gov (United States)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles in the way of employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of importance of input variables (first order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices. The results of the present
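
    For readers unfamiliar with the indices mentioned above, the sketch below estimates first-order and total-effect Sobol indices with the common pick-freeze (Saltelli/Jansen) estimators on a toy annual-energy function of three inputs. The function, the input ranges, and the sample size are placeholders and are unrelated to the Masdar City case study.

        # Sketch: first-order (S_i) and total-effect (ST_i) Sobol indices by the
        # Saltelli/Jansen pick-freeze estimators, on a toy annual-energy model.
        import numpy as np

        def energy(x):
            shape, scale, avail = x[:, 0], x[:, 1], x[:, 2]
            return avail * scale ** 3 * (1.0 + 1.0 / shape)     # crude stand-in for AEP

        rng = np.random.default_rng(2)
        n = 20000

        def sample(n):
            return np.column_stack([rng.uniform(1.8, 2.4, n),    # Weibull shape
                                    rng.uniform(6.0, 9.0, n),     # Weibull scale [m/s]
                                    rng.uniform(0.90, 0.98, n)])  # availability

        A, B = sample(n), sample(n)
        yA, yB = energy(A), energy(B)
        var = np.var(np.concatenate([yA, yB]))

        for i, name in enumerate(["shape", "scale", "availability"]):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                                  # A with column i taken from B
            yABi = energy(ABi)
            S_i = np.mean(yB * (yABi - yA)) / var                # first-order index
            ST_i = 0.5 * np.mean((yA - yABi) ** 2) / var         # total-effect index
            print(f"{name:13s} S = {S_i:5.2f}   ST = {ST_i:5.2f}")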

  14. Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis

    Science.gov (United States)

    Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi

    To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to decide the frontier intentionally, while the traditional DEA and Super-DEA decide the frontier computationally. The features of FA-DEA are as follows: (1) it provides chances to exclude extra-influential DMUs (Decision Making Units) and to find extra-ordinal DMUs, and (2) it includes the functions of the traditional DEA and Super-DEA so that it is able to deal with sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and its difference from the traditional DEA.

  15. CONTENT ANALYSIS, DISCOURSE ANALYSIS, AND CONVERSATION ANALYSIS: PRELIMINARY STUDY ON CONCEPTUAL AND THEORETICAL METHODOLOGICAL DIFFERENCES

    Directory of Open Access Journals (Sweden)

    Anderson Tiago Peixoto Gonçalves

    2016-08-01

    Full Text Available This theoretical essay aims to reflect on three models of text interpretation used in qualitative research, which are often confused in their concepts and methodologies (Content Analysis, Discourse Analysis, and Conversation Analysis). After presenting the concepts, the essay proposes a preliminary discussion of the conceptual and theoretical methodological differences perceived between them. A review of the literature was performed to support the conceptual and theoretical methodological discussion. It could be verified that the models have differences related to the type of strategy used in the treatment of texts, the type of approach, and the appropriate theoretical position.

  16. Sensitivity analysis of Smith's AMRV model

    International Nuclear Information System (INIS)

    Ho, Chih-Hsiang

    1995-01-01

    Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited on the prior distribution, π(p), based on geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge. The model using the prior Beta(8, 2) yields the highest hazard (2.97 × 10⁻²). The minimum hazard is produced by the "three-expert prior" (i.e., values of p are equally likely at 10⁻³, 10⁻², and 10⁻¹). The estimate of this hazard is 1.39 × 10⁻³, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years.

  17. Enhanced Accident Tolerant Fuels for LWRS - A Preliminary Systems Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gilles Youinou; R. Sonat Sen

    2013-09-01

    The severe accident at Fukushima Daiichi nuclear plants illustrates the need for continuous improvements through developing and implementing technologies that contribute to safe, reliable and cost-effective operation of the nuclear fleet. Development of enhanced accident tolerant fuel contributes to this effort. These fuels, in comparison with the standard zircaloy – UO2 system currently used by the LWR industry, should be designed such that they tolerate loss of active cooling in the core for a longer time period (depending on the LWR system and accident scenario) while maintaining or improving the fuel performance during normal operations, operational transients, and design-basis events. This report presents a preliminary systems analysis related to most of these concepts. The potential impacts of these innovative LWR fuels on the front-end of the fuel cycle, on the reactor operation and on the back-end of the fuel cycle are succinctly described without having the pretension of being exhaustive. Since the design of these various concepts is still a work in progress, this analysis can only be preliminary and could be updated as the designs converge on their respective final version.

  18. Wear-Out Sensitivity Analysis Project Abstract

    Science.gov (United States)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal of this was to determine a worst case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously. The goal was also to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From there, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
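
    A minimal sketch of this kind of study is shown below: times to failure for a small fleet are drawn from a Weibull distribution, the shape (wear-out) parameter is swept upward, and the probability that the available spares cover the failures over a mission is re-estimated. The fleet size, characteristic life, spare count, and mission length are hypothetical numbers, not ISS data.

        # Sketch: Monte Carlo estimate of spares sufficiency for a population of units
        # whose time-to-failure is Weibull, sweeping the shape (wear-out) parameter.
        import numpy as np

        rng = np.random.default_rng(3)
        n_units, n_spares, mission_hours = 20, 4, 26280     # ~3 years of operation
        scale_hours = 50000.0                                # Weibull characteristic life

        def prob_sufficiency(shape, n_trials=20000):
            """P(failures during the mission <= spares on hand)."""
            ttf = scale_hours * rng.weibull(shape, size=(n_trials, n_units))
            failures = (ttf < mission_hours).sum(axis=1)
            return np.mean(failures <= n_spares)

        for shape in [1.0, 1.5, 2.0, 3.0, 5.0]:              # 1.0 = random, >1 = wear-out
            print(f"beta = {shape:3.1f}  P(sufficient spares) = {prob_sufficiency(shape):.3f}")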

  19. Supercritical extraction of oleaginous: parametric sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Santos M.M.

    2000-01-01

    Full Text Available The economy has become global and competitive; thus the vegetable oil extraction industries must advance in the sense of minimising production costs and, at the same time, generating products that meet more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as solvent. However, this solvent is toxic and highly flammable, so the search for substitutes for hexane in the oilseed extraction process has increased in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are necessary to understand the phenomena taking place in such a process. Thus, in this work a diffusive model for a semi-continuous (batch for the solids and continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a factorial design at two levels. The model parameters were disturbed and their main effects analysed, so that it is possible to propose strategies for high performance operation.

  20. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  1. Preliminary Structural Sensitivity Study of Hypersonic Inflatable Aerodynamic Decelerator Using Probabilistic Methods

    Science.gov (United States)

    Lyle, Karen H.

    2014-01-01

    Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense by several orders of magnitude. The suitability of the design is based on assessing variation in the resulting cone angle. The acceptable cone angle variation would rely on the aerodynamic requirements.

  2. SENSIT: a cross-section and design sensitivity and uncertainty analysis code

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE

  3. Multitarget global sensitivity analysis of n-butanol combustion.

    Science.gov (United States)

    Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T

    2013-05-02

    A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis.

  4. Sensitivity analysis in multi-parameter probabilistic systems

    International Nuclear Information System (INIS)

    Walker, J.R.

    1987-01-01

    Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model
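
    As one concrete instance of the kind of technique such a package might contain, the sketch below estimates standardized rank regression coefficients (SRRCs) from a Monte Carlo sample of a three-input placeholder model. The model, input distributions, and parameter names are invented and are not the MCROC rock microcracking model.

        # Sketch: sensitivity of a Monte Carlo output via standardized rank regression
        # coefficients. The three-input model is a placeholder, not the MCROC model.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        n = 5000
        X = np.column_stack([rng.normal(10.0, 1.0, n),      # e.g. a modulus
                             rng.lognormal(0.0, 0.3, n),     # e.g. a crack density
                             rng.uniform(0.1, 0.5, n)])      # e.g. a friction coefficient
        y = X[:, 0] * np.exp(X[:, 1]) + 50.0 * X[:, 2] ** 2 + rng.normal(0.0, 1.0, n)

        R = np.column_stack([stats.rankdata(X[:, j]) for j in range(3)])
        ry = stats.rankdata(y)
        Z = (R - R.mean(axis=0)) / R.std(axis=0)            # standardize the ranked inputs
        zy = (ry - ry.mean()) / ry.std()
        srrc, *_ = np.linalg.lstsq(Z, zy, rcond=None)       # rank regression coefficients
        for name, c in zip(["x1", "x2", "x3"], srrc):
            print(f"{name}: SRRC = {c:+.2f}")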

  5. An ESDIRK Method with Sensitivity Analysis Capabilities

    DEFF Research Database (Denmark)

    Kristensen, Morten Rode; Jørgensen, John Bagterp; Thomsen, Per Grove

    2004-01-01

    of the sensitivity equations. A key feature is the reuse of information already computed for the state integration, thereby minimizing the extra effort required for sensitivity integration. Through case studies the new algorithm is compared to an extrapolation method and to the more established BDF based approaches...

  6. Sensitivity Analysis of Fire Dynamics Simulation

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.

    2007-01-01

    (Morris method). The parameters considered are selected among physical parameters and program specific parameters. The influence on the calculation result as well as the CPU time is considered. It is found that the result is highly sensitive to many parameters even though the sensitivity varies...

  7. Preliminary hazards analysis of thermal scrap stabilization system. Revision 1

    International Nuclear Information System (INIS)

    Lewis, W.S.

    1994-01-01

    This preliminary analysis examined the HA-21I glovebox and its supporting systems for potential process hazards. Upon further analysis, the thermal stabilization system has been installed in gloveboxes HC-21A and HC-21C. The use of HC-21C and HC-21A simplified the initial safety analysis. In addition, these gloveboxes were cleaner and required less modification for operation than glovebox HA-21I. While this document refers to glovebox HA-21I for the hazards analysis performed, glovebox HC-21C is sufficiently similar that the following analysis is also valid for HC-21C. This hazards analysis document is being re-released as revision 1 to include the updated flowsheet document (Appendix C) and the updated design basis (Appendix D). The revised Process Flow Schematic has also been included (Appendix E). This current revision also incorporates the recommendations provided in the original hazards analysis. The System Design Description (SDD) has also been appended (Appendix H) to document the bases for Safety Classification of the thermal stabilization equipment.

  8. Preliminary phytochemical analysis of the crude extract of Nephrolepis pectinata leaves

    Directory of Open Access Journals (Sweden)

    Natally Marreiros Gomes

    2017-06-01

    Full Text Available Nephrolepis pectinata, popularly known as the paulista fern, ladder-to-heaven, or cat's tail, belongs to the family Davalliaceae. Because of the beauty of their leaf arrangements, ferns are widely commercialized in Brazil; however, studies on their pharmacological potential have not been described in the literature. Thus, the objective of this research was to analyze the phytochemical properties of the crude extract of the leaves of Nephrolepis pectinata. To perform the phytochemical analysis, the plant material was first collected, a voucher specimen prepared, and the material washed, dried and ground. Extraction then followed by the percolation method, and finally the phytochemical analysis was carried out. Preliminary phytochemical results for the crude extract of the leaves of Nephrolepis pectinata tested positive for reducing sugars, phenols/tannins (catechin tannins), and catechins.

  9. Preliminary analysis of accident in SST-1 current feeder system

    International Nuclear Information System (INIS)

    Roy, Swati; Kanabar, Deven; Garg, Atul; Singh, Amit; Tanna, Vipul; Prasad, Upendra; Srinivasan, R.

    2017-01-01

    The Steady-State Superconducting Tokamak-1 (SST-1) has 16 superconducting toroidal field (TF) coils and 9 superconducting poloidal field (PF) coils rated for 10 kA DC. All the TF coils are connected in series and operated in DC condition, whereas the PF coils are individually operated in pulsed mode during SST-1 campaigns. The SST-1 current feeder system (CFS) houses 9 pairs of PF current leads and 1 pair of TF current leads. During a past SST-1 campaign, there were arcing incidents within the SST-1 CFS chamber which caused significant damage to the PF superconducting current leads as well as to the helium cooling lines of the current leads. This paper brings out the preliminary analysis of the mentioned arcing incident, its possible reasons, and its investigation, thereby laying out the sequence of events. From this analysis and these observations, various measures to avoid such arcing incidents have also been proposed. (author)

  10. Preliminary RAMI analysis of WCLL blanket and breeder systems

    International Nuclear Information System (INIS)

    Arroyo, Jose Manuel; Brown, Richard; Harman, Jon; Rosa, Elena; Ibarra, Angel

    2015-01-01

    Highlights: • Preliminary RAMI model for WCLL has been developed. • Critical parts and parameters influencing WCLL availability have been highlighted. • Necessary developments of tools/models to represent system performance have been identified. - Abstract: DEMO will be a prototype fusion reactor designed to prove the capability to produce electrical power in a commercially acceptable way. One of the key factors in that endeavor is the achievement of a certain level of plant availability. Therefore, RAMI (Reliability, Availability, Maintainability and Inspectability) will be a key element in the engineering development of DEMO. Some studies have been started so as to develop the tools and models to assess different design alternatives from a RAMI point of view. The main objective of these studies is to be able to evaluate the influence of different parameters on DEMO availability and to focus on the critical parts that should be further researched and improved in order to develop a high-availability oriented DEMO design. A preliminary RAMI analysis of the Water Cooled Lithium-Lead (WCLL) blanket and breeder concept for DEMO has been developed. The amounts of single elements that may fail (e.g. more than 180,000 C-shaped tubes) and the mean down time associated with failures inside the vacuum vessel (around 3 months) have been highlighted as the critical parameters influencing the system availability. On the other hand, the necessary developments of tools/models to better represent the system performance have been identified and proposed for future work.

  11. Preliminary RAMI analysis of WCLL blanket and breeder systems

    Energy Technology Data Exchange (ETDEWEB)

    Arroyo, Jose Manuel, E-mail: josemanuel.arroyo@ciemat.es [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Brown, Richard [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon (United Kingdom); Harman, Jon [EFDA Close Support Unit, Garching (Germany); Rosa, Elena; Ibarra, Angel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2015-10-15

    Highlights: • Preliminary RAMI model for WCLL has been developed. • Critical parts and parameters influencing WCLL availability have been highlighted. • Necessary developments of tools/models to represent system performance have been identified. - Abstract: DEMO will be a prototype fusion reactor designed to prove the capability to produce electrical power in a commercially acceptable way. One of the key factors in that endeavor is the achievement of a certain level of plant availability. Therefore, RAMI (Reliability, Availability, Maintainability and Inspectability) will be a key element in the engineering development of DEMO. Some studies have been started so as to develop the tools and models to assess different design alternatives from a RAMI point of view. The main objective of these studies is to be able to evaluate the influence of different parameters on DEMO availability and to focus on the critical parts that should be further researched and improved in order to develop a high-availability oriented DEMO design. A preliminary RAMI analysis of the Water Cooled Lithium-Lead (WCLL) blanket and breeder concept for DEMO has been developed. The amounts of single elements that may fail (e.g. more than 180,000 C-shaped tubes) and the mean down time associated with failures inside the vacuum vessel (around 3 months) have been highlighted as the critical parameters influencing the system availability. On the other hand, the necessary developments of tools/models to better represent the system performance have been identified and proposed for future work.

  12. Superconducting Accelerating Cavity Pressure Sensitivity Analysis

    International Nuclear Information System (INIS)

    Rodnizki, J.; Horvits, Z.; Ben Aliz, Y.; Grin, A.; Weissman, L.

    2014-01-01

    The sensitivity of the cavity was evaluated and found to be fully consistent with the measured values. It was found that the tuning system (the fog structure) makes a significant contribution to the cavity sensitivity. By using ribs or by modifying the rigidity of the fog we may reduce the HWR sensitivity. During cool-down and warm-up we have to analyze the stresses on the HWR to avoid plastic deformation of the HWR, since the niobium yield strength is an order of magnitude lower at room temperature.

  13. Derivative based sensitivity analysis of gamma index

    Directory of Open Access Journals (Sweden)

    Biplab Sarkar

    2015-01-01

    Full Text Available Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods, to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as "pass." Gamma analysis does not account for the gradient of the evaluated curve - it looks only at the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered as the first of the two evaluated curves. By its nature, this curve is smooth and would satisfy the pass criteria for all points in it. The second evaluated profile was generated as a sawtooth test profile (STTP), which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not a smooth one and would be obviously poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first and second order derivatives of the DDs (δD', δD") between these two curves were derived and used as the
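
    The sketch below shows a minimal 1-D implementation of the gamma index for two dose profiles, using 1% (global) dose-difference and 1 mm distance-to-agreement criteria. The profiles are synthetic stand-ins for a penumbra, and this is not the authors' derivative-based method.

        # Sketch: 1-D gamma index between a reference and an evaluated dose profile,
        # with 1% dose-difference and 1 mm distance-to-agreement criteria.
        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.01, dta=1.0):
            """Return the gamma value at every reference point (global normalisation)."""
            d_max = d_ref.max()
            gammas = []
            for xr, dr in zip(x_ref, d_ref):
                dist2 = ((x_eval - xr) / dta) ** 2
                dose2 = ((d_eval - dr) / (dd * d_max)) ** 2
                gammas.append(np.sqrt(np.min(dist2 + dose2)))
            return np.array(gammas)

        x = np.linspace(-10.0, 10.0, 401)                      # position in mm
        ref = 1.0 / (1.0 + np.exp(-x))                         # penumbra-like reference
        ev = 1.0 / (1.0 + np.exp(-(x - 0.5))) * 1.005          # shifted, slightly scaled copy
        g = gamma_1d(x, ref, x, ev)
        print(f"pass rate (gamma <= 1): {100.0 * np.mean(g <= 1.0):.1f}%")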

  14. MOVES2010a regional level sensitivity analysis

    Science.gov (United States)

    2012-12-10

    This document discusses the sensitivity of various input parameter effects on emission rates using the US Environmental Protection Agency's (EPA's) MOVES2010a model at the regional level. Pollutants included in the study are carbon monoxide (CO),...

  15. Sensitivity Analysis of FEAST-Metal Fuel Performance Code: Initial Results

    International Nuclear Information System (INIS)

    Edelmann, Paul Guy; Williams, Brian J.; Unal, Cetin; Yacout, Abdellatif

    2012-01-01

    This memo documents the completion of the LANL milestone, M3FT-12LA0202041, describing methodologies and initial results using FEAST-Metal. The FEAST-Metal code calculations for this work are being conducted at LANL in support of on-going activities related to sensitivity analysis of fuel performance codes. The objective is to identify important macroscopic parameters of interest to modeling and simulation of metallic fuel performance. This report summarizes our preliminary results for the sensitivity analysis using 6 calibration datasets for metallic fuel developed at ANL for EBR-II experiments. Sensitivity ranking methodology was deployed to narrow down the selected parameters for the current study. There are approximately 84 calibration parameters in the FEAST-Metal code, of which 32 were ultimately used in Phase II of this study. Preliminary results of this sensitivity analysis led to the following ranking of FEAST models for future calibration and improvements: fuel conductivity, fission gas transport/release, fuel creep, and precipitation kinetics. More validation data is needed to validate calibrated parameter distributions for future uncertainty quantification studies with FEAST-Metal. Results of this study also served to point out some code deficiencies and possible errors, and these are being investigated in order to determine root causes and to improve upon the existing code models.

  16. Preliminary analysis of biomass potentially useful for producing biodiesel

    International Nuclear Information System (INIS)

    Cabrera Cifuentes, Gerardo; Burbano Jaramillo, Juan Carlos; Garcia Melo, Jose Isidro

    2011-01-01

    Given that biodiesel is emerging as a viable solution for some energy and environmental problems, research on raw materials appropriate for its production is a matter of growing interest. In this study we present the results of research devoted to preliminary analysis on several vegetable (biomass) species potentially useful for producing biodiesel. The bioprospection zone is a region on the Colombian Pacific coast. The candidate species collected underwent different standardized ASTM tests in order for us to define properties that facilitate their evaluation. Some of the species underwent a transesterification process. Comparisons between the thermo-physical properties of the biofuels obtained and the properties of commercial diesel were carried out. Also, performance tests for these biofuels were conducted in compression ignition engines, particularly evaluating efficiency, fuel consumption, and potency at different RPMs.

  17. Preliminary radar systems analysis for Venus orbiter missions

    Science.gov (United States)

    Brandenburg, R. K.; Spadoni, D. J.

    1971-01-01

    A short, preliminary analysis is presented of the problems involved in mapping the surface of Venus with radar from an orbiting spacecraft. Two types of radar, the noncoherent sidelooking and the focused synthetic aperture systems, are sized to fulfill two assumed levels of Venus exploration. The two exploration levels, regional and local, assumed for this study are based on previous Astro Sciences work (Klopp 1969). The regional level is defined as 1 to 3 kilometer spatial and 0.5 to 1 km vertical resolution of 100 percent of the planet's surface. The local level is defined as 100 to 200 meter spatial and 50 to 100 m vertical resolution of about 100 percent of the surface (based on the regional survey). A 10-cm operating frequency was chosen for both radar systems in order to minimize the antenna size and maximize the apparent radar cross section of the surface.

  18. ERRFILS: a preliminary library of 30-group multigroup covariance data for use in CTR sensitivity studies

    International Nuclear Information System (INIS)

    LaBauve, R.J.; Muir, D.W.

    1978-01-01

    A library of 30-group multigroup covariance data was prepared from preliminary ENDF/B-V data with the NJOY code. Data for Fe, Cr, Ni, 10B, C, Cu, H, and Pb are included in this library. Reactions include total cross sections, elastic and inelastic scattering cross sections, and the most important absorption cross sections. Typical data from the file are shown. 3 tables

  19. NPV Sensitivity Analysis: A Dynamic Excel Approach

    Science.gov (United States)

    Mangiero, George A.; Kraten, Michael

    2017-01-01

    Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…
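
    A small Python analogue of the spreadsheet exercise described above is sketched below: NPV is recomputed over a grid of discount rates and cash-flow growth rates so that the sensitivity to both inputs is visible at once. The cash-flow figures and ranges are invented for illustration and are not taken from the paper.

        # Sketch: NPV over a grid of discount rates and growth rates, showing how
        # sensitivity emerges from interdependent inputs. All figures are invented.
        import numpy as np

        def npv(rate, cash_flows):
            """Net present value; the first cash flow occurs at t = 0."""
            t = np.arange(len(cash_flows))
            return float(np.sum(np.asarray(cash_flows) / (1.0 + rate) ** t))

        rates = (0.06, 0.08, 0.10, 0.12)
        initial, base_cf, years = -1000.0, 300.0, 5
        print("growth\\rate" + "".join(f"{r:>9.0%}" for r in rates))
        for g in (0.00, 0.02, 0.04):
            cfs = [initial] + [base_cf * (1.0 + g) ** t for t in range(years)]
            print(f"{g:>11.0%}" + "".join(f"{npv(r, cfs):9.0f}" for r in rates))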

  20. Sensitivity Analysis for Multidisciplinary Systems (SAMS)

    Science.gov (United States)

    2016-12-01

    [Fragmentary report excerpt: distribution statements, a "Server and Client Code" listing in Python (importing a geometry module, math, and zmq), an example application, and a citation to Cross, D. M., "Local continuum sensitivity method for shape design derivatives using spatial gradient reconstruction" (dissertation).]

  1. Extended forward sensitivity analysis of one-dimensional isothermal flow

    International Nuclear Information System (INIS)

    Johnson, M.; Zhao, H.

    2013-01-01

    Sensitivity analysis and uncertainty quantification is an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of time step with other physical parameters, the simulation is allowed to run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification with much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
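
    A minimal illustration of forward sensitivity analysis, on a scalar ODE rather than the 1-D flow equations treated in the paper, is sketched below: the sensitivity equation is integrated alongside the state and checked against the analytic derivative. The equation and parameter value are arbitrary.

        # Sketch: forward sensitivity analysis for dy/dt = -p*y, y(0) = 1.
        # The sensitivity s = dy/dp obeys ds/dt = -p*s - y and is integrated with the state.
        import numpy as np
        from scipy.integrate import solve_ivp

        p = 0.7

        def rhs(t, z):
            y, s = z
            return [-p * y, -p * s - y]          # state equation and its p-sensitivity

        T = 5.0
        sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0], rtol=1e-10, atol=1e-12)
        s_T = sol.y[1, -1]

        # Check against the exact result: y(T) = exp(-p*T), so dy/dp = -T*exp(-p*T).
        print(f"forward sensitivity dy/dp at T: {s_T:+.6f}")
        print(f"analytic sensitivity          : {-T * np.exp(-p * T):+.6f}")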

  2. Sensitivity of permanent meadows areas. Preliminary report; Sensibilite des zones de prairies permanentes. Rapport preliminaire

    Energy Technology Data Exchange (ETDEWEB)

    Pourcelot, L

    2006-07-01

    The objective of this work, proposed in the framework of the S.E.N.S.I.B. project on permanent meadow areas, is to compare the sensitivity of these surfaces and of the cheese-making sectors associated with them, and to identify the processes that determine this sensitivity. Meeting this objective will rely on activity data recently acquired by the I.R.S.N. (compartments: soil, grass, milk and cheese), as well as on new measurements, at three sites: Saint-Laurent-de-Geris (West), Beaune-Le-Froid (Centre), and the Jura Massif (East). Two scales of observation of transfers and sensitivity are retained. First, the sensitivity of meadow areas will be studied at the regional scale, from data acquired in the Jura Massif (between 300 and 1200 metres of altitude). Second, a study will compare the sensitivity of the three selected areas (West, Centre, East) at the national scale. The expertise will focus on the sensitivity factors of the soil-grass transfer and on the sensitivity factors linked to the cheese-making sectors. The data already available for the study zones show strong variability in the rates of 137Cs transfer at the soil/grass and grass/milk interfaces, suggesting that the nature of the soils and the quantity and quality of the feed ingested by the cattle constitute dominant sensitivity factors. In addition, the size of the milk collection basin, which varies greatly from one study site to another, influences the contamination of cheeses and as such constitutes an important sensitivity factor of the cheese-making sector. (N.C.)

  3. The role of sensitivity analysis in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Hirschberg, S.; Knochenhauer, M.

    1987-01-01

    The paper describes several items suitable for close examination by means of application of sensitivity analysis when performing a level 1 PSA. Sensitivity analyses are performed with respect to: (1) boundary conditions, (2) operator actions, and (3) treatment of common cause failures (CCFs). The items of main interest are identified continuously in the course of performing a PSA, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PSA study (ASEA-ATOM BWR 75). It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (orig./HP)

  4. Automated sensitivity analysis using the GRESS language

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.; Wright, R.Q.

    1986-04-01

    An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies

  5. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  6. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    Science.gov (United States)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  7. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
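
    The sketch below illustrates the bootstrap idea in a stripped-down form: a crude correlation-based sensitivity index is re-estimated on resampled data, and the width of its 95% confidence interval is tracked as the sample size grows. The test model, the choice of index, and the bootstrap size are illustrative assumptions, not the indices or hydrological models used in the study.

        # Sketch: bootstrap check of whether a sampling-based sensitivity estimate has
        # converged, for a simple |correlation| index on a toy three-input model.
        import numpy as np

        rng = np.random.default_rng(5)

        def index(X, y, j):                           # crude sensitivity index
            return abs(np.corrcoef(X[:, j], y)[0, 1])

        def bootstrap_width(X, y, j, n_boot=500):
            n = len(y)
            est = [index(X[idx], y[idx], j) for idx in rng.integers(0, n, size=(n_boot, n))]
            lo, hi = np.percentile(est, [2.5, 97.5])
            return hi - lo                            # width of the 95% confidence interval

        for n in (200, 1000, 5000):                   # grow the sample until widths shrink
            X = rng.uniform(-1.0, 1.0, size=(n, 3))
            y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)
            widths = [bootstrap_width(X, y, j) for j in range(3)]
            print(f"n = {n:5d}  CI widths = " + ", ".join(f"{w:.3f}" for w in widths))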

  8. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies

  9. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the

  10. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Directory of Open Access Journals (Sweden)

    Georgios Arampatzis

    Full Text Available Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of

  11. Narrative foreclosure in later life: Preliminary considerations for a new sensitizing concept

    NARCIS (Netherlands)

    Bohlmeijer, Ernst Thomas; Westerhof, Gerben Johan; Randall, W.; Tromp, T.; Kenyon, G.

    2011-01-01

    The objective of the paper is to explore narrative foreclosure as a sensitizing concept for studying the ways in which narrative identity development falters in later life. Two main characters in famous movies are contrasted to provide a better understanding of narrative foreclosure. The concept is

  12. Sensitivity of the Addiction Severity Index physical and sexual assault items: preliminary findings on gender differences

    NARCIS (Netherlands)

    Langeland, W.; van den Brink, W.; Draijer, N.; Hartgers, C.

    2001-01-01

    Evaluation of the Addiction Severity Index (ASI) as a screen for identifying sexual and physical assault histories. The sensitivity and specificity of the ASI assault items were examined in 146 alcoholic patients with the assault questions of the Composite International Diagnostic Interview

  13. Preliminary Analysis of a Submerged Wave Energy Device

    Science.gov (United States)

    Wagner, J. R.; Wagner, J. J.; Hayatdavoodi, M.; Ertekin, R. C.

    2016-02-01

    Preliminary analysis of a submerged wave energy harvesting device is presented. The device is composed of a thin, horizontally submerged plate that is restricted to heave oscillations under the influence of surface waves. The oscillating submerged plate can be attached to a fixed rotor, or a piston, to harvest the wave energy. A fully submerged wave energy converter is preferred over a surface energy converter due to its durability and the fewer visual and physical distractions it presents. In this study, the device is subject to nonlinear shallow-water waves. Wave loads on the submerged oscillating plate are obtained via the Level I Green-Naghdi equations. The unsteady motion of the plate is obtained by solving the nonlinear equations of motion. The results are obtained for a range of waves with varying heights and periods. The amplitude and period of plate oscillations are analyzed as functions of the wave parameters and plate width. Particular attention is given to the selection of a site with the desired wave field. An initial estimate of the amount of energy extracted by the device, located near shore at a given site, is provided.

  14. Preliminary radiation criteria and nuclear analysis for ETF

    International Nuclear Information System (INIS)

    Engholm, B.A.

    1980-09-01

    Preliminary biological and materials radiation dose criteria for the Engineering Test Facility are described and tabulated. In keeping with the ETF Mission Statement, a key biological dose criterion is a 24-hour shutdown dose rate of 2 mrem/hr on the surface of the outboard bulk shield. Materials dose criteria, which primarily govern the inboard shield design, include a 10^9 rads exposure limit to epoxy insulation, 3 x 10^-4 dpa damage to the TF coil copper stabilizer, and a total nuclear heating rate of 5 kW in the inboard TF coils. Nuclear analysis performed during FY 80 was directed primarily at the inboard and outboard bulk shielding, and at radiation streaming in the neutral beam drift ducts. Inboard and outboard shield thicknesses to achieve the biological and materials radiation criteria are 75 cm inboard and 125 cm outboard, the configuration consisting of alternating layers of stainless steel and borated water. The outboard shield also includes a 5 cm layer of lead. NBI duct streaming analyses performed by ORNL and LASL will play a key role in the design of the duct and NBI shielding in FY 81. The NBI aluminum cryopanel nuclear heating rate during the heating cycle is about 1 milliwatt/cm^3, which is far less than the permissible limit

  15. Preliminary Analysis For Wolsong Par Effects Using ISACC Calculations

    International Nuclear Information System (INIS)

    Song, Yong Mann; Kim, Dong Ha

    2012-01-01

    In this paper, hydrogen control effects using PARs only are analyzed for severe station blackout (SBO) sequences beyond the design basis accidents in WS-1, which is a CANDU6-type reactor. As a computational tool, the latest version of ISAAC4.3 (Integrated Severe Accident Analysis Code for CANDU), which is a fully integrated and lumped severe accident computer code, is used to simulate hydrogen generation and transport inside the reactor building (R/B) before its failure. For the performance of hydrogen removal, the depletion rate equation of the K-PAR developed in Korea is applied. In a CANDU reactor, three areas are identified as sources of hydrogen under severe accidents: fuel-coolant interactions in intact channels, suspended fuel or debris interactions in the calandria tank, and debris interactions in the calandria vault. The first two origins provide the source for the late potential hydrogen combustion before calandria tank failure ('late' because it takes more than one day before calandria tank failure), and all three origins provide the source for the very late potential hydrogen combustion occurring at or after calandria tank failure. If the hydrogen mitigation system fails, the AICC (adiabatic isochoric complete combustion) burning of highly flammable hydrogen may cause Wolsong R/B failure. The hydrogen-induced failure possibility is therefore evaluated, using preliminary ISAAC calculations, under several SBO conditions with and without PARs for both late and very late accident periods

  16. Preliminary analysis of accelerated space flight ionizing radiation testing

    Science.gov (United States)

    Wilson, J. W.; Stock, L. V.; Carter, D. J.; Chang, C. K.

    1982-01-01

    A preliminary analysis shows that radiation dose equivalent to 30 years in the geosynchronous environment can be accumulated in a typical composite material exposed to space for 2 years or less onboard a spacecraft orbiting from perigee of 300 km out to the peak of the inner electron belt (approximately 2750 km). Future work to determine spacecraft orbits better tailored to materials accelerated testing is indicated. It is predicted that a range of 10 to the 9th power to 10 to the 10th power rads would be accumulated in 3-6 mil thick epoxy/graphite exposed by a test spacecraft orbiting in the inner electron belt. This dose is equivalent to the accumulated dose that this material would be expected to have after 30 years in a geosynchronous orbit. It is anticipated that material specimens would be brought back to Earth after 2 years in the radiation environment so that space radiation effects on materials could be analyzed by laboratory methods.

  17. Investigation of Sorption and Diffusion Mechanisms, and Preliminary Economic Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bhave, Ramesh R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jubin, Robert Thomas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Spencer, Barry B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nair, Sankar [Georgia Inst. of Technology, Atlanta, GA (United States)

    2017-02-01

    This report describes the synthesis and evaluation of molecular sieve zeolite membranes to separate and concentrate tritiated water (HTO) from dilute HTO-bearing aqueous streams. Several monovalent and divalent cation exchanged silico alumino phosphate (SAPO-34) molecular sieve zeolite membranes were synthesized on disk supports and characterized with gas and vapor permeation measurements. The pervaporation process performance was evaluated for the separation and concentration of tritiated water. Experiments were performed using tritiated water feed solution containing tritium at the high end of the range (1 mCi/mL) anticipated in a nuclear fuel processing system that includes both acid and water streams recycling. The tritium concentration was about 0.1 ppm. The permeate was recovered under vacuum. The HTO/H2O selectivity and separation factor calculated from the measured tritium concentrations ranged from 0.99 to 1.23, and 0.83-0.98, respectively. Although the membrane performance for HTO separation was lower than expected, several encouraging observations including molecular sieving and high vapor permeance are reported. Additionally, several new approaches are proposed, such as tuning the sorption and diffusion properties offered by small pore LTA zeolite materials, and cation exchanged aluminosilicates with high metal loading. It is hypothesized that substantially improved preferential transport of tritium (HTO) resulting in a more concentrated permeate can be achieved. Preliminary economic analysis for the membrane-based process to concentrate tritiated water is also discussed.

  18. Cost risk analysis of radioactive waste management Preliminary study

    International Nuclear Information System (INIS)

    Forsstroem, J.

    2006-12-01

    This work begins with an exposition of the basics of risk analysis. These basics are then applied to the Finnish radioactive waste disposal environment, in which the nuclear power companies are responsible for all costs of radioactive waste management, including long-term disposal of spent fuel. Nuclear power companies prepare cost estimates of the waste disposal on a yearly basis to support decision making on the accumulation of resources in the nuclear waste disposal fund. These cost estimates are based on the cost level of the ongoing year. A Monte Carlo simulation model of the costs of the waste disposal system was defined and used to produce preliminary results on its cost risk characteristics. Input data were synthesised by modifying the original coefficients of cost uncertainty to define a cost range for each cost item. This is a suitable method for demonstrating the results obtainable with the model, but it is not accurate enough to support decision making. Two key areas of further development were identified: input data preparation, and identifying and handling (i.e. eliminating or merging) interacting cost elements in the simulation model. Further development in both of these areas can be carried out in co-operation with the power companies, as they are the sources of the original data. (orig.)
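
    The kind of Monte Carlo cost simulation described in the report can be sketched as follows; the cost items, triangular ranges and independence assumption here are purely illustrative and are not the Finnish study's data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cost items (million EUR): (low, most likely, high) triangular ranges.
cost_items = {
    "encapsulation_plant": (300.0, 400.0, 600.0),
    "repository_construction": (500.0, 700.0, 1100.0),
    "operation": (800.0, 1000.0, 1400.0),
    "closure_and_monitoring": (100.0, 150.0, 300.0),
}

n_sim = 100_000
total = np.zeros(n_sim)
for low, mode, high in cost_items.values():
    # Items are treated as independent here; the report notes that interacting
    # cost elements would need to be merged or correlated in a real assessment.
    total += rng.triangular(low, mode, high, size=n_sim)

p5, p50, p95 = np.percentile(total, [5, 50, 95])
print(f"Total cost (MEUR): P5={p5:.0f}  median={p50:.0f}  P95={p95:.0f}")
```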

  19. Sensitivity Analysis Based on Markovian Integration by Parts Formula

    Directory of Open Access Journals (Sweden)

    Yongsheng Hang

    2017-10-01

    Full Text Available Sensitivity analysis is widely applied in financial risk management and engineering; it describes the output variations brought about by changes in the parameters. Since the integration by parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivities and show closed-form expressions for two commonly used time-continuous Markovian models. By comparison, we conclude that our approach outperforms the existing technique for computing sensitivities on Markovian models.

  20. Advanced Fuel Cycle Economic Sensitivity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    David Shropshire; Kent Williams; J.D. Smith; Brent Boore

    2006-12-01

    A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and the nuclear power related cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles including: once through, thermal with fast recycle, continuous fast recycle, and thermal recycle.

  1. Grid-connected ICES: preliminary feasibility analysis and evaluation. Volume 2. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1977-06-30

    The HEAL Complex in New Orleans will serve as a Demonstration Community for which the ICES Demonstration System will be designed. The complex is a group of hospitals, clinics, research facilities, and medical educational facilities. The five tasks reported on are: preliminary energy analysis; preliminary institutional assessment; conceptual design; firming-up of commitments; and detailed work management plan.

  2. The role of sensitivity analysis in assessing uncertainty

    International Nuclear Information System (INIS)

    Crick, M.J.; Hill, M.D.

    1987-01-01

    Outside the specialist world of those carrying out performance assessments, considerable confusion has arisen about the meanings of sensitivity analysis and uncertainty analysis. In this paper we attempt to reduce this confusion. We then go on to review approaches to sensitivity analysis within the context of assessing uncertainty, and to outline the types of test available to identify sensitive parameters, together with their advantages and disadvantages. The views expressed in this paper are those of the authors; they have not been formally endorsed by the National Radiological Protection Board and should not be interpreted as Board advice.

  3. Preliminary safety analysis of molten salt breeder reactor

    International Nuclear Information System (INIS)

    Cheng Maosong; Dai Zhimin

    2013-01-01

    Background: The molten salt reactor is one of the six advanced reactor concepts identified by the Generation IV International Forum as a candidate for cooperative development; it is characterized by remarkable advantages in inherent safety, fuel cycle, miniaturization, effective utilization of nuclear resources and proliferation resistance. ORNL finished the conceptual design of the Molten Salt Breeder Reactor (MSBR) based on the design, building and operation of the Molten Salt Reactor Experiment (MSRE). Purpose: We attempt to implement a preliminary safety analysis of the MSBR in order to provide a reference for the design and optimization of the MSBR in the future. Methods: Based on the conceptual design of the MSBR, a safety analysis model using point kinetics coupled with a simplified heat transfer model is presented. The model is applied to simulate the transient phenomena of the MSBR initiated by an abnormal step reactivity addition and an abnormal ramp reactivity addition at the full-power equilibrium condition. Results: The thermal power in the core increases rapidly at the beginning and is accompanied by a rise of the fuel and graphite temperatures after 100, 300, 500 and 600 pcm reactivity additions. The maximum outlet temperature of the fuel in the core is 1250℃ for a 500 pcm reactivity addition, but up to 1350℃ for a 600 pcm reactivity addition. The power and temperature maxima are delayed and lower for the ramp reactivity addition than for the step reactivity addition. Conclusions: Based on the results, when the inserted reactivity is at most 500 pcm at the full-power equilibrium condition, the Hastelloy-N structural material does not melt and keeps its integrity without external control action. It is also necessary to avoid inserting reactivity over a short time. (authors)
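
    As a rough illustration of the point-kinetics part of such a model, the sketch below solves the point-kinetics equations with a single effective delayed-neutron group for a step reactivity insertion; the kinetic constants are generic illustrative values, and the thermal feedback and fuel-salt circulation effects of the actual MSBR model are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, Lam, lam = 0.0065, 5.0e-4, 0.08   # delayed fraction, generation time (s), decay constant (1/s)
rho = 300e-5                            # 300 pcm step reactivity insertion

def point_kinetics(t, y):
    n, c = y                            # relative power, delayed-neutron precursor concentration
    dn = (rho - beta) / Lam * n + lam * c
    dc = beta / Lam * n - lam * c
    return [dn, dc]

y0 = [1.0, beta / (Lam * lam)]          # equilibrium precursor level at nominal power
sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, max_step=0.01)
print(f"relative power after 10 s: {sol.y[0, -1]:.2f}")
```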

  4. Conversion Preliminary Safety Analysis Report for the NIST Research Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Diamond, D. J. [Brookhaven National Lab. (BNL), Upton, NY (United States); Baek, J. S. [Brookhaven National Lab. (BNL), Upton, NY (United States); Hanson, A. L. [Brookhaven National Lab. (BNL), Upton, NY (United States); Cheng, L-Y [Brookhaven National Lab. (BNL), Upton, NY (United States); Brown, N. [Brookhaven National Lab. (BNL), Upton, NY (United States); Cuadra, A. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2015-01-30

    The NIST Center for Neutron Research (NCNR) is a reactor-laboratory complex providing the National Institute of Standards and Technology (NIST) and the nation with a world-class facility for the performance of neutron-based research. The heart of this facility is the NIST research reactor (aka NBSR); a heavy water moderated and cooled reactor operating at 20 MW. It is fueled with high-enriched uranium (HEU) fuel elements. A Global Threat Reduction Initiative (GTRI) program is underway to convert the reactor to low-enriched uranium (LEU) fuel. This program includes the qualification of the proposed fuel, uranium and molybdenum alloy foil clad in an aluminum alloy, and the development of the fabrication techniques. This report is a preliminary version of the Safety Analysis Report (SAR) that would be submitted to the U.S. Nuclear Regulatory Commission (NRC) for approval prior to conversion. The report follows the recommended format and content from the NRC codified in NUREG-1537, “Guidelines for Preparing and Reviewing Applications for the Licensing of Non-power Reactors,” Chapter 18, “Highly Enriched to Low-Enriched Uranium Conversions.” The emphasis in any conversion SAR is to explain the differences between the LEU and HEU cores and to show the acceptability of the new design; there is no need to repeat information regarding the current reactor that will not change upon conversion. Hence, as seen in the report, the bulk of the SAR is devoted to Chapter 4, Reactor Description, and Chapter 13, Safety Analysis.

  5. Analysis of Sensitivity Experiments - An Expanded Primer

    Science.gov (United States)

    2017-03-08

    conducted with this purpose in mind. Due diligence must be paid to the structure of the dosage levels and to the number of trials. The chosen data...analysis. System reliability is of paramount importance for protecting both the investment of funding and human life. Failing to accurately estimate

  6. Sensitivity analysis of hybrid thermoelastic techniques

    Science.gov (United States)

    W.A. Samad; J.M. Considine

    2017-01-01

    Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...

  7. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs

  8. Study on the performances of an absolute atomic gravimeter: limit sensitivity and preliminary accuracy

    International Nuclear Information System (INIS)

    Le Gouet, J.

    2008-02-01

    Atom interferometry is applied to the absolute measurement of the gravity acceleration g, to provide an accurate value for the realization of the LNE watt balance. The atomic source is obtained from a cloud of cold rubidium-87 atoms. Two vertical counter-propagating laser beams are used to generate stimulated Raman transitions that separate the wave-packets and make them interfere. During the transitions, the phase difference between the beams is printed on the phase of the free-falling atoms. Then the atomic phase shift between the two vertical paths becomes sensitive to the atom acceleration and allows obtaining an accurate value of g. A part of this manuscript is dedicated to the study of noise sources which deteriorate the measurement sensitivity. In particular, we detail the vibrations contribution, which we are able to reduce by a factor of 3 to 10, depending on the configuration, thanks to the measurement of a seismometer and to its analog processing. The best reported sensitivity, in an optimal environment, is 1.4*10^-8 g/Hz^1/2. The study of the measurement accuracy also represents an important part of this work. Although the vacuum chamber was only temporary, we started to list the systematic shifts. According to two comparisons with well-known absolute gravimeters based on optical interferometry, our measurement shows a residual bias of 16*10^-9 g. (author)

  9. Hair dye dermatitis and p-phenylenediamine contact sensitivity: A preliminary report

    Directory of Open Access Journals (Sweden)

    Mrinal Gupta

    2015-01-01

    Full Text Available Background: The contact allergic reactions from p-phenylenediamine (PPD) in hair dyes vary from mild contact dermatitis to severe life-threatening events (angioedema, bronchospasm, asthma, renal impairment). Objectives: To study the clinical patterns and PPD contact sensitivity in patients with hair-dye dermatitis. Materials and Methods: Eighty (M:F 47:33) consecutive patients aged between 18 and 74 years suspected to have contact allergy from hair dye were studied by patch testing with the Indian Standard Series including p-phenylenediamine (PPD, 1.0% pet.). Results: Fifty-four (M:F 21:33) patients showed positive patch tests from PPD. Eight of these patients also showed positive patch test reactions from fragrance mix, thiuram mix, paraben mix, or colophony. Fifty-seven (71%) of the patients affected were aged older than 40 years. The duration of dermatitis varied from 1 year, with exacerbation following hair coloring. Forty-nine patients had dermatitis of the scalp and/or scalp margins and 23 patients had face and neck dermatitis. Periorbital dermatitis, chronic actinic dermatitis, and erythema multiforme-like lesions were seen in 4, 2, and 1 patients, respectively. Conclusions: Hair dyes and PPD constitute a significant cause of contact dermatitis. There is an urgent need for creating consumer awareness regarding hair-dye contact sensitivity and the significance of performing sensitivity testing prior to actual use.

  10. Preliminary comparative proteomics study of cervical carcinoma tissues with different sensitivity to concurrent chemoradiotherapy

    International Nuclear Information System (INIS)

    Zhu Hong; Liao Yuping; Zeng Liang; Xiao Zhiqiang

    2008-01-01

    Objective: To investigate the proteomic differences between the high-sensitivity (HS) and the low-sensitivity (LS) groups of cervical carcinoma treated by concurrent chemoradiotherapy, and to confirm the sensitivity-associated proteins in intermediate-stage and advanced cervical carcinoma. Methods: Fresh carcinoma tissues were collected from 10 untreated cervical carcinoma patients. According to the response to concurrent chemoradiotherapy, the tissues were classified into the HS group and the LS group. In the first part of our experiment, protein separation was performed using two-dimensional gel electrophoresis (2-DE) with Amersham 18 cm linear pH 3-10 immobilized pH gradient (IPG) strips. The images of the gels were analyzed by PD-quest 7.0 software to find the differentially expressed protein-spots in each group. The differentially expressed protein-spots were then excised from the gels and digested by trypsin. The peptide mass fingerprints (PMF) were acquired by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). The proteins were identified by searching the Mascot database. Two differentially expressed proteins were assayed by western blot and immunohistochemical methods. Results: Most of the gels were clear and successfully analyzed by PD-quest 7.0 software. Most of the protein-spots were concentrated in the area of 20-100 kDa (Mw) and pH 4-8. The average number of protein-spots was 781 ± 74 in the HS group and 766 ± 52 in the LS group. The match rate between the two groups was 87.6%. Eight proteins highly expressed in the HS group but lowly expressed in the LS group included hemoglobin subunit beta, caspase-14 precursor, calmodulin-like, S100-A9 protein (MRP-14), galectin-7, HSKERC4, keratin 19 and actin. Ten proteins highly expressed in the LS group but lowly expressed in the HS group included anti-HBs antibody light-chain Fab, lamin-B1, WARS protein, flavin reductase, glutamate dehydrogenase 1, nuclear matrix protein 238, retinal dehydrogenase 1, AF165172

  11. Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics

    DEFF Research Database (Denmark)

    Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter

    2014-01-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...

  12. Global and Local Sensitivity Analysis Methods for a Physical System

    Science.gov (United States)

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…

  13. Adjoint sensitivity analysis of high frequency structures with Matlab

    CERN Document Server

    Bakr, Mohamed; Demir, Veysel

    2017-01-01

    This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.

  14. A Preliminary Tsunami vulnerability analysis for Bakirkoy district in Istanbul

    Science.gov (United States)

    Tufekci, Duygu; Lutfi Suzen, M.; Cevdet Yalciner, Ahmet; Zaytsev, Andrey

    2016-04-01

    The resilience of coastal utilities after earthquakes and tsunamis is of major importance for efficient and proper rescue and recovery operations soon after the disasters. Vulnerability assessment of coastal areas under extreme events is likewise of major importance for preparedness and the development of mitigation strategies. The Sea of Marmara has experienced numerous earthquakes as well as associated tsunamis. There are a variety of coastal facilities, such as ports, small craft harbors, terminals for maritime transportation, waterfront roads and business centers, mainly on the north coast of the Sea of Marmara in the megacity of Istanbul. A detailed vulnerability analysis for the Yenikapi region and a detailed resilience analysis for Haydarpasa port in Istanbul were studied previously by Cankaya et al. (2015) and Aytore et al. (2015) in the SATREPS project. In this study, the methodology of vulnerability analysis under tsunami attack given in Cankaya et al. (2015) is modified and applied to the Bakirkoy district of Istanbul. Bakirkoy district is located in the western part of Istanbul and faces the north coast of the Sea of Marmara from 28.77°E to 28.89°E. The high-resolution spatial dataset of the Istanbul Metropolitan Municipality (IMM) is used and analyzed. The bathymetry and topography database and the spatial dataset containing all buildings/structures/infrastructures in the district are collated and utilized for tsunami numerical modeling and the subsequent vulnerability analysis. The tsunami parameters from deterministically defined worst case scenarios are computed from simulations using the tsunami numerical model NAMI DANCE. The vulnerability and resilience assessment parameters for the district are defined and scored by implementing a GIS-based TVA with appropriate MCDA methods. The risk level is computed using tsunami intensity (level of flow depth from simulations) and TVA results at every location in Bakirkoy district. The preliminary results are presented and discussed.

  15. Preliminary Analysis of a Loss of Condenser Vacuum Accident Using the MARS-KS Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jieun Kim; Bang, Young Seok; Oh, Deog Yeon; Kim, Kap; Woo, Sweng-Wong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-05-15

    In accordance with the revision of USNRC NUREG-0800, the area of review for the loss of condenser vacuum (LOCV) accident has been expanded to analyze the peak pressures of the primary and secondary systems separately. To date, the analysis of the LOCV accident, which is caused by a malfunction of the condenser, has focused on fuel cladding integrity and the peak pressure in the primary system. In this paper, an accident analysis for the LOCV using the MARS-KS code was conducted to support the licensing review of the transient behavior of the secondary system pressure of the APR1400 plant. The accident analysis for the loss of condenser vacuum (LOCV) of the APR1400 was conducted with the MARS-KS code to support the review of the pressure behavior of the primary and secondary systems. A total of four cases with different combinations of offsite power availability and pressurizer spray availability are considered. The preliminary analysis results show that the initial conditions or assumptions which lead to the most severe consequence differ for each viewpoint and, in some cases, can conflict with each other. Therefore, with regard to each acceptance criterion, identification and sensitivity analysis of the initial conditions and assumptions governing system pressure would be necessary.

  16. Dispersion sensitivity analysis & consistency improvement of APFSDS

    Directory of Open Access Journals (Sweden)

    Sangeeta Sharma Panda

    2017-08-01

    In-bore balloting motion simulation shows that a reduction in residual spin of about 5% results in a drastic 56% reduction in first maximum yaw. A correlation between first maximum yaw and residual spin is observed. The results of the data analysis are used in design modifications for existing ammunition. A number of designs are evaluated numerically before five designs are frozen for further soundings. These designs are critically assessed in terms of their comparative performance during the in-bore travel and external ballistics phases. The results are validated by free flight trials for the finalised design.

  17. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    Science.gov (United States)

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.

  18. Sensitivity analysis of the RESRAD, a dose assessment code

    International Nuclear Information System (INIS)

    Yu, C.; Cheng, J.J.; Zielen, A.J.

    1991-01-01

    The RESRAD code is a pathway analysis code that is designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters, such as soil properties and food ingestion rates, in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) a built-in graphics package that shows parameter sensitivities while the RESRAD code is operational.
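
    Of the three methods listed, direct perturbation of input parameters is the simplest to reproduce; the sketch below applies it to a hypothetical surrogate dose expression (not the RESRAD pathway models) and reports the normalized sensitivity of each input.

```python
import numpy as np

def dose(params):
    # Hypothetical surrogate for a pathway dose calculation (mrem/yr).
    return (params["ingestion_rate"] * params["soil_conc"] * 0.35
            + params["inhalation_rate"] * params["soil_conc"] * 0.02)

base = {"ingestion_rate": 100.0, "inhalation_rate": 8400.0, "soil_conc": 1.0}
base_dose = dose(base)

for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10                      # +10% direct perturbation
    rel_change = (dose(perturbed) - base_dose) / base_dose
    # Normalized sensitivity: % change in output per % change in input.
    print(f"{name:16s} sensitivity = {rel_change / 0.10:+.2f}")
```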

  19. A sensitivity analysis approach to optical parameters of scintillation detectors

    International Nuclear Information System (INIS)

    Ghal-Eh, N.; Koohi-Fayegh, R.

    2008-01-01

    In this study, an extended version of the Monte Carlo light transport code PHOTRACK has been used for a sensitivity analysis to estimate the importance of different wavelength-dependent parameters in the modelling of the light collection process in scintillators.

  20. sensitivity analysis on flexible road pavement life cycle cost model

    African Journals Online (AJOL)

    user

    of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study .... organizations and specific projects needs based. Life-cycle ... developed and completed urban road infrastructure corridor ...

  1. Sobol’ sensitivity analysis for stressor impacts on honeybee colonies

    Science.gov (United States)

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...
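
    A minimal sketch of how such Sobol' indices can be estimated with the open-source SALib package is given below; the toy colony-size function and factor names are hypothetical stand-ins, since VarroaPop itself is not callable here.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["queen_strength", "forage_success", "mite_load"],   # hypothetical factor names
    "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
}

def colony_size(x):
    # Toy stand-in for the hive-population output of the exposure model.
    return 50_000 * x[0] * x[1] - 30_000 * x[2] * x[1]

param_values = saltelli.sample(problem, 1024)            # N * (2D + 2) model evaluations
Y = np.apply_along_axis(colony_size, 1, param_values)
Si = sobol.analyze(problem, Y)

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:16s}  first-order={s1:.2f}  total-order={st:.2f}")
```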

  2. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as

  3. Sensitivity analysis of a greedy heuristic for knapsack problems

    NARCIS (Netherlands)

    Ghosh, D; Chakravarti, N; Sierksma, G

    2006-01-01

    In this paper, we carry out parametric analysis as well as a tolerance limit based sensitivity analysis of a greedy heuristic for two knapsack problems-the 0-1 knapsack problem and the subset sum problem. We carry out the parametric analysis based on all problem parameters. In the tolerance limit

  4. Sensitivity analysis of numerical solutions for environmental fluid problems

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Motoyama, Yasunori

    2003-01-01

    In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. Once a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the exact solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)

  5. Risk Assessment of Healthcare Waste by Preliminary Hazard Analysis Method

    Directory of Open Access Journals (Sweden)

    Pouran Morovati

    2017-09-01

    Full Text Available Introduction and purpose: Improper management of healthcare waste (HCW) can pose considerable risks to human health and the environment and cause serious problems in developing countries such as Iran. In this study, we sought to determine the hazards of HCW in the public hospitals affiliated to Abadan School of Medicine using the preliminary hazard analysis (PHA) method. Methods: In this descriptive and analytic study, a health risk assessment of HCW in government hospitals affiliated to Abadan School of Medicine (4 public hospitals) was carried out using PHA in the summer of 2016. Results: We noted the high risk of sharps and infectious wastes. Considering the dual risk of injury and disease transmission, sharps were classified in the very high-risk group, and pharmaceutical, chemical and radioactive wastes were classified in the medium-risk group. Sharps posed the highest risk, while pharmaceutical and chemical wastes had the lowest risk. Among the various stages of waste management, the waste treatment stage was the most hazardous in all the studied hospitals. Conclusion: To diminish the risks associated with healthcare waste management in the studied hospitals, adequate training of healthcare workers and care providers, provision of suitable personal protective and transportation equipment, and supervision of the environmental health manager of hospitals should be considered by the authorities.

  6. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization

    Science.gov (United States)

    Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.

    2014-01-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544

  7. High sensitivity analysis of atmospheric gas elements

    International Nuclear Information System (INIS)

    Miwa, Shiro; Nomachi, Ichiro; Kitajima, Hideo

    2006-01-01

    We have investigated the detection limit of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit at a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 x 10^17 atoms/cm^3 for H, of 3 x 10^16 atoms/cm^3 for C and of 2 x 10^16 atoms/cm^3 for O could be achieved in silicon with a sputtering rate of 2 nm/s after a primary ion bombardment for 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it

  8. High sensitivity analysis of atmospheric gas elements

    Energy Technology Data Exchange (ETDEWEB)

    Miwa, Shiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan)]. E-mail: Shiro.Miwa@jp.sony.com; Nomachi, Ichiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan); Kitajima, Hideo [Nanotechnos Corp., 5-4-30 Nishihashimoto, Sagamihara 229-1131 (Japan)

    2006-07-30

    We have investigated the detection limit of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit at a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 x 10^17 atoms/cm^3 for H, of 3 x 10^16 atoms/cm^3 for C and of 2 x 10^16 atoms/cm^3 for O could be achieved in silicon with a sputtering rate of 2 nm/s after a primary ion bombardment for 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it.

  9. Sensitivity Analysis of BLISK Airfoil Wear †

    Directory of Open Access Journals (Sweden)

    Andreas Kellersmann

    2018-05-01

    Full Text Available The decreasing performance of jet engines during operation is a major concern for airlines and maintenance companies. Among other effects, the erosion of high-pressure compressor (HPC) blades is a critical one and leads to a changed aerodynamic behavior, and therefore to a change in performance. The maintenance of BLISKs (blade-integrated-disks) is especially challenging because the blade arrangement cannot be changed and individual blades cannot be replaced. Thus, coupled deteriorated blades have a complex aerodynamic behavior which can have a stronger influence on compressor performance than in a conventional HPC. To ensure effective maintenance for BLISKs, the impact of coupled misshaped blades is the key factor. The present study addresses these effects on the aerodynamic performance of a first-stage BLISK of a high-pressure compressor. Therefore, a design of experiments (DoE) is done to identify the geometric properties which lead to a reduction in performance. It is shown that the effect of coupled variances is dependent on the operating point. Based on the DoE analysis, the thickness-related parameters, the stagger angle, and the max. profile camber as coupled parameters are identified as the most important parameters for all operating points.

  10. Sensitivity analysis for matched pair analysis of binary data: From worst case to average case analysis.

    Science.gov (United States)

    Hasegawa, Raiden; Small, Dylan

    2017-12-01

    In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.
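
    For context, the worst-case calibration that the paper relaxes can be sketched for binary matched pairs as follows (hypothetical counts, a Rosenbaum-style bound rather than the authors' average-case method): under a bias of at most gamma, the chance that the treated unit in a discordant pair is the one with the event is at most gamma/(1+gamma), which bounds the one-sided McNemar p-value.

```python
from scipy.stats import binom

def worst_case_pvalue(t_discordant, d_discordant, gamma):
    """Upper bound on the one-sided McNemar p-value under bias at most gamma.

    t_discordant: discordant pairs in which the treated unit had the event.
    d_discordant: total number of discordant pairs.
    """
    p_plus = gamma / (1.0 + gamma)                  # worst-case per-pair probability
    return binom.sf(t_discordant - 1, d_discordant, p_plus)

# Hypothetical data: 200 discordant pairs, 130 with the event on the treated side.
for gamma in (1.0, 1.5, 2.0, 2.5):
    print(f"gamma={gamma:.1f}  worst-case p <= {worst_case_pvalue(130, 200, gamma):.4f}")
```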

  11. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

    Full Text Available As a new formulation in structural analysis, Integrated Force Method has been successfully applied to many structures for civil, mechanical, and aerospace engineering due to the accurate estimate of forces in computation. Right now, it is being further extended to the probabilistic domain. For the assessment of uncertainty effect in system optimization and identification, the probabilistic sensitivity analysis of IFM was further investigated in this study. A set of stochastic sensitivity analysis formulation of Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to the existing program since the models of stochastic finite element and stochastic design sensitivity are almost identical.

  12. The EVEREST project: sensitivity analysis of geological disposal systems

    International Nuclear Information System (INIS)

    Marivoet, Jan; Wemaere, Isabelle; Escalier des Orres, Pierre; Baudoin, Patrick; Certes, Catherine; Levassor, Andre; Prij, Jan; Martens, Karl-Heinz; Roehlig, Klaus

    1997-01-01

    The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models and uncertainty in the considered scenarios. Deterministic as well as stochastic calculational approaches have been applied for the sensitivity analyses. For the analysis of the sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied
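
    The "Monte Carlo simulation followed by a linear regression" reference technique mentioned above can be sketched as follows, with a toy response standing in for an actual repository performance-assessment chain; the standardized regression coefficients serve as the parameter sensitivity measures.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Hypothetical sampled parameters: solubility, sorption coefficient, flow rate.
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n, 3))
# Toy "dose" response in place of the performance-assessment model chain.
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 5.0 * X[:, 2] + rng.normal(0, 0.5, n)

# Standardized regression coefficients (SRCs): regress standardized y on standardized X.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]), ys, rcond=None)
print("SRCs:", np.round(src[1:], 3))
```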

  13. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  14. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  15. Cost analysis of small hydroelectric power plants components and preliminary estimation of global cost

    International Nuclear Information System (INIS)

    Basta, C.; Olive, W.J.; Antunes, J.S.

    1990-01-01

    An analysis of the cost of each component of a Small Hydroelectric Power Plant, taking into account the real costs of these projects, is shown. A global equation which allows a preliminary estimation of the cost of each construction is also presented. (author)

  16. Photobiomodulation in the Prevention of Tooth Sensitivity Caused by In-Office Dental Bleaching. A Randomized Placebo Preliminary Study.

    Science.gov (United States)

    Calheiros, Andrea Paiva Corsetti; Moreira, Maria Stella; Gonçalves, Flávia; Aranha, Ana Cecília Correa; Cunha, Sandra Ribeiro; Steiner-Oliveira, Carolina; Eduardo, Carlos de Paula; Ramalho, Karen Müller

    2017-08-01

    To analyze the effect of photobiomodulation in the prevention of tooth sensitivity after in-office dental bleaching. Tooth sensitivity is a common clinical consequence of dental bleaching. Therapies for the prevention of sensitivity have been investigated in the literature. This study was developed as a randomized, placebo-blind clinical trial. Fifty patients were selected (n = 10 per group) and randomly divided into five groups: (1) control, (2) placebo, (3) laser before bleaching, (4) laser after bleaching, and (5) laser before and after bleaching. Irradiation was performed perpendicularly, in contact, on each tooth for 10 sec per point at two points. The first point was positioned in the middle of the tooth crown and the second in the periapical region. Photobiomodulation was applied using the following parameters: 780 nm, 40 mW, 10 J/cm^2, 0.4 J per point. Pain was analyzed before, immediately after, and for seven subsequent days after bleaching. Patients were instructed to report pain using the scale: 0 = no tooth sensitivity, 1 = gentle sensitivity, 2 = moderate sensitivity, 3 = severe sensitivity. There were no statistical differences between groups at any time (p > 0.05). More studies, with other parameters and different methods of tooth sensitivity analysis, should be performed to complement the results found. Within the limitations of the present study, the laser parameters of photobiomodulation tested were not efficient in preventing tooth sensitivity after in-office bleaching.

  17. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    Science.gov (United States)

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  18. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters mostly affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and the mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
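
    The LHS-plus-PRCC part of the workflow can be sketched as follows, with a toy runoff-like response standing in for SWMM: each input and the output are rank-transformed, the influence of the other inputs is regressed out of both, and the residuals are correlated.

```python
import numpy as np
from scipy.stats import qmc, rankdata

rng = np.random.default_rng(1)
sampler = qmc.LatinHypercube(d=3, seed=1)
X = qmc.scale(sampler.random(n=2000), [0.1, 0.0, 1.0], [0.9, 5.0, 3.0])

# Toy stand-in for a SWMM runoff output (strong in x0, moderate in x1, none in x2).
y = np.exp(2.0 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=len(X))

def prcc(X, y):
    R = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
    ry = rankdata(y)
    out = []
    for j in range(R.shape[1]):
        others = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        # Residuals after removing the (rank-transformed) influence of the other inputs.
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

print("PRCC:", np.round(prcc(X, y), 3))
```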

  19. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    KAUST Repository

    Navarro, María

    2016-12-26

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  20. Preliminary assessment of terrestrial microalgae isolated from lichens as testing species for environmental monitoring: lichen phycobionts present high sensitivity to environmental micropollutants.

    Science.gov (United States)

    Domínguez-Morueco, N; Moreno, H; Barreno, E; Catalá, M

    2014-01-01

    Bioassays constitute a tool for pollution analysis, providing a holistic approach and high-quality indication of toxicity. Microbioassays allow evaluating the toxicity of many samples, implying lower costs and enabling routine monitoring and pollution control. However, the tests conducted so far are limited to the use of a small number of taxa. Lichens are excellent bioindicators of pollution with great ecological significance. Studies show that the phycobiont is more sensitive to pollutants than the mycobiont. Phycobionts have features such as adaptation to anhydrobiosis and relatively rapid growth in vitro, making them suitable for microbioassays. Our aim is to determine the sensitivity of phycobionts to the pharmaceutical micropollutants carbamazepine and diclofenac as a preliminary step in the development of a toxicity microbioassay based on phycobionts. Optical dispersion and chlorophyll autofluorescence were used as endpoints of toxicity in two algal species, showing that the suspensions present cyclic and taxon-specific patterns of aggregation. Trebouxia TR9 suspensions present a very high grade of aggregation, while Asterochloris erici cells do not. Both micropollutants alter the optical properties of the suspensions of both species. No significant alteration of chlorophyll autofluorescence by carbamazepine is observed. A. erici chlorophyll autofluorescence is extremely sensitive to diclofenac, but the effect is not dependent on the drug concentration or on the time of exposure. In contrast, TR9 only shows isolated chlorophyll alterations. Fluctuations in optical dispersion may indicate changes in the population structure of the species, including reproductive strategy. A. erici seems more sensitive to micropollutants, is better characterized, and is available from commercial collections. © 2013 Published by Elsevier Inc.

  1. The Organic Food Market and Marketing Initiatives in Europe: a Preliminary Analysis

    DEFF Research Database (Denmark)

    Kristensen, Niels Heine; Nielsen, Thorkild; Bruselius-Jensen, Maria Louisa

    2003-01-01

    Kristensen NH, Nielsen T, Bruselius-Jensen M, Scheperlen-Bøgh P, Beckie M, Foster C, Midmore P, Padel S (2002): The Organic Food Market and Marketing Initiatives in Europe: a Preliminary Analysis. Final Report to the EU Commission.

  2. Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.

    Science.gov (United States)

    Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun

    2017-12-01

    Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate the allergen sensitization characteristics according to gender. Multiple allergen simultaneous test (MAST) is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters. Each cluster had characteristic features. Compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, manifesting allergen similarity or co-exposure. Only the fungus cluster allergens tend to sensitize the female group more frequently than the male group.

  3. A general first-order global sensitivity analysis method

    International Nuclear Information System (INIS)

    Xu Chonggang; Gertner, George Zdzislaw

    2008-01-01

    Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters

  4. Preliminary Study of Position-Sensitive Large-Area Radiation Portal Monitor

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Chang Hwy; Kim, Hyunok; Moon, Myung Kook [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Jongyul [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Jong Won; Lim, Yong Kon [Korea Institute of Ocean Science and Technology, Daejeon (Korea, Republic of)

    2013-10-15

    An RPM, which is a passive inspection method, is a system for monitoring the movement of radioactive materials at an airport, seaport, border, etc. To detect γ-rays, an RPM using a plastic scintillator is generally used. The method of γ-ray detection using an RPM with a plastic scintillator is to measure the light generated by an incident γ-ray in the scintillator. Generally, a large-area RPM uses one or two photomultiplier tubes (PMTs) for light collection. However, in this study, we developed a 4-ch RPM that can measure the radiation signal using 4 PMTs. The reason for using 4 PMTs is to calculate the position of the radiation source. In addition, we developed an electronic device for acquisition of the 4-ch output signals at the same time. To estimate the performance of the developed RPM, we performed an RPM test using a 60Co γ-ray check source. In this study, we performed the development of a 4-ch RPM. The major function of a typical RPM is to measure the radiation. However, we developed a position-sensitive 4-ch RPM, which can be used to measure the location of the radiation source as well as the radiation itself at the same time. In the future, we plan to develop an algorithm for position detection of the radiation source. In addition, the algorithm will be applied to the RPM.

  5. Reversing the mere exposure effect in spider fearfuls: Preliminary evidence of sensitization.

    Science.gov (United States)

    Becker, Eni S; Rinck, Mike

    2016-12-01

    A mere exposure effect (MEE) is said to occur when individuals' liking of a suboptimally and repeatedly presented stimulus increases compared to never-presented stimuli, while they are unable to indicate which stimuli were previously presented and which were not. In two experiments, we used the MEE to study automatic evaluative processes in highly spider-fearful individuals (SFs). Pictures of spiders and butterflies were repeatedly presented suboptimally to SFs and to non-anxious controls (NACs). In Experiment 1, both groups showed the MEE for butterflies, preferring previously presented butterfly pictures over new ones. For spider pictures, only NACs showed an MEE, whereas SFs showed no preference. Experiment 2 involved a more unpleasant presentation situation, because for each picture, participants had the difficult task to indicate what had been presented to them. This led to a reversed MEE for spiders in SFs: they preferred new spider pictures over previously presented ones. In both experiments, no evidence was observed for the ability to differentiate between old and new pictures. The results are tentatively explained within Zajonc's theory of the MEE, and they are related to the concept of sensitization in anxiety disorders. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Oxygenates in automotive fuels. Consequence analysis - preliminary study

    International Nuclear Information System (INIS)

    Brandberg, Aa.; Saevbark, B.

    1994-01-01

    Oxygenates are used in gasoline for several reasons. They are added as high-octane components in unleaded gasoline and as agents to reduce the emission of harmful substances. Oxygenates produced from biomass might constitute a coming market for alternative fuels. This preliminary study describes the prerequisites and consequences of such oxygenate utilization. 39 refs, 9 figs, 5 tabs

  7. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    International Nuclear Information System (INIS)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae

    2016-01-01

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality according to different shapes of nuclear fuel. Criticality analysis was performed using MCNP5. MCNP5 is a well-known general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. In the sensitivity analysis for simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not proportional to the surface area. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but for intervals greater than 8 mm the trend reverses and the criticality decreases with larger intervals. As a result, no single rule common to all cases could be established. A sensitivity analysis of criticality is therefore always required whenever the subject to be analyzed changes.

  8. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae [NESS, Daejeon (Korea, Republic of)

    2016-10-15

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality according to different shapes of nuclear fuel. Criticality analysis was performed using MCNP5. MCNP5 is a well-known general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. In the sensitivity analysis for simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not proportional to the surface area. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but for intervals greater than 8 mm the trend reverses and the criticality decreases with larger intervals. As a result, no single rule common to all cases could be established. A sensitivity analysis of criticality is therefore always required whenever the subject to be analyzed changes.

  9. [Management of cytostatic drugs by nurses: analysis of preliminary results].

    Science.gov (United States)

    Bilski, Bartosz

    2004-01-01

    Cytostatic drugs pose a quite specific occupational risk to health care workers. There is a wide range of potential harmful effects, including remote effects, exerted by this group of drugs. In Polish and international regulations, standards of work safety and hygiene concerning these substances are clearly defined. Nevertheless, working conditions in Polish health care institutions are now mostly influenced by economic and organizational problems, which may also be reflected in the compliance with the work safety rules. This paper presents a preliminary analysis of the subjective assessment of practice with regard to the management of cytostatics reported by nurses, the occupational group most exposed to these substances. The study was carried out at hospital departments in the Warmińsko-Mazurskie Voivodship, where exposure of the staff to these drugs was observed. The study covered the whole exposed nursing staff. Completed questionnaires were obtained from 60 nurses, with a mean age of 32 years (range 20-54 years) and mean job seniority of 8 years (range 2-18 years), including 58 nurses with secondary education and two university graduates. Undergraduate education did not equip the respondents with skills for working with cytostatics. There is a need to increase the involvement of nursing schools, research institutes and teaching hospitals in the improvement of vocational training of nurses working with cytostatic drugs. To this end, all nurses should be covered by obligatory training on how to handle this group of drugs. The respondents reported that they had acquired their knowledge and experience of managing cytostatics in their work and during training organized at the workplace. Despite the acquired knowledge and experience, the interviewed nurses did not always comply with work safety and hygiene regulations. The problem of exposure to cytostatic drugs in the form of tablets was most frequently neglected. Some of the nurses were additionally exposed to ionizing radiation. Shortage of the nursing

  10. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. The new direct method of calculating the first-order sensitivity coefficients for chemical kinetics using sparse matrix technology is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines of the model equation, the sensitivity coefficient equations, and their Jacobian analytical expressions are generated automatically from a chemical mechanism. The kinetic representation for the model equation and its sensitivity coefficient equations, and their Jacobian matrix, is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28 and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
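
    The direct method described above couples the model ODE with its sensitivity equations ds/dt = J s + ∂f/∂k and integrates them together. The following is a minimal sketch for a single first-order decay reaction, using SciPy's solve_ivp rather than the Gear/SLODE/MA28 FORTRAN machinery of the package; the rate constant and time span are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Direct method for a toy first-order decay A -> B with rate constant k:
#   dc/dt = -k * c
# The sensitivity s = dc/dk obeys the coupled equation
#   ds/dt = d/dk(-k c) = -c - k * s
def rhs(t, z, k):
    c, s = z
    return [-k * c, -c - k * s]

k = 0.5                                   # rate constant (1/s), illustrative value
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], args=(k,),
                dense_output=True, rtol=1e-8, atol=1e-10)

t = np.linspace(0.0, 10.0, 5)
c, s = sol.sol(t)
# Analytical check: c = exp(-k t), so dc/dk = -t exp(-k t)
print(np.allclose(s, -t * np.exp(-k * t), atol=1e-6))
```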

  11. Sensitivity analysis of the nuclear data for MYRRHA reactor modelling

    International Nuclear Information System (INIS)

    Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan

    2014-01-01

    A global sensitivity analysis of the effective neutron multiplication factor keff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of contributions of individual evaluations into the keff sensitivity allowed establishing the priority list of nuclides for which uncertainties on nuclear data must be improved. Detailed sensitivity analysis has been performed for two nuclides from this list, 56Fe and 238Pu. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on keff, the reaction cross-sections and multiplicities in one evaluation have been substituted by the corresponding data from other evaluations. (authors)

  12. Cooling via one hand improves physical performance in heat-sensitive individuals with Multiple Sclerosis: A preliminary study

    Directory of Open Access Journals (Sweden)

    Murray Julie

    2008-05-01

    Full Text Available Abstract Background Many individuals afflicted with multiple sclerosis (MS) experience a transient worsening of symptoms when body temperature increases due to ambient conditions or physical activity. The resulting symptom exacerbations can limit performance. We hypothesized that extraction of heat from the body through the subcutaneous retia venosa that underlie the palmar surfaces of the hands would reduce exercise-related heat stress and thereby increase the physical performance capacity of heat-sensitive individuals with MS. Methods Ten ambulatory MS patients completed one or more randomized paired trials of walking on a treadmill in a temperate environment with and without cooling. Stop criteria were symptom exacerbation and subjective fatigue. The cooling treatment entailed inserting one hand into a rigid chamber through an elastic sleeve that formed an airtight seal around the wrist. A small vacuum pump created a -40 mm Hg subatmospheric pressure environment inside the chamber, where the palmar surface of the hand rested on a metal surface maintained at 18–22°C. During the treatment trials, the device was suspended above the treadmill on a bungee cord so the subjects could comfortably keep a hand in the device without having to bear its weight while walking on the treadmill. Results When the trials were grouped by treatment only, cooling treatment increased exercise duration by 33% (43.6 ± 17.1 min with treatment vs. 32.8 ± 10.9 min without treatment, mean ± SD, p < 10⁻⁶, paired t-test, n = 26). When average values were calculated for the subjects who performed multiple trials before the treatment group results were compared, cooling treatment increased exercise duration by 35% (42.8 ± 16.4 min with treatment vs. 31.7 ± 9.8 min without treatment, mean ± SD). Conclusion These preliminary results suggest that utilization of the heat transfer capacity of the non-hairy skin surfaces can enable temperature-sensitive individuals with MS to

  13. Deterministic Local Sensitivity Analysis of Augmented Systems - I: Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan G.; Ionescu-Bujor, Mihaela

    2005-01-01

    This work provides the theoretical foundation for the modular implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for large-scale simulation systems. The implementation of the ASAP commences with a selected code module and then proceeds by augmenting the size of the adjoint sensitivity system, module by module, until the entire system is completed. Notably, the adjoint sensitivity system for the augmented system can often be solved by using the same numerical methods used for solving the original, nonaugmented adjoint system, particularly when the matrix representation of the adjoint operator for the augmented system can be inverted by partitioning

  14. The identification of model effective dimensions using global sensitivity analysis

    International Nuclear Information System (INIS)

    Kucherenko, Sergei; Feil, Balazs; Shah, Nilay; Mauntz, Wolfgang

    2011-01-01

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.

  15. The identification of model effective dimensions using global sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kucherenko, Sergei, E-mail: s.kucherenko@ic.ac.u [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Feil, Balazs [Department of Process Engineering, University of Pannonia, Veszprem (Hungary); Shah, Nilay [CPSE, Imperial College London, South Kensington Campus, London SW7 2AZ (United Kingdom); Mauntz, Wolfgang [Lehrstuhl fuer Anlagensteuerungstechnik, Fachbereich Chemietechnik, Universitaet Dortmund (Germany)

    2011-04-15

    It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.
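
    A rough sketch of the total-effect indices mentioned above, using the Jansen Monte Carlo estimator: for a purely additive test function the total and first-order indices coincide, which is the signature of an effective dimension of one in the superposition sense. Python/NumPy is assumed, and the test function, sample size, and function names are illustrative stand-ins rather than part of the paper.

```python
import numpy as np

def total_sobol(model, d, n=100_000, seed=None):
    """Total-effect Sobol' indices via the Jansen estimator (used, together with
    the first-order indices, to gauge the effective dimension in the
    superposition sense)."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA = model(A)
    var_y = np.var(yA)
    st = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]                               # perturb only input i
        yAB = model(AB)
        st[i] = 0.5 * np.mean((yA - yAB) ** 2) / var_y   # Jansen (1999) estimator
    return st

if __name__ == "__main__":
    # Additive test function: every total index should match its first-order index,
    # i.e. the effective dimension in the superposition sense is 1.
    def additive(x):
        return x[:, 0] + 2.0 * x[:, 1] + 4.0 * x[:, 2]
    print(total_sobol(additive, d=3, n=200_000, seed=2))  # approx. 1/21, 4/21, 16/21
```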

  16. Application of Sensitivity Analysis in Design of Sustainable Buildings

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik; Rasmussen, Henrik

    2009-01-01

    satisfies the design objectives and criteria. In the design of sustainable buildings, it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions. Sensitivity analyses make it possible to identify...... possible to influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for design of an office building in Denmark....

  17. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
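
    A minimal sketch of the SRRC computation described above, assuming Python with NumPy/SciPy and a synthetic input-output sample in place of the IMM simulations: inputs and outputs are rank-transformed, standardized, and regressed, and the fitted coefficients serve as sensitivity measures. PRCC can be computed from the same sample by partialling out the other ranked inputs, as sketched earlier in this listing.

```python
import numpy as np
from scipy import stats

def srrc(X, y):
    """Standardized rank regression coefficients: rank-transform inputs and output,
    standardize, then fit a multiple linear regression; each coefficient measures
    the influence of one input adjusted for all the others."""
    R = np.column_stack([stats.rankdata(X[:, j]) for j in range(X.shape[1])])
    ry = stats.rankdata(y)
    Rz = (R - R.mean(axis=0)) / R.std(axis=0)   # standardized ranked inputs
    yz = (ry - ry.mean()) / ry.std()            # standardized ranked output
    coef, *_ = np.linalg.lstsq(Rz, yz, rcond=None)
    return coef

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.random((400, 4))                 # stand-in for simulated condition counts
    y = 3 * X[:, 0] + X[:, 1] ** 3 + 0.1 * rng.standard_normal(400)
    print(srrc(X, y))                        # last two inputs should be near zero
```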

  18. Sensitivity analysis of network DEA illustrated in branch banking

    OpenAIRE

    N. Avkiran

    2010-01-01

    Users of data envelopment analysis (DEA) often presume efficiency estimates to be robust. While traditional DEA has been exposed to various sensitivity studies, network DEA (NDEA) has so far escaped similar scrutiny. Thus, there is a need to investigate the sensitivity of NDEA, further compounded by the recent attention it has been receiving in literature. NDEA captures the underlying performance information found in a firm's interacting divisions or sub-processes that would otherwise remain ...

  19. Sensitivity analysis of periodic errors in heterodyne interferometry

    International Nuclear Information System (INIS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-01-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors

  20. Sensitivity analysis of periodic errors in heterodyne interferometry

    Science.gov (United States)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
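
    The local sensitivity analysis step described in these two records evaluates derivatives of the output about nominal input values before the global, variance-based stage. Below is a minimal finite-difference sketch of that step; the toy "error amplitude" function, parameter names, and nominal values are illustrative assumptions, not the analytical periodic error model of the paper.

```python
import numpy as np

def local_sensitivities(model, x0, rel_step=1e-4):
    """Normalized local sensitivities d(ln y)/d(ln x_i) about a nominal point x0,
    estimated with central finite differences."""
    x0 = np.asarray(x0, dtype=float)
    y0 = model(x0)
    sens = np.empty(x0.size)
    for i in range(x0.size):
        h = rel_step * max(abs(x0[i]), 1e-12)
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        dy_dx = (model(xp) - model(xm)) / (2.0 * h)
        sens[i] = dy_dx * x0[i] / y0          # logarithmic (relative) sensitivity
    return sens

if __name__ == "__main__":
    # Toy stand-in: amplitude of a periodic error as a product of misalignment and
    # polarization-leakage factors (illustrative only).
    def error_amplitude(p):
        misalignment, ellipticity, transmission = p
        return transmission * np.sin(misalignment) * (1.0 + ellipticity)
    print(local_sensitivities(error_amplitude, [0.02, 0.01, 0.98]))
```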

  1. MOVES sensitivity analysis update : Transportation Research Board Summer Meeting 2012 : ADC-20 Air Quality Committee

    Science.gov (United States)

    2012-01-01

    OVERVIEW OF PRESENTATION : Evaluation Parameters : EPA's Sensitivity Analysis : Comparison to Baseline Case : MOVES Sensitivity Run Specification : MOVES Sensitivity Input Parameters : Results : Uses of Study

  2. Sensitivity analysis of the reactor safety study. Final report

    International Nuclear Information System (INIS)

    Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.

    1979-01-01

    The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light water nuclear reactors. In order to give further insights into this study, a sensitivity analysis has been performed to determine the significant contributors to risk for both the PWR and BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were release category probabilities, core melt probability, and the risk parameters of early fatalities, latent cancers and total property damage. The latter three are adequate for describing all public risks identified in the RSS. The results indicate reductions of public risk by less than a factor of two for factor reductions in system or generic failure probabilities as high as one hundred. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates

  3. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    Science.gov (United States)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of the valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin in this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis is performed with the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that among the 10 parameters analyzed, 6 parameters significantly affect the objective function of the model; in order of influence they are: i) Coefficient of crop yield response to water, ii) Average daily gain in weight of livestock, iii) Exchange of livestock reproduction, iv) maximum yield of crops, v) Supply of irrigation water and vi) precipitation. These 6 parameters register sensitivity indices ranging between 0.22 and 1.28. These results show high uncertainties in these parameters that can dramatically skew the results of the model, indicating the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
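
    A minimal sketch of the One-Factor-At-A-Time screening idea described above, assuming Python/NumPy: each parameter is swept over its range while the others are held at nominal values, and the normalized spread of the objective serves as a crude sensitivity index. The toy gross-margin function, parameter names, and ranges are illustrative stand-ins for the agro-economic model.

```python
import numpy as np

def oat_screening(model, nominal, lows, highs, n_levels=5):
    """One-Factor-At-A-Time screening: sweep each parameter over its range while
    holding the others at their nominal values; report the output range relative
    to the nominal output as a crude sensitivity index."""
    nominal, lows, highs = map(np.asarray, (nominal, lows, highs))
    base = model(nominal)
    indices = np.empty(nominal.size)
    for j in range(nominal.size):
        ys = []
        for v in np.linspace(lows[j], highs[j], n_levels):
            x = nominal.copy()
            x[j] = v                                   # vary one factor at a time
            ys.append(model(x))
        indices[j] = (max(ys) - min(ys)) / abs(base)   # normalized output spread
    return indices

if __name__ == "__main__":
    # Toy stand-in for a farm gross-margin model (illustrative only)
    def gross_margin(p):
        yield_resp, water, price = p
        return price * yield_resp * np.sqrt(water)
    nominal = [1.0, 100.0, 20.0]
    print(oat_screening(gross_margin, nominal, [0.5, 50.0, 10.0], [1.5, 150.0, 30.0]))
```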

  4. Preliminary Analysis of Helicopter Options to Support Tunisian Counterterrorism Operations

    Science.gov (United States)

    2016-04-27

    helicopters from Sikorsky to fulfill a number of roles in counterterrorism operations. Rising costs and delays in delivery raised the question of whether other cost-effective options exist to meet Tunisia's helicopter requirement. Approach Our team conducted a preliminary assessment of alternative helicopters for counterterrorism air assault missions. Any decision to acquire an aircraft must consider many factors, including technical

  5. Preliminary analysis of the proposed BN-600 benchmark core

    International Nuclear Information System (INIS)

    John, T.M.

    2000-01-01

    The Indira Gandhi Centre for Atomic Research is actively involved in the design of Fast Power Reactors in India. The core physics calculations are performed by computer codes that are developed in-house or by codes obtained from other laboratories and suitably modified to meet the computational requirements. The basic philosophy of the core physics calculations is to use diffusion theory codes with 25-group nuclear cross sections. Parameters that are very sensitive to the core leakage, such as the power distribution at the core-blanket interface, are calculated using transport theory codes under the DSN approximation. All these codes use the finite difference approximation as the method to treat the spatial variation of the neutron flux. Criticality problems with geometries too irregular to be represented by the conventional codes are solved using Monte Carlo methods. These codes and methods have been validated by the analysis of various critical assemblies and calculational benchmarks. The reactor core design procedure at IGCAR consists of: two- and three-dimensional diffusion theory calculations (codes ALCIALMI and 3DB); auxiliary calculations (neutron balance, power distributions, etc.) done by codes developed in-house; transport theory corrections from two-dimensional transport calculations (DOT); irregular geometries treated by the Monte Carlo method (KENO); the cross section data library used is CV2M (25 group)

  6. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  7. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    Directory of Open Access Journals (Sweden)

    Harry R. Millwater

    2006-01-01

    Full Text Available A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or surface damage) can be considered. The derivation is developed for Monte Carlo sampling such that the existing failure samples are used and the sensitivities are obtained with minimal additional computational time. Variance estimates and confidence bounds of the sensitivity estimates are developed. The methodology is demonstrated and verified using a multizone probabilistic fatigue analysis of a gas turbine compressor disk, considering stress scatter, crack growth propagation scatter, and initial crack size as random variables.
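
    One common way to obtain such sensitivities from existing Monte Carlo failure samples is the score-function (likelihood-ratio) identity, dP_f/dθ = E[1_fail · d ln p(x; θ)/dθ]. The sketch below applies it to a toy limit state with normally distributed inputs; the limit state, distributions, threshold, and variable names are illustrative assumptions, not the turbine-disk fatigue model of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Hypothetical random variables for a toy fracture model (illustrative only)
mu_a, sd_a = 0.1, 0.03          # initial crack size
mu_s, sd_s = 1.0, 0.2           # stress scatter
a = rng.normal(mu_a, sd_a, n)
s = rng.normal(mu_s, sd_s, n)

# Toy limit state: "failure" when the stress-intensity proxy exceeds a threshold
fail = (s * np.sqrt(np.maximum(a, 0.0)) > 0.45).astype(float)
pof = fail.mean()

# Score-function (likelihood-ratio) sensitivities of the probability of failure
# with respect to the distribution means, reusing the same failure samples:
# for a normal density, d ln p / d mu = (x - mu) / sigma**2
dpof_dmu_a = np.mean(fail * (a - mu_a) / sd_a**2)
dpof_dmu_s = np.mean(fail * (s - mu_s) / sd_s**2)
print(pof, dpof_dmu_a, dpof_dmu_s)
```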

  8. Application of sensitivity analysis for optimized piping support design

    International Nuclear Information System (INIS)

    Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.

    1993-01-01

    The objective of this study was to see if recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems which use non-linear supports and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM, and a sensitivity analysis was carried out. Then, optimization of the design of the piping system supports was investigated, selecting the support location and yield load of the non-linear supports (bi-linear model) as the main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated that sensitivity factors are useful in the planning stage of support design. (author)

  9. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
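
    The workflow described above, propagating parameter uncertainty by Monte Carlo and then reusing the same random sample for a correlation-based sensitivity ranking, can be sketched as follows. Spearman rank correlation is used here as a simpler stand-in for the partial correlation coefficient of the paper, and the lognormal parameters and toy transfer model are illustrative assumptions, not the PATHWAY equations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 10_000

# Hypothetical lognormal parameter uncertainties for a toy transfer model
transfer_feed_milk = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=n)
deposition = rng.lognormal(mean=np.log(100.0), sigma=0.3, size=n)
weathering_half_life = rng.lognormal(mean=np.log(14.0), sigma=0.2, size=n)

# Toy steady-state milk concentration (illustrative only)
milk_conc = deposition * transfer_feed_milk * weathering_half_life / np.log(2.0)

# Uncertainty: percentiles of the propagated output
print(np.percentile(milk_conc, [5, 50, 95]))

# Sensitivity: rank correlation of each sampled parameter with the output,
# reusing the same Monte Carlo sample that served the uncertainty analysis
for name, x in [("transfer", transfer_feed_milk),
                ("deposition", deposition),
                ("weathering", weathering_half_life)]:
    print(name, stats.spearmanr(x, milk_conc)[0])
```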

  10. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. Now the discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function analysis and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  11. Sensitivity analysis for missing data in regulatory submissions.

    Science.gov (United States)

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses have to be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  12. Production, crystallization and preliminary X-ray diffraction analysis of the allergen Can f 2 from Canis familiaris

    International Nuclear Information System (INIS)

    Madhurantakam, Chaithanya; Nilsson, Ola B.; Jönsson, Klas; Grönlund, Hans; Achour, Adnane

    2009-01-01

    The recombinant form of the allergen Can f 2 from C. familiaris was produced, isolated and crystallized in two different forms. Preliminary X-ray diffraction analyses are reported for the two crystal forms of Can f 2. The allergen Can f 2 from dog (Canis familiaris) present in saliva, dander and fur is an important cause of allergic sensitization worldwide. Here, the production, isolation, crystallization and preliminary X-ray diffraction analysis of two crystal forms of recombinant Can f 2 are reported. The first crystal form belonged to space group C222, with unit-cell parameters a = 68.7, b = 77.3, c = 65.1 Å, and diffracted to 1.55 Å resolution, while the second crystal form belonged to space group C2, with unit-cell parameters a = 75.7, b = 48.3, c = 68.7 Å, β = 126.5°, and diffracted to 2.1 Å resolution. Preliminary data analysis indicated the presence of a single molecule in the asymmetric unit for both crystal forms

  13. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Full Text Available Estimates of poverty and inequality are often based on the application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows deriving variance estimates for the results of the sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to confidence intervals which are wide.
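
    A minimal sketch of the kind of sensitivity analysis described above, assuming Python/NumPy and synthetic household data: the head-count poverty rate is recomputed under several equivalence-scale elasticities, with a simple bootstrap standard error attached to each estimate. The data-generating process, the power-of-household-size scale family, and the 60%-of-median poverty line are illustrative assumptions, not the variance estimator derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical household data: income and household size
income = rng.lognormal(mean=10.0, sigma=0.6, size=2_000)
size = rng.integers(1, 6, size=2_000)

def poverty_rate(theta):
    """Head-count poverty rate with the equivalence scale size**theta and a
    relative poverty line at 60% of the median equivalised income."""
    eq_income = income / size**theta
    return np.mean(eq_income < 0.6 * np.median(eq_income))

# Sensitivity analysis over a range of equivalence-scale elasticities,
# with a bootstrap standard error for each estimate
for theta in (0.0, 0.5, 0.75, 1.0):
    boot = []
    for _ in range(200):
        idx = rng.integers(0, income.size, income.size)   # resample households
        eq = income[idx] / size[idx]**theta
        boot.append(np.mean(eq < 0.6 * np.median(eq)))
    print(theta, round(poverty_rate(theta), 3), "+/-", round(float(np.std(boot)), 3))
```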

  14. Sensitivity analysis of water consumption in an office building

    Science.gov (United States)

    Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan

    2018-02-01

    This article deals with a sensitivity analysis of real water consumption in an office building. During a long-term real study, a reduction of pressure in its water connection was simulated. A sensitivity analysis of uneven water demand was conducted during working time at various provided pressures and at various time step durations. Correlations between maximal coefficients of water demand variation during working time and provided pressure were suggested. The influence of the provided pressure in the water connection on the mean coefficients of water demand variation was pointed out, both for the working hours of all days together and separately for days with identical working hours.

  15. Probabilistic and sensitivity analysis of Botlek Bridge structures

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2017-01-01

    Full Text Available This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge of the world. The bridge system consists of six reinforced concrete pylons and two steel decks, each weighing 4000 tons, connected through ropes with counterweights. The paper focuses on the probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results were of high importance for the practical application and design of the bridge. The model and resistance uncertainties were taken into account in the LHS simulation method.
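
    The LHS simulation method mentioned above draws stratified samples so that every equal-probability interval of each input is covered exactly once. A minimal sketch of such a generator on the unit hypercube is given below (Python/NumPy assumed); in practice the columns would be mapped through the inverse CDFs of the model and resistance variables, which is not shown here.

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    """Basic Latin hypercube sample of n points in [0, 1)^d: one point per
    equal-probability stratum in every dimension, randomly permuted."""
    rng = np.random.default_rng(seed)
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n   # stratified in each column
    for j in range(d):
        rng.shuffle(u[:, j])                               # decouple the columns
    return u

if __name__ == "__main__":
    sample = latin_hypercube(8, 2, seed=5)
    # Each stratum index 0..7 should appear exactly once per column
    print(np.sort(np.floor(sample * 8), axis=0))
```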

  16. Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities

    Directory of Open Access Journals (Sweden)

    Thi Thanh Huyen Nguyen

    2015-11-01

    Full Text Available The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher education institutes with PhD training programs) in Vietnam, applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, using the replacement as well as aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency of these universities to the sets of variables. The findings also show the impact of variables on their efficiency and its "sustainability".

  17. Protective Alternatives of SMR against Extreme Threat Scenario – A Preliminary Risk Analysis

    International Nuclear Information System (INIS)

    Shohet, I.M.; Ornai, D.; Gal, E.; Ronen, Y.; Vidra, M.

    2014-01-01

    The article presents a preliminary risk analysis of the main features in an NPP (Nuclear Power Plant) that includes SMR - Small and Modular Reactors, given an extreme threat scenario. A review of the structure and systems of the SMR is followed by systematic definitions and analysis of the threat scenario for which a preliminary risk analysis was carried out. The article outlines the basic events caused by the referred threat scenario, which lead to possible failure mechanisms according to FTA (Fault-Tree-Analysis), critical protective circuits, and to detecting critical topics for the protection and safety of the reactor

  18. Preliminary analysis of engineered barrier performance in geological disposal of high level waste

    International Nuclear Information System (INIS)

    Ohe, Toshiaki; Maki, Yasuo; Tanaka, Hiroshi; Kawanishi, Motoi.

    1988-01-01

    This report presents preliminary results of a safety analysis of an engineered barrier system in the geological disposal of high level radioactive waste. Three well-known computer codes, ORIGEN 2, TRUMP, and SWIFT, were used in the simulation. The main conceptual design of the repository was almost identical to that of SKB in Sweden and NAGRA in Switzerland; the engineered barrier consists of glass-solidified waste, a steel overpack, and compacted bentonite. Two different underground formations are considered, granite and Neogene sedimentary rock, which are typically found in Japan. We first determined the repository configuration, particularly the spacing between disposal pits. ORIGEN 2 was used to estimate the heat generation in the waste glass reprocessed 4 years after removal from a PWR. Then, the temperature distribution was calculated by TRUMP. The results of two- or three-dimensional calculations indicated that the pit interval should be kept at more than 5 m in the case of the granite formation at 500 m depth, according to the temperature criteria in the bentonite layer. 90Sr, 241Am, 239Pu, and 237Np were chosen in one- or two-dimensional calculations. For both cases of steady release and instantaneous release, the maximum concentration in the pore water at the boundary between the bentonite and the surrounding rock had the following order: 237Np > 239Pu > 90Sr > 241Am. Sensitivity analysis showed that this order is mainly due to the different adsorption characteristics of the nuclides in the bentonite layer. (author)

  19. Thermal Hydraulic Analysis of K-DEMO Single Blanket Module for Preliminary Accident Analysis using MELCOR

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Sung Bo; Bang, In Cheol [UNIST, Ulsan (Korea, Republic of)

    2016-05-15

    To develop the Korean fusion commercial reactor, a preliminary design concept for K-DEMO (Korean fusion demonstration reactor) has been announced by NFRI (National Fusion Research Institute). This pre-conceptual study of K-DEMO has been introduced to identify technical details of a fusion power plant for the future commercialization of fusion reactors in Korea. Before this can be realized, accident analysis of the K-DEMO is essential. Since the Fukushima accident, a severe accident caused by an unexpected disaster, the safety analysis of nuclear power plants has become increasingly important. The safety analysis of both fission and fusion reactors is deemed crucial in demonstrating the low radiological effect of these reactors on the environment during severe accidents. A risk analysis of K-DEMO should be performed as a prerequisite for the construction of a fusion reactor. In this research, a thermal-hydraulic analysis of a single blanket module of K-DEMO is conducted as a preliminary accident analysis for K-DEMO. A further study of the effect of the flow distributor is conducted. The normal K-DEMO operating condition is applied as the boundary condition and simulated with MELCOR to verify the material temperature limit. MELCOR is a fully integrated, relatively fast-running code developed by Sandia National Laboratories. MELCOR has been used for light water reactors, and a fusion reactor version of MELCOR was developed for ITER accident analysis. This study shows the results of a thermal-hydraulic simulation of a single blanket module with MELCOR, a severe accident code for nuclear fusion safety analysis. The difference in mass flow rate for each coolant channel with and without the flow distributor is presented. With the flow distributor, the advantages of a broadened temperature gradient in the K-DEMO blanket module and an increased mass flow toward the first wall are obtained. This can enhance the safety of the K-DEMO blanket module. A maximum temperature difference of 13 °C in the blanket module is obtained.

  20. Seismic analysis of steam generator and parameter sensitivity studies

    International Nuclear Information System (INIS)

    Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun

    2013-01-01

    Background: The steam generator (SG) serves as the primary means for removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG is created herein with the moisture separator assembly and the tube bundle assembly. The seismic analysis is performed together with the RCS piping and the reactor pressure vessel (RPV). Results: The seismic stress results of the SG are obtained. In addition, the parameter sensitivities of the seismic analysis results are studied, such as the effects of another SG, the supports, the anti-vibration bars (AVBs), and so on. Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, and should be a focus of future research and design of new-type NPP SGs. (authors)

  1. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems

  2. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab

  3. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once the most sensitive parameters have been identified, research effort should be directed toward reducing their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
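
    As an illustration of the parameter screening step described in this record, the following is a minimal Morris-style elementary-effects sketch in plain Python. It is not the PSUADE implementation; the model function, bounds, step size and repeat count are illustrative assumptions.

        import numpy as np

        def elementary_effects(f, lower, upper, n_repeats=20, delta_frac=0.1, seed=0):
            """One-at-a-time elementary-effects screening (Morris-style sketch).

            f     : callable mapping a 1-D parameter vector to a scalar output
            lower : array of lower bounds for each input
            upper : array of upper bounds for each input
            Returns the mean absolute elementary effect (mu*) per input.
            """
            rng = np.random.default_rng(seed)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            k = lower.size
            delta = delta_frac * (upper - lower)                 # per-input step size
            effects = np.zeros((n_repeats, k))
            for r in range(n_repeats):
                # random base point chosen so that x + delta stays inside the range
                x = lower + rng.random(k) * (upper - lower - delta)
                y0 = f(x)
                for i in range(k):
                    x_pert = x.copy()
                    x_pert[i] += delta[i]
                    effects[r, i] = (f(x_pert) - y0) / delta[i]
            return np.abs(effects).mean(axis=0)                  # mu*: screening ranking

        # toy usage: rank three inputs of a simple test function
        mu_star = elementary_effects(lambda x: x[0] + 10 * x[1] ** 2 + 0.1 * x[2],
                                     lower=[0, 0, 0], upper=[1, 1, 1])
        print(mu_star)   # the second input should dominate the ranking

    Parameters with small mu* would then be fixed, and only the remaining ones carried into the quantitative sensitivity analysis step.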

  4. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed

  5. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    Science.gov (United States)

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the differences among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, potentially richer information about the model lies in the topological difference between the pre-model data space and the post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of the data space. It extends SA to a deeper level that lies in the topology of the data. PMID:26368929

  6. Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis

    DEFF Research Database (Denmark)

    Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen

    2017-01-01

    Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol’s), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult … in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers to focus their attention on the most important design parameters when exploring …

  7. Sensitization trajectories in childhood revealed by using a cluster analysis

    DEFF Research Database (Denmark)

    Schoos, Ann-Marie M.; Chawes, Bo L.; Melen, Erik

    2017-01-01

    BACKGROUND: Assessment of sensitization at a single time point during childhood provides limited clinical information. We hypothesized that sensitization develops as specific patterns with respect to age at debut, development over time, and involved allergens and that such patterns might be more biologically and clinically relevant. OBJECTIVE: We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. METHODS: We investigated 398 children from the at-risk Copenhagen Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent …

  8. Sensitivity analysis of power depression and axial power factor effect on fuel pin to temperature and related properties distribution

    International Nuclear Information System (INIS)

    Suwardi, S.

    2001-01-01

    This paper is a preliminary step toward evaluating the effect of the radial and axial distributions of power generation on the thermal analysis of a whole fuel pin model with a large L/D ratio. The model takes into account the radial and axial distributions of power generation due to power depression and core geometry, as well as a temperature- and microstructure-dependent thermal conductivity. The microstructure distribution and the gap conductance for a typical steady-state situation are given for the sensitivity analysis. The temperature and thermal conductivity distributions along the radial and axial directions obtained with different power distributions are used to indicate the sensitivity of the thermal behaviour to power depression and the power factor. The evaluation is made for one incremental time step and a steady-state approach is used. The analysis has been performed using a finite element-finite difference model. The result for a typical reactor fuel shows that the sensitivity is too important to be omitted in the thermal model

  9. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

    Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure, and quantifying the effects of each random source or their distribution parameters on failure probability or reliability. In this paper, the time-dependent parametric reliability sensitivity (PRS) analysis as well as the global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change of each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with the first order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices are demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
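
    For intuition only, the sketch below estimates a time-dependent failure probability of a toy motion-error model by Monte Carlo and its parametric sensitivity by central finite differences with common random numbers. It is not the envelope function method of the paper; the error model, threshold and parameter values are invented for illustration.

        import numpy as np

        def failure_prob(mu_len, t_grid, n_samples=20000, threshold=0.05, seed=1):
            """Time-dependent failure probability of a toy motion-error model."""
            rng = np.random.default_rng(seed)
            length = rng.normal(mu_len, 0.05, size=n_samples)        # random link length
            # illustrative motion error that grows in time with the length deviation
            err = (length[:, None] - 1.0) * np.sin(t_grid)[None, :]
            # first-passage style failure: has the threshold been exceeded by time t?
            failed_by_t = np.logical_or.accumulate(err > threshold, axis=1)
            return failed_by_t.mean(axis=0)

        t = np.linspace(0.0, np.pi / 2, 50)
        dmu = 1e-3
        pf_plus = failure_prob(1.0 + dmu, t)     # same seed => common random numbers
        pf_minus = failure_prob(1.0 - dmu, t)
        prs = (pf_plus - pf_minus) / (2 * dmu)   # time-dependent dPf/dmu (a PRS-type index)
        print(prs[-1])                           # sensitivity of Pf at the final time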

  10. Probabilistic sensitivity analysis of system availability using Gaussian processes

    International Nuclear Information System (INIS)

    Daneshkhah, Alireza; Bedford, Tim

    2013-01-01

    The availability of a system under a given failure/repair process is a function of time which can be determined through a set of integral equations and usually calculated numerically. We focus here on the issue of carrying out sensitivity analysis of availability to determine the influence of the input parameters. The main purpose is to study the sensitivity of the system availability with respect to the changes in the main parameters. In the simplest case that the failure repair process is (continuous time/discrete state) Markovian, explicit formulae are well known. Unfortunately, in more general cases availability is often a complicated function of the parameters without closed form solution. Thus, the computation of sensitivity measures would be time-consuming or even infeasible. In this paper, we show how Sobol and other related sensitivity measures can be cheaply computed to measure how changes in the model inputs (failure/repair times) influence the outputs (availability measure). We use a Bayesian framework, called the Bayesian analysis of computer code output (BACCO) which is based on using the Gaussian process as an emulator (i.e., an approximation) of complex models/functions. This approach allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than other methods. The emulator-based sensitivity measure is used to examine the influence of the failure and repair densities' parameters on the system availability. We discuss how to apply the methods practically in the reliability context, considering in particular the selection of parameters and prior distributions and how we can ensure these may be considered independent—one of the key assumptions of the Sobol approach. The method is illustrated on several examples, and we discuss the further implications of the technique for reliability and maintenance analysis
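
    The emulator idea can be sketched in a few lines: a Gaussian process is trained on a small number of expensive model runs and then queried cheaply for the sensitivity study. The sketch below uses scikit-learn's GaussianProcessRegressor as a stand-in (this is not the BACCO code), a toy steady-state availability model, and a crude binned estimate of the first-order effects; all names and numbers are illustrative.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # stand-in for the expensive model: availability as a function of two rates
        def model(x):
            lam, mu = x[:, 0], x[:, 1]              # "failure" and "repair" rates
            return mu / (lam + mu)                  # toy steady-state availability

        rng = np.random.default_rng(0)
        X_train = rng.uniform([0.01, 0.1], [0.1, 1.0], size=(40, 2))   # few model runs
        y_train = model(X_train)

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.05, 0.5]),
                                      normalize_y=True).fit(X_train, y_train)

        # cheap emulator predictions replace further model runs
        X_big = rng.uniform([0.01, 0.1], [0.1, 1.0], size=(5000, 2))
        y_big = gp.predict(X_big)
        total_var = y_big.var()
        for i, name in enumerate(["failure rate", "repair rate"]):
            # bin-averaged main effect: Var(E[Y | X_i]) approximated over quantile bins
            bins = np.quantile(X_big[:, i], np.linspace(0, 1, 21))
            idx = np.clip(np.digitize(X_big[:, i], bins) - 1, 0, 19)
            cond_means = np.array([y_big[idx == b].mean() for b in range(20)])
            print(name, "approx. first-order index:", cond_means.var() / total_var)

    The point of the emulator is that the 5000 sensitivity evaluations above cost essentially nothing once the 40 training runs of the real model are available.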

  11. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also proposed.
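
    A minimal numerical illustration of the kind of analytic treatment referred to above is first-order (delta-method) propagation with a full input covariance matrix: the output variance is the quadratic form of the gradient with the covariance. The model, gradient and numbers below are invented for illustration.

        import numpy as np

        # toy model y = f(x1, x2) and its gradient at the nominal point
        def f(x):
            return x[0] ** 2 + 3.0 * x[0] * x[1]

        def grad_f(x):
            return np.array([2.0 * x[0] + 3.0 * x[1], 3.0 * x[0]])

        x0 = np.array([1.0, 2.0])                     # nominal input values
        sigma = np.array([0.1, 0.2])                  # input standard deviations
        rho = 0.6                                     # correlation between x1 and x2
        cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                        [rho * sigma[0] * sigma[1], sigma[1] ** 2]])

        g = grad_f(x0)
        var_independent = g @ np.diag(sigma ** 2) @ g   # correlation ignored
        var_correlated = g @ cov @ g                    # correlation included
        print(var_independent, var_correlated)          # the gap is what correlation adds

    Comparing the two variances is a quick way to decide whether the input correlations need to be carried through the full analysis.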

  12. Sensitivity Analysis Applied in Design of Low Energy Office Building

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik

    2008-01-01

    … satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions. A sensitivity analysis makes it possible …

  13. Application of Sensitivity Analysis in Design of Sustainable Buildings

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik; Hesselholt, Allan Tind

    2007-01-01

    … satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions. A sensitivity analysis makes it possible …

  14. Sensitivity analysis of physiochemical interaction model: which pair ...

    African Journals Online (AJOL)

    ... of two model parameters at a time on the solution trajectory of physiochemical interaction over a time interval. Our aim is to use this powerful mathematical technique to select the important pair of parameters of this physical process which is cost-effective. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 ...

  15. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  16. Sensitivity analysis for contagion effects in social networks

    Science.gov (United States)

    VanderWeele, Tyler J.

    2014-01-01

    Analyses of social network data have suggested that obesity, smoking, happiness and loneliness all travel through social networks. Individuals exert “contagion effects” on one another through social ties and association. These analyses have come under critique because of the possibility that homophily from unmeasured factors may explain these statistical associations and because similar findings can be obtained when the same methodology is applied to height, acne and head-aches, for which the conclusion of contagion effects seems somewhat less plausible. We use sensitivity analysis techniques to assess the extent to which supposed contagion effects for obesity, smoking, happiness and loneliness might be explained away by homophily or confounding and the extent to which the critique using analysis of data on height, acne and head-aches is relevant. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so. Supposed effects for height, acne and head-aches are all easily explained away by latent homophily and confounding. The methodology that has been employed in past studies for contagion effects in social networks, when used in conjunction with sensitivity analysis, may prove useful in establishing social influence for various behaviors and states. The sensitivity analysis approach can be used to address the critique of latent homophily as a possible explanation of associations interpreted as contagion effects. PMID:25580037

  17. Sensitivity Analysis of a Horizontal Earth Electrode under Impulse ...

    African Journals Online (AJOL)

    This paper presents the sensitivity analysis of an earthing conductor under the influence of impulse current arising from a lightning stroke. The approach is based on the 2nd order finite difference time domain (FDTD). The earthing conductor is regarded as a lossy transmission line where it is divided into series connected ...

  18. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
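
    The point about linear models can be made concrete in a few lines: for a linear measurement model with independent inputs, the first-order Sobol indices are just the squared law-of-propagation-of-uncertainties terms divided by the combined variance. The coefficients and uncertainties below are illustrative, not from the article.

        import numpy as np

        # linear measurement model y = c1*x1 + c2*x2 + c3*x3 with independent inputs
        c = np.array([2.0, -1.5, 0.5])        # sensitivity coefficients dy/dx_i
        u = np.array([0.10, 0.20, 0.40])      # standard uncertainties u(x_i)

        terms = (c * u) ** 2                  # LPU contributions c_i^2 * u_i^2
        u_c2 = terms.sum()                    # combined variance u_c^2(y)

        S = terms / u_c2                      # first-order Sobol indices (linear, independent case)
        print("u_c(y) =", np.sqrt(u_c2))
        print("Sobol indices:", S, "sum =", S.sum())   # they sum to 1: no new information beyond the LPU

    For non-linear models or correlated inputs the indices no longer follow from the LPU terms, which is where the variance-based analysis earns its keep.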

  19. Sensitivity analysis of the Ohio phosphorus risk index

    Science.gov (United States)

    The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...

  20. Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations

    DEFF Research Database (Denmark)

    Kamran, Faisal; Andersen, Peter E.

    2015-01-01

    … profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical …

  1. Omitted Variable Sensitivity Analysis with the Annotated Love Plot

    Science.gov (United States)

    Hansen, Ben B.; Fredrickson, Mark M.

    2014-01-01

    The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…

  2. Weighting-Based Sensitivity Analysis in Causal Mediation Studies

    Science.gov (United States)

    Hong, Guanglei; Qin, Xu; Yang, Fan

    2018-01-01

    Through a sensitivity analysis, the analyst attempts to determine whether a conclusion of causal inference could be easily reversed by a plausible violation of an identification assumption. Analytic conclusions that are harder to alter by such a violation are expected to add a higher value to scientific knowledge about causality. This article…

  3. Sensitivity analysis of railpad parameters on vertical railway track dynamics

    NARCIS (Netherlands)

    Oregui Echeverria-Berreyarza, M.; Nunez Vicencio, Alfredo; Dollevoet, R.P.B.J.; Li, Z.

    2016-01-01

    This paper presents a sensitivity analysis of railpad parameters on vertical railway track dynamics, incorporating the nonlinear behavior of the fastening (i.e., downward forces compress the railpad whereas upward forces are resisted by the clamps). For this purpose, solid railpads, rail-railpad

  4. Methods for global sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, Evelyne A.; Bokkers, Eddy; Heijungs, Reinout; Boer, de Imke J.M.

    2017-01-01

    Purpose: Input parameters required to quantify environmental impact in life cycle assessment (LCA) can be uncertain due to e.g. temporal variability or unknowns about the true value of emission factors. Uncertainty of environmental impact can be analysed by means of a global sensitivity analysis to

  5. Sensitivity analysis on ultimate strength of aluminium stiffened panels

    DEFF Research Database (Denmark)

    Rigo, P.; Sarghiuta, R.; Estefen, S.

    2003-01-01

    This paper presents the results of an extensive sensitivity analysis carried out by the Committee III.1 "Ultimate Strength" of ISSC'2003 in the framework of a benchmark on the ultimate strength of aluminium stiffened panels. Previously, different benchmarks were presented by ISSC committees on ul...

  6. Sensitivity and specificity of coherence and phase synchronization analysis

    International Nuclear Information System (INIS)

    Winterhalder, Matthias; Schelter, Bjoern; Kurths, Juergen; Schulze-Bonhage, Andreas; Timmer, Jens

    2006-01-01

    In this Letter, we show that coherence and phase synchronization analysis are sensitive but not specific in detecting the correct class of underlying dynamics. We propose procedures to increase specificity and demonstrate the power of the approach by application to paradigmatic dynamic model systems

  7. Sensitivity Analysis of Structures by Virtual Distortion Method

    DEFF Research Database (Denmark)

    Gierlinski, J.T.; Holnicki-Szulc, J.; Sørensen, John Dalsgaard

    1991-01-01

    … are used in structural optimization, see Haftka [4]. The recently developed Virtual Distortion Method (VDM) is a numerical technique which offers an efficient approach to the calculation of the sensitivity derivatives. This method was originally applied to structural remodelling and collapse analysis, see...

  8. Design tradeoff studies and sensitivity analysis. Appendix B

    Energy Technology Data Exchange (ETDEWEB)

    1979-05-25

    The results of the design trade-off studies and the sensitivity analysis of Phase I of the Near Term Hybrid Vehicle (NTHV) Program are presented. The effects of variations in the design of the vehicle body, propulsion systems, and other components on vehicle power, weight, cost, and fuel economy and an optimized hybrid vehicle design are discussed. (LCL)

  9. Preliminary Hazard Analysis applied to Uranium Hexafluoride - UF6 production plant

    International Nuclear Information System (INIS)

    Tomzhinsky, David; Bichmacher, Ricardo; Braganca Junior, Alvaro; Peixoto, Orpet Jose

    1996-01-01

    The purpose of this paper is to present the results of the Preliminary Hazard Analysis applied to the UF6 production process, which is part of the UF6 Conversion Plant. The Conversion Plant has been designed to produce high-purity UF6 in accordance with nuclear grade standards. This Preliminary Hazard Analysis is the first step in the Risk Management Studies, which are currently under development. The analysis evaluated the impact of the production process on the plant operators, members of the public, equipment, systems and installations, as well as the environment. (author)

  10. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.

  11. Sensitivity analysis of LOFT L2-5 test calculations

    International Nuclear Information System (INIS)

    Prosek, Andrej

    2014-01-01

    The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates a large break loss of coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods-Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power, etc. The FFTBM-SM was used to assess the influence of the input parameters on the calculated results. The only difference between FFTBM-SM and the original FFTBM is that in FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so-called edge is the difference between the first and last data point of one period of the signal) in calculating the average amplitude. It is very important to eliminate this unphysical contribution to the average amplitude, which is used as the figure of merit for the influence of an input parameter on the output parameters. The idea is to use the reference calculation as the 'experimental signal', the sensitivity run as the 'calculated signal', and the average amplitude as a figure of merit for sensitivity instead of for code accuracy. The larger the average amplitude, the larger the influence of the varied input parameter. The results show that with FFTBM-SM the analyst can get a good picture of the contribution of a parameter variation to the results: they show when the input parameters are influential and how large this influence is. FFTBM-SM could also be used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be …
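
    The average-amplitude figure of merit with signal mirroring can be sketched as follows; this is a simplified reading of the FFTBM idea, not the cited implementation, and the normalization, frequency cut-off and toy transient are assumptions.

        import numpy as np

        def average_amplitude(reference, variant):
            """Average amplitude of the discrepancy between two transient signals.

            Both signals are mirrored (appended with their reverse) before the FFT so
            that the first and last points coincide, removing the edge effect.
            """
            ref = np.concatenate([reference, reference[::-1]])   # symmetrized signals
            var = np.concatenate([variant, variant[::-1]])
            diff_spec = np.abs(np.fft.rfft(var - ref))
            ref_spec = np.abs(np.fft.rfft(ref))
            return diff_spec.sum() / ref_spec.sum()              # larger => more influential run

        # toy transient: reference calculation vs. a "sensitivity run" with a perturbed input
        t = np.linspace(0.0, 100.0, 1001)
        reference = 15.0 * np.exp(-t / 30.0) + 1.0
        sensitivity_run = 15.5 * np.exp(-t / 28.0) + 1.0
        print(average_amplitude(reference, sensitivity_run))

    Ranking the sensitivity runs by this single number is what turns the accuracy measure into a sensitivity measure.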

  12. Sensitivity/uncertainty analysis of a borehole scenario comparing Latin Hypercube Sampling and deterministic sensitivity approaches

    International Nuclear Information System (INIS)

    Harper, W.V.; Gupta, S.K.

    1983-10-01

    A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas, the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited number of parameters capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
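
    For reference, Latin Hypercube Sampling itself is compact enough to sketch in plain NumPy (this is a generic sketch, not the code used in the study; the eight-parameter example is only a nod to the borehole setting):

        import numpy as np

        def latin_hypercube(n_samples, lower, upper, seed=0):
            """Latin Hypercube Sample: each input range is split into n_samples
            equal-probability strata and every stratum is used exactly once per input."""
            rng = np.random.default_rng(seed)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            k = lower.size
            # one random permutation of the strata per input, plus a jitter inside each stratum
            strata = np.column_stack([rng.permutation(n_samples) for _ in range(k)])
            u = (strata + rng.random((n_samples, k))) / n_samples
            return lower + u * (upper - lower)

        # e.g. eight parameters of a flow model sampled in 50 code runs
        samples = latin_hypercube(50, lower=np.zeros(8), upper=np.ones(8))
        print(samples.shape)    # (50, 8); each column visits all 50 strata exactly once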

  13. Preliminary analysis of the transient overpower accident for CRBRP. Final report

    International Nuclear Information System (INIS)

    Kastenberg, W.E.; Frank, M.V.

    1975-07-01

    A preliminary analysis of the transient overpower accident for the Clinch River Breeder Reactor Plant (CRBRP) is presented. Several uncertainties in the analysis and the estimation of ramp rates during the transition to disassembly are discussed. The major conclusions are summarized

  14. SUMS preliminary design and data analysis development. [shuttle upper atmosphere mass spectrometer experiment

    Science.gov (United States)

    Hinson, E. W.

    1981-01-01

    The preliminary analysis and data analysis system development for the shuttle upper atmosphere mass spectrometer (SUMS) experiment are discussed. The SUMS experiment is designed to provide free stream atmospheric density, pressure, temperature, and mean molecular weight for the high altitude, high Mach number region.

  15. Sensitivity and uncertainty analysis of NET/ITER shielding blankets

    International Nuclear Information System (INIS)

    Hogenbirk, A.; Gruppelaar, H.; Verschuur, K.A.

    1990-09-01

    Results are presented of sensitivity and uncertainty calculations based upon the European fusion file (EFF-1). The effect of uncertainties in Fe, Cr and Ni cross sections on the nuclear heating in the coils of a NET/ITER shielding blanket has been studied. The analysis has been performed for the total cross section as well as partial cross sections. The correct expression for the sensitivity profile was used, including the gain term. The resulting uncertainty in the nuclear heating lies between 10 and 20 per cent. (author). 18 refs.; 2 figs.; 2 tabs

  16. Sensitivity analysis of critical experiments with evaluated nuclear data libraries

    International Nuclear Information System (INIS)

    Fujiwara, D.; Kosaka, S.

    2008-01-01

    Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for k-eff were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from Endf/B-VI.8, Endf/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed k-eff discrepancies between libraries were decomposed to specify the source of difference in the nuclear data libraries using a sensitivity analysis technique. The obtained sensitivity profiles are also utilized to estimate the adequacy of cold critical experiments to the boiling water reactor under hot operating conditions. (authors)

  17. Importance measures in global sensitivity analysis of nonlinear models

    International Nuclear Information System (INIS)

    Homma, Toshimitsu; Saltelli, Andrea

    1996-01-01

    The present paper deals with a new method of global sensitivity analysis of nonlinear models. This is based on a measure of importance to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. More emphasis is given to the developments of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol' treatment of the measure of importance is the most general, his formalism is employed throughout this paper where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is flexible, accurate and informative, and which can be achieved at reasonable computational cost
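
    A compact Monte Carlo estimator of the first-order and total-effect indices discussed above (Saltelli/Jansen-type formulas, not the authors' own code) can be demonstrated on the Ishigami test function; the sample size and index estimators are standard choices, used here only for illustration.

        import numpy as np

        def sobol_indices(f, k, n=20000, seed=0):
            """First-order (S) and total-effect (ST) Sobol indices by Monte Carlo.
            Inputs are sampled uniformly on [-pi, pi] to match the test function."""
            rng = np.random.default_rng(seed)
            A = rng.uniform(-np.pi, np.pi, size=(n, k))
            B = rng.uniform(-np.pi, np.pi, size=(n, k))
            fA, fB = f(A), f(B)
            var = np.var(np.concatenate([fA, fB]))
            S, ST = np.zeros(k), np.zeros(k)
            for i in range(k):
                AB = A.copy()
                AB[:, i] = B[:, i]                             # column i taken from B
                fAB = f(AB)
                S[i] = np.mean(fB * (fAB - fA)) / var          # first-order effect of x_i
                ST[i] = 0.5 * np.mean((fA - fAB) ** 2) / var   # total effect incl. interactions
            return S, ST

        def ishigami(X, a=7.0, b=0.1):
            return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

        S, ST = sobol_indices(ishigami, k=3)
        print(S)    # roughly [0.31, 0.44, 0.00]
        print(ST)   # roughly [0.56, 0.44, 0.24]

    The gap between ST and S for the first and third inputs is exactly the kind of synergetic (interaction) contribution that the total-effect index is designed to capture.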

  18. Rethinking Sensitivity Analysis of Nuclear Simulations with Topology

    Energy Technology Data Exchange (ETDEWEB)

    Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci

    2016-01-01

    In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.

  19. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors-with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners to conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  1. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  2. Anisotropic analysis for seismic sensitivity of groundwater monitoring wells

    Science.gov (United States)

    Pan, Y.; Hsu, K.

    2011-12-01

    Taiwan is located at the boundary between the Eurasian Plate and the Philippine Sea Plate. The movement of these plates causes crustal uplift and lateral deformation, leading to frequent earthquakes in the vicinity of Taiwan. Changes of groundwater level triggered by earthquakes have been observed and studied in Taiwan for many years. The change of groundwater may appear as oscillations or step changes. The former is caused by seismic waves. The latter is caused by the volumetric strain and reflects the strain status. Since setting up a groundwater monitoring well is easier and cheaper than installing a strain gauge, groundwater measurements may be used as an indication of stress. This research proposes the concept of the seismic sensitivity of a groundwater monitoring well and applies it to the DonHer station in Taiwan. A geostatistical method is used to analyze the anisotropy of the seismic sensitivity. GIS is used to map the sensitive area of the existing groundwater monitoring well.

  3. Sensitivity analysis of predictive models with an automated adjoint generator

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.

    1987-01-01

    The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig
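
    The adjoint idea behind ADGEN-style systems can be illustrated on a small linear model: one adjoint solve yields the sensitivity of a single response to every parameter. The sketch below is generic (not ADGEN output); the matrix, response and parameter dependence are invented for illustration.

        import numpy as np

        # model: A(p) u = b, response J = c^T u; the parameters p enter the matrix diagonal
        p = np.array([2.0, 3.0, 4.0])
        b = np.array([1.0, 1.0, 1.0])
        c = np.array([1.0, 0.0, 2.0])
        A = np.diag(p) + 0.1 * np.ones((3, 3))       # toy parameter-dependent operator

        u = np.linalg.solve(A, b)                    # one forward run
        lam = np.linalg.solve(A.T, c)                # ONE adjoint run for the response c^T u

        # dJ/dp_i = -lam^T (dA/dp_i) u; here dA/dp_i is a single-entry matrix E_ii
        dJ_dp = np.array([-lam[i] * u[i] for i in range(3)])
        print(dJ_dp)                                 # sensitivities to all parameters at once

        # check one coefficient by finite differences (needs a forward run per parameter)
        eps = 1e-6
        A2 = A.copy(); A2[0, 0] += eps
        print((c @ np.linalg.solve(A2, b) - c @ u) / eps)

    The contrast between the single adjoint solve and the per-parameter finite-difference runs is the efficiency argument for the adjoint method in large models.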

  4. Sensitivity analysis of time-dependent laminar flows

    International Nuclear Information System (INIS)

    Hristova, H.; Etienne, S.; Pelletier, D.; Borggaard, J.

    2004-01-01

    This paper presents a general sensitivity equation method (SEM) for time dependent incompressible laminar flows. The SEM accounts for complex parameter dependence and is suitable for a wide range of problems. The formulation is verified on a problem with a closed form solution obtained by the method of manufactured solution. Systematic grid convergence studies confirm the theoretical rates of convergence in both space and time. The methodology is then applied to pulsatile flow around a square cylinder. Computations show that the flow starts with symmetrical vortex shedding followed by a transition to the traditional Von Karman street (alternate vortex shedding). Simulations show that the transition phase manifests itself earlier in the sensitivity fields than in the flow field itself. Sensitivities are then demonstrated for fast evaluation of nearby flows and uncertainty analysis. (author)
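
    The sensitivity equation method can be sketched on a scalar ODE instead of the Navier-Stokes equations: the sensitivity obeys an equation obtained by differentiating the governing equation with respect to the parameter, and it is integrated alongside the state. The toy transient below is an assumption for illustration, not the flow problem of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # toy transient: dy/dt = -a*y + sin(t); the sensitivity s = dy/da obeys
        # ds/dt = (df/dy)*s + df/da = -a*s - y
        def rhs(t, z, a):
            y, s = z
            return [-a * y + np.sin(t), -a * s - y]

        a = 0.8
        sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], args=(a,), dense_output=True, rtol=1e-8)

        # the sensitivity field is available at every time, e.g. for fast "nearby flow" estimates
        t_check = 10.0
        y, s = sol.sol(t_check)
        da = 0.05
        print("predicted y at a+da:", y + s * da)    # first-order extrapolation using s

        # verify against a re-solve with the perturbed parameter
        sol2 = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], args=(a + da,), dense_output=True, rtol=1e-8)
        print("recomputed y at a+da:", sol2.sol(t_check)[0])

    The first-order extrapolation is exactly the "fast evaluation of nearby flows" use of the sensitivity field mentioned in the abstract.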

  5. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-01-01

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community

  6. A preliminary analysis of the reactor-based plutonium disposition alternative deployment schedules

    International Nuclear Information System (INIS)

    Zurn, R.M.

    1997-09-01

    This paper discusses the preliminary analysis of the implementation schedules of the reactor-based plutonium disposition alternatives. These schedule analyses are a part of a larger process to examine the nine decision criteria used to determine the most appropriate method of disposing of U.S. surplus weapons plutonium. The preliminary analysis indicates that the mission durations for the reactor-based alternatives range from eleven years to eighteen years and the initial mission fuel assemblies containing surplus weapons-usable plutonium could be loaded into the reactors between nine and fourteen years after the Record of Decision

  7. Purification, crystallization and preliminary X-ray structure analysis of the laccase from Ganoderma lucidum

    International Nuclear Information System (INIS)

    Lyashenko, Andrey V.; Belova, Oksana; Gabdulkhakov, Azat G.; Lashkov, Alexander A.; Lisov, Alexandr V.; Leontievsky, Alexey A.; Mikhailov, Al’bert M.

    2011-01-01

    The purification, crystallization and preliminary X-ray structure analysis of the laccase from G. lucidum are reported. The ligninolytic enzymes of the basidiomycetes play a key role in the global carbon cycle. A characteristic property of these enzymes is their broad substrate specificity, which has led to their use in various biotechnologies, thus stimulating research into the three-dimensional structures of ligninolytic enzymes. This paper presents the purification, crystallization and preliminary X-ray analysis of the laccase from the ligninolytic basidiomycete Ganoderma lucidum

  8. NRC staff preliminary analysis of public comments on advance notice of proposed rulemaking on emergency planning

    International Nuclear Information System (INIS)

    Peabody, C.A.; Hickey, J.W.N.

    1980-01-01

    The Nuclear Regulatory Commission (NRC) published an advance notice of proposed rulemaking on emergency planning on July 17, 1979 (44 FR 41483). In October and November 1979, the NRC staff submitted several papers to the Commission related to the emergency planning rulemaking. One of these papers was a preliminary analysis of public comments received on the advance notice (SECY-79-591B, November 13, 1979). This document consists of the preliminary analysis as it was submitted to the Commission, with minor editorial changes

  9. Parameter uncertainty effects on variance-based sensitivity analysis

    International Nuclear Information System (INIS)

    Yu, W.; Harris, T.J.

    2009-01-01

    In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first order approximations can be used, or numerically intensive methods must be used

  10. Understanding dynamics using sensitivity analysis: caveat and solution

    Science.gov (United States)

    2011-01-01

    Background Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology, in which the sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe dynamical behaviour of biological systems, the PSA has subsequently been used to elucidate important cellular processes that regulate this dynamics. However, in this paper, we show that the PSA coefficients are not suitable in inferring the mechanisms by which dynamical behaviour arises and in fact it can even lead to incorrect conclusions. Results A careful interpretation of parametric perturbations used in the PSA is presented here to explain the issue of using this analysis in inferring dynamics. In short, the PSA coefficients quantify the integrated change in the system behaviour due to persistent parametric perturbations, and thus the dynamical information of when a parameter perturbation matters is lost. To get around this issue, we present a new sensitivity analysis based on impulse perturbations on system parameters, which is named impulse parametric sensitivity analysis (iPSA). The inability of PSA and the efficacy of iPSA in revealing mechanistic information of a dynamical system are illustrated using two examples involving switch activation. Conclusions The interpretation of the PSA coefficients of dynamical systems should take into account the persistent nature of parametric perturbations involved in the derivation of this analysis. The application of PSA to identify the controlling mechanism of dynamical behaviour can be misleading. By using impulse perturbations, introduced at different times, the iPSA provides the necessary information to understand how dynamics is achieved, i.e. which parameters are essential and when they become important. PMID:21406095

  11. Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization

    Directory of Open Access Journals (Sweden)

    Jianjun Tang

    2014-01-01

    Assembly precision optimization of complex products has a huge benefit in improving product quality. Due to the impact of a variety of coupled deviation sources, the goal of assembly precision optimization is difficult to determine accurately. In order to achieve assembly precision optimization accurately and rapidly, a sensitivity analysis of the deviation sources is proposed. First, deviation source sensitivity is defined as the ratio of the assembly dimension variation to the deviation source dimension variation. Second, according to the assembly constraint relations, assembly sequences and locating, deviation transmission paths are established by locating the joints between adjacent parts and establishing each part’s datum reference frame. Third, assembly multidimensional vector loops are created using the deviation transmission paths, and the corresponding scalar equations of each dimension are established. Then, the assembly deviation source sensitivity is calculated using a first-order Taylor expansion and a matrix transformation method. Finally, taking the assembly precision optimization of a wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.
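
    The definition above (sensitivity as the ratio of assembly-dimension variation to deviation-source variation, obtained from a first-order Taylor expansion) can be illustrated numerically on a toy two-link assembly. The closing-dimension function, nominal values and tolerances below are invented for illustration, not taken from the wing flap rocker case study.

        import numpy as np

        # toy assembly: closing dimension Y = L1*cos(theta) + L2*cos(theta + phi)
        # deviation sources: link lengths L1, L2 and the joint angles theta, phi
        def closing_dimension(q):
            L1, L2, theta, phi = q
            return L1 * np.cos(theta) + L2 * np.cos(theta + phi)

        q0 = np.array([100.0, 80.0, np.deg2rad(10.0), np.deg2rad(25.0)])   # nominal values

        # first-order sensitivities dY/dq_i by central finite differences
        eps = 1e-6
        sens = np.zeros(4)
        for i in range(4):
            dq = np.zeros(4); dq[i] = eps
            sens[i] = (closing_dimension(q0 + dq) - closing_dimension(q0 - dq)) / (2 * eps)

        tol = np.array([0.05, 0.05, np.deg2rad(0.1), np.deg2rad(0.1)])     # source variations
        contribution = np.abs(sens) * tol         # assembly variation per deviation source
        order = np.argsort(contribution)[::-1]
        print(sens)                               # sensitivity of each deviation source
        print(order)                              # sources ranked for precision optimization

    Ranking the sources by their contribution is what directs the precision-optimization effort to the deviation sources that matter most.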

  12. Sensitivity analysis for improving nanomechanical photonic transducers biosensors

    International Nuclear Information System (INIS)

    Fariña, D; Álvarez, M; Márquez, S; Lechuga, L M; Dominguez, C

    2015-01-01

    The achievement of high sensitivity and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report the analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependency on device design and dimensions. The selected layout (geometry) based on two butt-coupled misaligned waveguides displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness range between 150 nm and 400 nm would increase both the microcantilever bending during the biorecognition process and the optical sensitivity, to 4.8 × 10^-2 nm^-1, an order of magnitude higher than other similar opto-mechanical devices. Moreover, the analysis shows that a single mode behaviour of the propagating radiation is required to avoid modal interference that could misinterpret the readout signal. (paper)

  13. Thick Concrete Specimen Construction, Testing, and Preliminary Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Clayton, Dwight A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hoegh, Kyle [Univ. of Minnesota, Minneapolis, MN (United States); Khazanovich, Lev [Univ. of Minnesota, Minneapolis, MN (United States)

    2015-03-01

    The purpose of the U.S. Department of Energy Office of Nuclear Energy’s Light Water Reactor Sustainability (LWRS) Program is to develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the operating lifetimes of nuclear power plants (NPPs) beyond 60 years. Since many important safety structures in an NPP are constructed of concrete, inspection techniques must be developed and tested to evaluate the internal condition. In-service containment structures generally do not allow for the destructive measures necessary to validate the accuracy of these inspection techniques. This creates a need for comparative testing of the various nondestructive evaluation (NDE) measurement techniques on concrete specimens with known material properties, voids, internal microstructure flaws, and reinforcement locations. A preliminary report detailed some of the challenges associated with thick reinforced concrete sections and prioritized conceptual designs of specimens that could be fabricated to represent NPP concrete structures for using in NDE evaluation comparisons. This led to the construction of the concrete specimen presented in this report, which has sufficient reinforcement density and cross-sectional size to represent an NPP containment wall. Details on how a suitably thick concrete specimen was constructed are presented, including the construction materials, final nominal design schematic, as well as formwork and rigging required to safely meet the desired dimensions of the concrete structure. The report also details the type and methods of forming the concrete specimen as well as information on how the rebar and simulated defects were embedded. Details on how the resulting specimen was transported, safely anchored, and marked to allow access for systematic comparative NDE testing of defects in a representative NPP containment wall concrete specimen are also given. Data collection using the MIRA Ultrasonic NDE equipment and

  14. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360

    Energy Technology Data Exchange (ETDEWEB)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
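
    The uncertainty propagation that SENSIT performs can be summarized by the first-order "sandwich rule", in which the relative variance of an integral response is S^T C S for a sensitivity vector S and a relative covariance matrix C. The sketch below is only a toy three-group illustration of that formula with invented numbers; it is not SENSIT input or output.

```python
import numpy as np

# Toy 3-group example of first-order uncertainty propagation: the relative
# variance of an integral response R is S^T C S, where S holds the relative
# sensitivity coefficients and C is the relative covariance matrix of the
# cross sections. All numbers below are illustrative, not evaluated data.
S = np.array([0.40, 0.25, 0.10])          # relative sensitivities per group

rel_std = np.array([0.05, 0.08, 0.12])    # 5%, 8%, 12% group-wise uncertainties
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])        # assumed correlation between groups
C = np.outer(rel_std, rel_std) * corr     # relative covariance matrix

var_R = S @ C @ S
print(f"relative variance of response: {var_R:.6f}")
print(f"estimated relative standard deviation: {np.sqrt(var_R):.4%}")
```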

  15. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
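
    The failure mode described above can be reproduced with a toy experiment: a naive finite-difference estimate of the derivative of a long-time-averaged quantity of the Lorenz system is erratic and strongly step-size dependent. The sketch below (Python with NumPy; the integration time, step size, and parameter values are arbitrary choices) illustrates that ill-conditioning only; it does not implement the least squares shadowing algorithm itself.

```python
import numpy as np

def lorenz_zbar(rho, T=50.0, dt=0.002, sigma=10.0, beta=8.0/3.0):
    """Time-averaged z of the Lorenz system, integrated with classical RK4."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    s = np.array([1.0, 1.0, 1.0])
    n = int(T / dt)
    zsum = 0.0
    for _ in range(n):
        k1 = f(s); k2 = f(s + 0.5*dt*k1); k3 = f(s + 0.5*dt*k2); k4 = f(s + dt*k3)
        s = s + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
        zsum += s[2]
    return zsum / n

# Naive finite-difference derivative of the long-time average d<z>/d(rho):
# on a chaotic attractor the estimate is erratic and depends strongly on the
# step size, which is the failure mode that least squares shadowing avoids.
for h in (1.0, 0.1, 0.01):
    d = (lorenz_zbar(28.0 + h) - lorenz_zbar(28.0 - h)) / (2*h)
    print(f"h = {h:<5} -> d<z>/d(rho) ~ {d:.3f}")
```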

  16. Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models

    Science.gov (United States)

    Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko

    2015-01-01

    Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600

  17. Global sensitivity analysis of multiscale properties of porous materials

    Science.gov (United States)

    Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.

    2018-02-01

    Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.

  18. Sensitivity analysis overlaps of friction elements in cartridge seals

    Directory of Open Access Journals (Sweden)

    Žmindák Milan

    2018-01-01

    Full Text Available Cartridge seals are self-contained units consisting of a shaft sleeve, seals, and gland plate. The applications of mechanical seals are numerous; the most common example is in bearing production for the automobile industry. This paper deals with the sensitivity analysis of the overlaps of friction elements in a cartridge seal and their influence on the sealing friction torque and compressive force. Furthermore, it describes materials for the manufacture of seals, approaches usually used for the solution of hyperelastic materials by FEM, and gives a short introduction to the topic of wheel bearings. The practical part contains one approach for measuring friction torque, the results of which were used to specify the methodology and precision of the FEM calculation realized with the software ANSYS WORKBENCH. This part also contains the sensitivity analysis of the overlaps of the friction elements.

  19. An overview of the design and analysis of simulation experiments for sensitivity analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs

  20. Preliminary Coupling of MATRA Code for Multi-physics Analysis

    International Nuclear Information System (INIS)

    Kim, Seongjin; Choi, Jinyoung; Yang, Yongsik; Kwon, Hyouk; Hwang, Daehyun

    2014-01-01

    The boundary conditions, such as the inlet temperature, mass flux, averaged heat flux, power distributions of the rods, and core geometry, are given as constant values or functions of time. These conditions are separately calculated and provided to the MATRA code by other codes, such as neutronics or system codes. In addition, the coupling of several codes in different physics fields is the focus of this work. In this study, multi-physics coupling methods were developed for a subchannel code (MATRA) with neutronics codes (MASTER, DeCART) and a fuel performance code (FRAPCON-3). Preliminary evaluation results for representative sample cases are presented. The MASTER and DeCART codes provide the power distribution of the rods in the core to the MATRA code. In the case of the FRAPCON-3 code, the variation of the rod diameter induced by thermal expansion is calculated and provided. The MATRA code, in turn, transfers the thermal-hydraulic conditions that each code needs. Moreover, the coupling method with each code is described.
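
    The coupling pattern sketched below is only a generic operator-splitting (Picard) fixed-point loop between a mock power solver and a mock single-channel energy balance; the solvers, feedback coefficient, and relaxation factor are invented stand-ins and do not represent the actual MATRA, MASTER, DeCART, or FRAPCON-3 interfaces. It shows the basic idea of exchanging power distributions and thermal-hydraulic conditions until a converged multi-physics solution is reached.

```python
import numpy as np

# Mock "neutronics" solver: rod power distribution given coolant temperatures,
# with a simple negative temperature feedback (coefficient is illustrative).
def neutronics(T_cool, T_ref=300.0, alpha=-1.5e-3):
    shape = np.array([0.8, 1.2, 1.4, 1.2, 0.8])               # axial power shape
    return 1000.0 * shape * (1.0 + alpha * (T_cool - T_ref))  # W per node

# Mock "thermal-hydraulics" solver: node-wise coolant temperature rise from a
# single-channel energy balance (mdot_cp lumps mass flow and heat capacity).
def thermal_hydraulics(power, T_inlet=290.0, mdot_cp=50.0):
    return T_inlet + np.cumsum(power) / mdot_cp

T = np.full(5, 290.0)                     # initial coolant temperature guess
for it in range(50):
    P = neutronics(T)                     # code A -> power distribution
    T_new = thermal_hydraulics(P)         # code B -> thermal-hydraulic conditions
    if np.max(np.abs(T_new - T)) < 1e-6:  # converged coupled solution
        break
    T = 0.5 * T + 0.5 * T_new             # under-relaxation for stability

print(f"converged in {it + 1} iterations")
print("rod power [W]:", np.round(P, 1))
print("coolant T [C]:", np.round(T_new, 2))
```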

  1. Seismic response of Transamerica building. I. Data and preliminary analysis

    Science.gov (United States)

    Celebi, M.; Safak, E.

    1991-01-01

    The objective of this paper is to present preliminary analyses of a set of acceleration response records obtained during the October 17, 1989 Loma Prieta earthquake (Ms = 7.1) from the 60-story, vertically tapered, pyramid-shaped Transamerica Building, a landmark of San Francisco. The building was instrumented in 1985 with 22 channels of synchronized sensors consisting of 13 uniaxial accelerometers deployed throughout the structure and connected to a central recording system and three triaxial strong-motion accelerographs at three different levels of the structure. No free-field accelerographs are at the site. The acceleration records permit the study of the behavior of this unique structure. The predominant translational response of the building and the associated frequency at approximately 0.28 Hz are identified from the records and their Fourier amplitude spectra. The records do not indicate any significant torsional motion. However, there is rocking-type soil-structure interaction, and an associated frequency of approximately 2.0 Hz is identified from the Fourier amplitude spectra of the differential motions between the ground level and that at the basement. In addition, the response spectra for the basement motions indicate significant resonance in both directions at a period of approximately 0.5 seconds.

  2. Preliminary Analysis of Species Partitioning in the DWPF Melter

    Energy Technology Data Exchange (ETDEWEB)

    Choi, A. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kesterson, M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Johnson, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); McCabe, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-07-15

    The work described in this report is preliminary in nature since its goal was to demonstrate the feasibility of estimating the off-gas entrainment rates from the Defense Waste Processing Facility (DWPF) melter based on a simple mass balance using measured feed and glass pour stream compositions and time-averaged melter operating data over the duration of one canister-filling cycle. The only case considered in this study involved the SB6 pour stream sample taken while Canister #3472 was being filled over a 20-hour period on 12/20/2010, approximately three months after the bubblers were installed. The analytical results for that pour stream sample provided the necessary glass composition data for the mass balance calculations. To estimate the “matching” feed composition, which is not necessarily the same as that of the Melter Feed Tank (MFT) batch being fed at the time of pour stream sampling, a mixing model was developed involving three preceding MFT batches as well as the one being fed at that time, based on the assumption of perfect mixing in the glass pool but with an induction period to account for the process delays involved in the calcination/fusion step in the cold cap and the melter turnover.

  3. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    OpenAIRE

    Harry R. Millwater; R. Wesley Osborn

    2006-01-01

    A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or su...

  4. Influence analysis to assess sensitivity of the dropout process

    OpenAIRE

    Molenberghs, Geert; Verbeke, Geert; Thijs, Herbert; Lesaffre, Emmanuel; Kenward, Michael

    2001-01-01

    Diggle and Kenward (Appl. Statist. 43 (1994) 49) proposed a selection model for continuous longitudinal data subject to possible non-random dropout. It has provoked a large debate about the role for such models. The original enthusiasm was followed by skepticism about the strong but untestable assumption upon which this type of models invariably rests. Since then, the view has emerged that these models should ideally be made part of a sensitivity analysis. One of their examples is a set of da...

  5. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN)

    Science.gov (United States)

    2015-04-01

    From the results of the thermal analysis study, it can be concluded that urea nitrate (UN), synthesized from urea and nitric acid (HNO3), is safe to store under normal operating conditions. Due to its simple composition, ease of manufacture, and higher detonation parameters than ammonium nitrate, it has become one of the ... . Impact sensitivity testing gave an H50 value of 10.054 ± 0.620 inches. (Subject terms: urea, nitrate, sensitivity, thermal.)

  6. Applications of the TSUNAMI sensitivity and uncertainty analysis methodology

    International Nuclear Information System (INIS)

    Rearden, Bradley T.; Hopper, Calvin M.; Elam, Karla R.; Goluoglu, Sedat; Parks, Cecil V.

    2003-01-01

    The TSUNAMI sensitivity and uncertainty analysis tools under development for the SCALE code system have recently been applied in four criticality safety studies. TSUNAMI is used to identify applicable benchmark experiments for criticality code validation, assist in the design of new critical experiments for a particular need, reevaluate previously computed computational biases, and assess the validation coverage and propose a penalty for noncoverage for a specific application. (author)

  7. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.

  8. Global sensitivity analysis using a Gaussian Radial Basis Function metamodel

    International Nuclear Information System (INIS)

    Wu, Zeping; Wang, Donghui; Okolo N, Patrick; Hu, Fan; Zhang, Weihua

    2016-01-01

    Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on response variables. Amongst the wide range of documented studies on sensitivity measures and analysis, Sobol' indices have received greater portion of attention due to the fact that they can provide accurate information for most models. In this paper, a novel analytical expression to compute the Sobol' indices is derived by introducing a method which uses the Gaussian Radial Basis Function to build metamodels of computationally expensive computer codes. Performance of the proposed method is validated against various analytical functions and also a structural simulation scenario. Results demonstrate that the proposed method is an efficient approach, requiring a computational cost of one to two orders of magnitude less when compared to the traditional Quasi Monte Carlo-based evaluation of Sobol' indices. - Highlights: • RBF based sensitivity analysis method is proposed. • Sobol' decomposition of Gaussian RBF metamodel is obtained. • Sobol' indices of Gaussian RBF metamodel are derived based on the decomposition. • The efficiency of proposed method is validated by some numerical examples.
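
    A minimal sketch of the general idea, assuming NumPy: fit a hand-rolled Gaussian RBF metamodel to a small design of an expensive function (here the Ishigami test function) and then estimate first-order Sobol' indices cheaply on the surrogate with a standard pick-freeze Monte Carlo estimator. This differs from the paper's contribution, which derives the indices analytically from the metamodel coefficients; the kernel width, sample sizes, and regularization below are arbitrary assumptions.

```python
import numpy as np
rng = np.random.default_rng(0)

def ishigami(X, a=7.0, b=0.1):
    return np.sin(X[:, 0]) + a*np.sin(X[:, 1])**2 + b*X[:, 2]**4*np.sin(X[:, 0])

# --- Fit a Gaussian RBF metamodel on a small design (hand-rolled, ridge-regularised).
d, n_train, length = 3, 200, 1.5
X_train = rng.uniform(-np.pi, np.pi, size=(n_train, d))
y_train = ishigami(X_train)

def gaussian_rbf(A, B, l=length):
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * l * l))

w = np.linalg.solve(gaussian_rbf(X_train, X_train) + 1e-8*np.eye(n_train), y_train)
surrogate = lambda X: gaussian_rbf(X, X_train) @ w      # cheap metamodel

# --- First-order Sobol' indices on the surrogate via the pick-freeze estimator.
N = 10000
A = rng.uniform(-np.pi, np.pi, size=(N, d))
B = rng.uniform(-np.pi, np.pi, size=(N, d))
yA, yB = surrogate(A), surrogate(B)
var = yA.var()
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]                                 # freeze column i from A
    Si = np.mean(yA * (surrogate(ABi) - yB)) / var
    print(f"S{i+1} ~ {Si:.3f}")
# Analytical values for comparison: S1 ~ 0.314, S2 ~ 0.442, S3 = 0.
```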

  9. Systemization of burnup sensitivity analysis code (2) (Contract research)

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2008-08-01

    Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, from the viewpoint of improving plant economic efficiency through rationally high-performance cores as well as reliability and safety margins. A distinct improvement in the accuracy of nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distributions and control rod worth, but also burnup characteristics, for example burnup reactivity loss and breeding ratio. For this purpose, it is desired to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to effectively use the burnup data from actual cores in the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence becomes inefficient because of the heavy burden placed on users by the complexity of burnup sensitivity theory and the limitations of the system. It is also desired to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, for the following reason: the computational sequence may be changed for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into component blocks of functionality that can be divided or reconstructed as required.

  10. Preliminary homogeneity study of in-house reference material using neutron activation analysis and X-ray fluorescence

    International Nuclear Information System (INIS)

    Gras, N.; Munoz, L.; Cassorla, V.; Castillo, P.

    1993-01-01

    Although many biological reference materials for quality control of trace element analysis are commercially available, there is still a need for additional local materials for special matrices. In the Latin American region a preliminary study has been commenced involving analytical strategies for the characterization of in-house reference material. A biological sample, prepared in Brazil, constitutes the first regional attempt to prepare reference material. It was analyzed by neutron activation analysis (NAA) and X-ray fluorescence (XRF) to verify its homogeneity. The determination of the trace elements and certain major elements was carried out by instrumental NAA. Trace elements such as Cd, Mn, Mo and Cu were determined using NAA with radiochemical separations to improve the sensitivity and precision. XRF was applied only to major constituents and some trace elements with concentrations of more than 10 μg/g. From a total of 18 elements analyzed, only Fe, Cr and Sc were not homogeneously distributed. (orig.)

  11. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.

    Science.gov (United States)

    Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker

    2015-01-01

    The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of missing data being at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random data (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations to the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's version of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.

  12. B1 -sensitivity analysis of quantitative magnetization transfer imaging.

    Science.gov (United States)

    Boudreau, Mathieu; Stikov, Nikola; Pike, G Bruce

    2018-01-01

    To evaluate the sensitivity of quantitative magnetization transfer (qMT) fitted parameters to B1 inaccuracies, focusing on the difference between two categories of T1 mapping techniques: B1-independent and B1-dependent. The B1-sensitivity of qMT was investigated and compared using two T1 measurement methods: inversion recovery (IR) (B1-independent) and variable flip angle (VFA) (B1-dependent). The study was separated into four stages: 1) numerical simulations, 2) sensitivity analysis of the Z-spectra, 3) healthy subjects at 3T, and 4) comparison using three different B1 imaging techniques. For typical B1 variations in the brain at 3T (±30%), the simulations resulted in errors of the pool-size ratio (F) ranging from -3% to 7% for VFA, and -40% to > 100% for IR, agreeing with the Z-spectra sensitivity analysis. In healthy subjects, pooled whole-brain Pearson correlation coefficients for F (comparing measured double angle and nominal flip angle B1 maps) were ρ = 0.97/0.81 for VFA/IR. This work describes the B1-sensitivity characteristics of qMT, demonstrating that it depends substantially on the B1-dependency of the T1 mapping method. In particular, the pool-size ratio is more robust against B1 inaccuracies if VFA T1 mapping is used, so much so that B1 mapping could be omitted without substantially biasing F. Magn Reson Med 79:276-285, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Fukushima Daiichi unit 1 uncertainty analysis--Preliminary selection of uncertain parameters and analysis methodology

    Energy Technology Data Exchange (ETDEWEB)

    Cardoni, Jeffrey N.; Kalinich, Donald A.

    2014-02-01

    Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi unit (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). However, that study only examined a handful of various model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.

  14. Comparative sequence analysis of acid sensitive/resistance proteins in Escherichia coli and Shigella flexneri

    Science.gov (United States)

    Manikandan, Selvaraj; Balaji, Seetharaaman; Kumar, Anil; Kumar, Rita

    2007-01-01

    The molecular basis for the survival of bacteria under extreme conditions in which growth is inhibited is a question of great current interest. A preliminary study was carried out to determine residue pattern conservation among the antiporters of enteric bacteria responsible for extreme acid sensitivity, especially in Escherichia coli and Shigella flexneri. Here we found molecular evidence that proves the relationship between E. coli and S. flexneri. Multiple sequence alignment of the gadC-coded acid-sensitive antiporter showed many conserved residue patterns at regular intervals in the N-terminal region. It was observed that as the alignment approaches the C-terminus, the number of conserved residues decreases, indicating that the N-terminal region of this protein plays a more active role than the carboxyl-terminal region. The motif FHLVFFLLLGG is well conserved within the entire gadC-coded protein at the amino terminus. The motif is also partially conserved among other antiporters (which are not coded by gadC) involved in acid sensitivity/resistance mechanisms. Phylogenetic cluster analysis confirms the relationship between Escherichia coli and Shigella flexneri. The gadC-coded proteins converge as a clade and diverge from other antiporters belonging to the amino acid-polyamine-organocation (APC) superfamily. PMID:21670792

  15. Probability and sensitivity analysis of machine foundation and soil interaction

    Directory of Open Access Journals (Sweden)

    Králik J., jr.

    2009-06-01

    Full Text Available This paper deals with the sensitivity and probabilistic analysis of the reliability of a machine foundation, depending on the variability of the soil stiffness, the structure geometry, and the compressor operation. The requirements for the design of foundations under rotating machines have increased due to the development of calculation methods and computer tools. During the structural design process, an engineer has to consider problems of the soil-foundation and foundation-machine interaction from the point of view of the safety, reliability, and durability of the structure. The advantages and disadvantages of deterministic and probabilistic analyses of machine foundation resistance are discussed. The sensitivity of the machine foundation to uncertainties in the soil properties due to the long-term rotating movement of the machine is not negligible for design engineers. The effectiveness of the probabilistic design methodology is presented on the example of a compressor and turbine foundation by SIEMENS AG. The Latin Hypercube Sampling (LHS) simulation method was used in the program ANSYS for the analysis of the compressor foundation reliability. The 200 simulations for five load cases were calculated in real time on a PC. The probabilistic analysis gives more comprehensive information about the soil-foundation-machine interaction than the deterministic analysis.
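
    To make the LHS-based workflow concrete, the hedged sketch below propagates uncertain soil stiffness, mass, and operating frequency through a one-degree-of-freedom foundation model and estimates a resonance-risk probability from 200 Latin Hypercube samples. The distributions, numerical values, and failure criterion are invented for illustration (SciPy is assumed for the distribution quantiles); this is not the ANSYS model used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def latin_hypercube(n, dists):
    """Basic Latin Hypercube Sampling: one stratified uniform draw per variable
    per stratum, with strata independently permuted across variables, mapped to
    the target marginals through their inverse CDFs."""
    d = len(dists)
    u = np.empty((n, d))
    for j in range(d):
        strata = rng.permutation(n)                    # one stratum per sample
        u[:, j] = (strata + rng.uniform(size=n)) / n   # stratified U(0,1) draws
    return np.column_stack([dists[j].ppf(u[:, j]) for j in range(d)])

# Uncertain inputs (all distributions and values assumed for illustration only):
dists = [stats.lognorm(s=0.2, scale=2.0e10),   # effective soil/foundation stiffness [N/m]
         stats.norm(loc=2.0e5, scale=1.0e4),   # foundation + machine mass [kg]
         stats.norm(loc=50.0, scale=1.0)]      # machine operating frequency [Hz]

X = latin_hypercube(200, dists)                # 200 simulations, as in the paper
k, m, f_op = X[:, 0], X[:, 1], X[:, 2]
f_nat = np.sqrt(k / m) / (2.0 * np.pi)         # natural frequency of the 1-DOF model

# Illustrative limit state: resonance risk if the natural frequency falls
# within +/-10% of the machine operating frequency.
fail = np.abs(f_nat - f_op) < 0.1 * f_op
print(f"mean natural frequency: {f_nat.mean():.1f} Hz")
print(f"estimated resonance probability: {fail.mean():.3f}")
```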

  16. Preliminary Experimental Analysis of Soil Stabilizers for Contamination Control

    International Nuclear Information System (INIS)

    Lagos, L.; Varona, J.; Zidan, A.; Gudavalli, R.; Wu, Kuang-His

    2006-01-01

    A major focus of the Department of Energy's (DOE's) environmental management mission at the Hanford site involves characterizing and remediating contaminated soil and groundwater; stabilizing contaminated soil; remediating disposal sites; decontaminating and decommissioning structures, and demolishing former plutonium production process buildings, nuclear reactors, and separation plants; maintaining inactive waste sites; transitioning facilities into the surveillance and maintenance program; and mitigating effects to biological and cultural resources from site development and environmental cleanup and restoration activities. For example, a total of 470,914 metric tons of contaminated soil from 100 Areas remediation activities was disposed of at the Environmental Restoration Disposal Facility (ERDF) during 2004. The Applied Research Center (ARC) at Florida International University (FIU) is supporting the Hanford site remediation program by analyzing the effectiveness of several soil stabilizers (fixatives) for contamination control during excavation activities. The study focuses on determining the effects of varying soil conditions, temperature, humidity and wind velocity on the effectiveness of the candidate stabilizers. The test matrix consists of a soil penetration-depth study, wind tunnel experiments for determination of threshold velocity, and temperature- and moisture-controlled drying/curing experiments. These three sets of experiments are designed to verify performance metrics, as well as provide insight into what fundamental forces are altered by the use of the stabilizer. This paper presents only the preliminary results obtained during wind tunnel experiments using dry Hanford soil samples (with 2.7% moisture by weight). These dry soil samples were exposed to varying wind speeds from 2.22 m/sec to 8.88 m/sec. Furthermore, airborne particulate data were collected for the dry Hanford soil experiments using an aerosol analyzer instrument. (authors)

  17. Preliminary study of elemental analysis of hydroxyapatite used neutron activation analysis method

    International Nuclear Information System (INIS)

    Yustinus Purwamargapratala; Rina Mulyaningsih

    2010-01-01

    A preliminary study has been carried out on the elemental analysis of synthesized hydroxyapatite using the neutron activation analysis method. Hydroxyapatite is the main constituent of bones and teeth and can be synthesized from limestone and phosphoric acid. Hydroxyapatite can be used as a bone and tooth substitute material for humans and animals. Testing for metal content is necessary to prevent the risk of damage to bones and teeth due to contamination. In the analysis using the neutron activation analysis method, with samples irradiated at a neutron flux of 10³ n·s⁻¹·cm⁻² for one minute, the impurities Al (48.60±6.47 mg/kg), Cl (38.00±7.47 mg/kg), Mn (1.05±0.19 mg/kg), and Mg (2095.30±203.66 mg/kg) were detected, whereas with irradiation times of 10 minutes and 40 minutes and a decay time of three days, K (103.89±26.82 mg/kg), Br (1617.06±193.66 mg/kg), and Na (125.10±9.57 mg/kg) were found. These results indicate impurities of Al, Cl, Mn, Mg, Br, K and Na, although in very small amounts that do not cause damage to bones and teeth. (author)

  18. Global Sensitivity Analysis for multivariate output using Polynomial Chaos Expansion

    International Nuclear Information System (INIS)

    Garcia-Cabrejo, Oscar; Valocchi, Albert

    2014-01-01

    Many mathematical and computational models used in engineering produce multivariate output that shows some degree of correlation. However, conventional approaches to Global Sensitivity Analysis (GSA) assume that the output variable is scalar. These approaches are applied on each output variable leading to a large number of sensitivity indices that shows a high degree of redundancy making the interpretation of the results difficult. Two approaches have been proposed for GSA in the case of multivariate output: output decomposition approach [9] and covariance decomposition approach [14] but they are computationally intensive for most practical problems. In this paper, Polynomial Chaos Expansion (PCE) is used for an efficient GSA with multivariate output. The results indicate that PCE allows efficient estimation of the covariance matrix and GSA on the coefficients in the approach defined by Campbell et al. [9], and the development of analytical expressions for the multivariate sensitivity indices defined by Gamboa et al. [14]. - Highlights: • PCE increases computational efficiency in 2 approaches of GSA of multivariate output. • Efficient estimation of covariance matrix of output from coefficients of PCE. • Efficient GSA on coefficients of orthogonal decomposition of the output using PCE. • Analytical expressions of multivariate sensitivity indices from coefficients of PCE

  19. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis to evaluate the sensitivity of the WAVEWATCH III model to the selected parameters, to determine how many of these parameters should be considered further, and to rank the significance of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive by comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan, the goal being to find optimal parameter values for improved modeling of wave development. The procedure of adopting optimal parameters in the wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs, based on field observations at the two buoys.

  20. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1987-01-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modeling and model validation studies to avoid over modeling, in site characterization planning to avoid over collection of data, and in performance assessments to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed. 7 references, 2 figures
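
    ADGEN itself enhanced FORTRAN codes through source transformation to produce derivative (adjoint-based) capabilities; as a loose illustration of the underlying idea of mechanically propagating derivatives through an existing calculation, the sketch below implements minimal forward-mode automatic differentiation with dual numbers in Python. The "performance model" and its parameters are invented, and the mechanism shown differs from ADGEN's adjoint approach.

```python
# Minimal forward-mode automatic differentiation with dual numbers. This only
# illustrates propagating derivatives alongside values through an existing
# computation; it is not the ADGEN source-transformation mechanism.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def performance_model(k, q):
    # Stand-in for a physics code: an arithmetic-only "performance measure".
    return 3.0 * k * k + 2.0 * k * q + q

# Sensitivity of the result with respect to k: seed k with derivative 1.
k, q = Dual(1.5, 1.0), Dual(4.0, 0.0)
out = performance_model(k, q)
print(f"response = {out.value:.3f}, d(response)/dk = {out.deriv:.3f}")
# Expected: response = 3*2.25 + 2*1.5*4 + 4 = 22.75, derivative = 6k + 2q = 17.0
```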

  1. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1986-09-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modelling and model validation studies to avoid ''over modelling,'' in site characterization planning to avoid ''over collection of data,'' and in performance assessment to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed

  2. Sensitivity analysis: Interaction of DOE SNF and packaging materials

    International Nuclear Information System (INIS)

    Anderson, P.A.; Kirkham, R.J.; Shaber, E.L.

    1999-01-01

    A sensitivity analysis was conducted to evaluate the technical issues pertaining to possible destructive interactions between spent nuclear fuels (SNFs) and the stainless steel canisters. When issues are identified through such an analysis, they provide the technical basis for answering 'what if' questions and, if needed, for conducting additional analyses, testing, or other efforts to resolve them in order to base the licensing on solid technical grounds. The analysis reported herein systematically assessed the chemical and physical properties and the potential interactions of the materials that comprise typical US Department of Energy (DOE) SNFs and the stainless steel canisters in which they will be stored, transported, and placed in a geologic repository for final disposition. The primary focus in each step of the analysis was to identify any possible phenomena that could potentially compromise the structural integrity of the canisters and to assess their thermodynamic feasibility.

  3. Linear regression and sensitivity analysis in nuclear reactor design

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G.

    2015-01-01

    Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using Brayton’s cycle for the design of a GCFBR. • Performed detailed sensitivity analysis for a set of parameters in a nuclear reactor power system. • Modeled and developed the reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy applicable for sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in nuclear reactor design. The analysis helps to determine the parameters on which an LR model can be fit for predictive analysis. For those parameters, a regression surface is created based on trial data and predictions are made using this surface. A general SA strategy to identify the influential parameters that affect the operation of the reactor is described. Identification of design parameters and validation of the linearity assumption for the application of LR to reactor design are performed based on a set of tests. The testing methods used to determine the behavior of the parameters can serve as a general strategy for UA and SA of nuclear reactor models and thermal-hydraulics calculations. A design of a gas-cooled fast breeder reactor (GCFBR), with thermal–hydraulics and energy transfer, has been used to demonstrate this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform regression analysis, multivariate variance analysis, and analysis of the collinearity of the data.
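
    A minimal sketch of the regression-based sensitivity idea, with an invented response surface standing in for the MCNP6 reactor model: sample the design parameters, fit an ordinary least-squares surface, and use standardized regression coefficients (SRCs) as sensitivity measures, with R² serving as a check on the linearity assumption. The parameter names, ranges, and response function below are assumptions for illustration only.

```python
import numpy as np
rng = np.random.default_rng(2)

# Stand-in for reactor trial data: an assumed smooth response of three design
# parameters (enrichment, coolant flow, inlet temperature) plus noise.
n = 300
X = rng.uniform([0.10, 400.0, 600.0], [0.20, 600.0, 700.0], size=(n, 3))
y = 100.0*X[:, 0] + 0.02*X[:, 1] - 0.05*X[:, 2] + rng.normal(0.0, 0.5, n)

# Fit a linear regression surface; standardized regression coefficients (SRCs)
# act as sensitivity measures, and R^2 checks the linearity assumption.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1.0 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
src = coef[1:] * X.std(axis=0) / y.std()

for name, s in zip(["enrichment", "coolant flow", "inlet temperature"], src):
    print(f"SRC({name}) = {s:+.3f}")
print(f"R^2 = {r2:.3f}  (close to 1 supports the linearity assumption)")
```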

  4. Chemical Analysis of the Moon at the Surveyor VI Landing Site: Preliminary Results.

    Science.gov (United States)

    Turkevich, A L; Patterson, J H; Franzgrote, E J

    1968-06-07

    The alpha-scattering experiment aboard soft-landing Surveyor VI has provided a chemical analysis of the surface of the moon in Sinus Medii. The preliminary results indicate that, within experimental errors, the composition is the same as that found by Surveyor V in Mare Tranquillitatis. This finding suggests that large portions of the lunar maria resemble basalt in composition.

  5. Chemical Analysis of the Moon at the Surveyor VII Landing Site: Preliminary Results.

    Science.gov (United States)

    Turkevich, A L; Franzgrote, E J; Patterson, J H

    1968-10-04

    The alpha-scattering experiment aboard Surveyor VII has provided a chemical analysis of the moon in the area of the crater Tycho. The preliminary results indicate a chemical composition similar to that already found at two mare sites, but with a lower concentration of elements of the iron group (titanium through copper).

  6. Current Mooring Design in Partner WECs and Candidates for Preliminary Analysis

    DEFF Research Database (Denmark)

    Thomsen, Jonas Bjerg; Ferri, Francesco; Kofoed, Jens Peter

    This report is the combined report of Commercial Milestone "CM1: Design and Cost of Current Mooring Solutions of Partner WECs" and Milestone "M3: Mooring Solutions for Preliminary Analysis" of the EUDP project "Mooring Solutions for Large Wave Energy Converters". The report covers a description o...

  7. ANSI/ASHRAE/IES Standard 90.1-2013 Preliminary Determination: Quantitative Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Halverson, Mark A.; Rosenberg, Michael I.; Wang, Weimin; Zhang, Jian; Mendon, Vrushali V.; Athalye, Rahul A.; Xie, YuLong; Hart, Reid; Goel, Supriya

    2014-03-01

    This report provides a preliminary quantitative analysis to assess whether buildings constructed according to the requirements of ANSI/ASHRAE/IES Standard 90.1-2013 would result in energy savings compared with buildings constructed to ANSI/ASHRAE/IES Standard 90.1-2010.

  8. Preliminary fire hazard analysis for the PUTDR and TRU trenches in the Solid Waste Burial Ground

    International Nuclear Information System (INIS)

    Gaschott, L.J.

    1995-01-01

    This document represents the Preliminary Fire Hazards Analysis for the Pilot Unvented TRU Drum Retrieval effort and for the Transuranic drum trenches in the low level burial grounds. The FHA was developed in accordance with DOE Order 5480.7A to address major hazards inherent in the facility

  9. A Preliminary Analysis of the Outcomes of Students Assisted by VET FEE-HELP: Summary

    Science.gov (United States)

    National Centre for Vocational Education Research (NCVER), 2015

    2015-01-01

    This summary highlights the key findings from the report "A preliminary analysis of the outcomes of students assisted by VET FEE-HELP". VET FEE-HELP is an income-contingent loan scheme that assists eligible students undertaking certain vocational education training (VET) courses with an approved provider by paying for all or part of…

  10. Expression, purification, crystallization and preliminary X-ray analysis of Aeromonas hydrophilia metallo-β-lactamase

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Nandini, E-mail: nandini-sharma@merck.com; Toney, Jeffrey H.; Fitzgerald, Paula M. D.

    2005-02-01

    Crystallization and preliminary X-ray analysis of the CphA metallo-β-lactamase from A. hydrophilia are described. The crystals belong to space group P2₁2₁2, with unit-cell parameters a = 40.75, b = 42.05, c = 128.88 Å, and diffract to 1.8 Å resolution.

  11. Preliminary safety analysis of unscrammed events for KLFR

    International Nuclear Information System (INIS)

    Kim, S.J.; Ha, G.S.

    2005-01-01

    The report presents the design features of the KLFR, the safety analysis code, steady-state calculation results, and analysis results of unscrammed events. The calculations of the steady state and of the unscrammed events have been performed for the conceptual design of the KLFR using the SSC-K code. The UTOP event results in no fuel damage and no centre-line melting. The inherent safety features are demonstrated through the analysis of the ULOHS event. Although the analysis of the ULOF involves many uncertainties in the pump design, the results show the inherent safety characteristics. In the case of the ULOF, a natural-circulation flow of about 6% of the rated flow is established. In the metallic fuel rod, the cladding temperature is somewhat high due to the low heat transfer coefficient of lead. The ULOHS event should be considered in the design of the RVACS for long-term cooling.

  12. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide sensitivity analyses in literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.

  13. Optimizing human activity patterns using global sensitivity analysis.

    Science.gov (United States)

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
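
    For readers unfamiliar with the regularity statistic used above, the sketch below implements a plain sample entropy (SampEn) estimator, -ln(A/B) over template matches of length m and m+1 within a Chebyshev tolerance r, and applies it to a highly regular and a highly irregular synthetic series. It is a generic textbook-style implementation, not the DASim code; the choices m = 2 and r = 0.2·std are conventional defaults.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series: -ln(A/B), where B counts
    template matches of length m and A those of length m+1 (self-matches
    excluded), using the Chebyshev distance with tolerance r."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 20*np.pi, 1000))   # proxy for a very regular schedule
irregular = rng.normal(size=1000)                   # proxy for a very irregular one
print(f"SampEn(regular)   = {sample_entropy(regular):.3f}")
print(f"SampEn(irregular) = {sample_entropy(irregular):.3f}")
```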

  14. First fungal genome sequence from Africa: A preliminary analysis

    Directory of Open Access Journals (Sweden)

    Rene Sutherland

    2012-01-01

    Full Text Available Some of the most significant breakthroughs in the biological sciences this century will emerge from the development of next generation sequencing technologies. The ease of availability of DNA sequence made possible through these new technologies has given researchers opportunities to study organisms in a manner that was not possible with Sanger sequencing. Scientists will, therefore, need to embrace genomics, as well as develop and nurture the human capacity to sequence genomes and utilise the 'tsunami' of data that emerge from genome sequencing. In response to these challenges, we sequenced the genome of Fusarium circinatum, a fungal pathogen of pine that causes pitch canker, a disease of great concern to the South African forestry industry. The sequencing work was conducted in South Africa, making F. circinatum the first eukaryotic organism for which the complete genome has been sequenced locally. Here we report on the process that was followed to sequence, assemble and perform a preliminary characterisation of the genome. Furthermore, details of the computer annotation and manual curation of this genome are presented. The F. circinatum genome was found to be nearly 44 million bases in size, which is similar to that of four other Fusarium genomes that have been sequenced elsewhere. The genome contains just over 15 000 open reading frames, which is less than that of the related species, Fusarium oxysporum, but more than that for Fusarium verticillioides. Amongst the various putative gene clusters identified in F. circinatum, those encoding the secondary metabolites fumonisin and fusarin appeared to harbour evidence of gene translocation. It is anticipated that similar comparisons of other loci will provide insights into the genetic basis for pathogenicity of the pitch canker pathogen. Perhaps more importantly, this project has engaged a relatively large group of scientists

  15. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome OAT shortcomings and easy to implement. These methods also allow the concept of factors importance to be defined rigorously, thus making the factors importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA

  16. Sensitivity analysis practices: Strategies for model-based inference

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]

    2006-10-15

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome OAT shortcomings and easy to implement. These methods also allow the concept of factors importance to be defined rigorously, thus making the factors importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.

  17. Regional and parametric sensitivity analysis of Sobol' indices

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2015-01-01

    Nowadays, utilizing the Monte Carlo estimators for variance-based sensitivity analysis has gained sufficient popularity in many research fields. These estimators are usually based on n+2 sample matrices well designed for computing both the main and total effect indices, where n is the input dimension. The aim of this paper is to use such n+2 sample matrices to investigate how the main and total effect indices change when the uncertainty of the model inputs are reduced. For this purpose, the regional main and total effect functions are defined for measuring the changes on the main and total effect indices when the distribution range of one input is reduced, and the parametric main and total effect functions are introduced to quantify the residual main and total effect indices due to the reduced variance of one input. Monte Carlo estimators are derived for all the developed sensitivity concepts based on the n+2 samples matrices originally used for computing the main and total effect indices, thus no extra computational cost is introduced. The Ishigami function, a nonlinear model and a planar ten-bar structure are utilized for illustrating the developed sensitivity concepts, and for demonstrating the efficiency and accuracy of the derived Monte Carlo estimators. - Highlights: • The regional main and total effect functions are developed. • The parametric main and total effect functions are introduced. • The proposed sensitivity functions are all generalizations of Sobol' indices. • The Monte Carlo estimators are derived for the four sensitivity functions. • The computational cost of the estimators is the same as that of Sobol' indices

  18. A sensitivity analysis of regional and small watershed hydrologic models

    Science.gov (United States)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurate remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  19. Stochastic sensitivity analysis and Langevin simulation for neural network learning

    International Nuclear Information System (INIS)

    Koda, Masato

    1997-01-01

    A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method

  20. An easily implemented static condensation method for structural sensitivity analysis

    Science.gov (United States)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to a joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and that the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
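
    The sensitivity schemes compared in the abstract can be illustrated on a toy two-degree-of-freedom spring model (the model, stiffness values, and step size are invented stand-ins for the condensed structural system):

```python
import numpy as np

# Hypothetical 2-DOF spring model: the static response u(k) solves K(k) u = f,
# and we want du/dk for the "joint" stiffness k.
def response(k_joint, k_fixed=100.0, f=np.array([0.0, 1.0])):
    K = np.array([[k_fixed + k_joint, -k_joint],
                  [-k_joint,           k_joint]])
    return np.linalg.solve(K, f)

k0 = 50.0
h = 1e-3 * k0                       # relative finite-difference step
u0 = response(k0)

# Direct (analytic) method: differentiate K u = f  ->  K du = -(dK/dk) u
dK = np.array([[1.0, -1.0], [-1.0, 1.0]])
K0 = np.array([[100.0 + k0, -k0], [-k0, k0]])
direct = np.linalg.solve(K0, -dK @ u0)

forward = (response(k0 + h) - u0) / h                       # O(h) truncation error
central = (response(k0 + h) - response(k0 - h)) / (2 * h)   # O(h^2) truncation error
print(direct, forward, central)
```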

  1. Sensitivity Analysis as a Tool to assess Energy-Water Nexus in India

    Science.gov (United States)

    Priyanka, P.; Banerjee, R.

    2017-12-01

    Rapid urbanization, population growth and related structural changes within the economy of a developing country act as stressors on energy and water demand, forming a well-established energy-water nexus. The energy-water nexus has been studied thoroughly at various spatial scales, viz. the city level, river basin level and national level, to guide different stakeholders toward sustainable management of energy and water. However, the temporal dimensions of the energy-water nexus at the national level have not been thoroughly investigated because of the unavailability of relevant time-series data. In this study we investigated the energy-water nexus at the national level using environmentally-extended input-output tables for the Indian economy (2004-2013) as provided by the EORA database. A perturbation-based sensitivity analysis is proposed to highlight the critical nodes of interaction among economic sectors, which is further used to detect the synergistic effects of energy and water consumption. Technology changes (interpreted as changes in the values of nodes) result in modifications of the interactions among economic sectors, and synergy is affected through direct as well as indirect effects. Indirect effects are not easily understood through preliminary examination of the data, hence sensitivity analysis within an input-output framework is important for understanding the indirect effects. Furthermore, the time-series data help in developing an understanding of the dynamics of the synergistic effects. We identified the key sectors and technology changes for the Indian economy, which will provide better decision support for policy makers regarding the sustainable use of energy-water resources in India.
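
    A minimal sketch of the perturbation idea in an input-output setting, with made-up coefficients rather than the EORA tables (the sector structure, energy intensities, and 1% perturbation size are all assumptions):

```python
import numpy as np

# Illustrative 3-sector input-output model: total output x solves x = A x + f,
# i.e. x = (I - A)^{-1} f (Leontief inverse).
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.20]])
f = np.array([100.0, 80.0, 60.0])        # final demand (arbitrary units)
energy_int = np.array([0.8, 0.3, 0.5])   # assumed direct energy use per unit output

def total_energy(A, f):
    x = np.linalg.solve(np.eye(3) - A, f)
    return energy_int @ x                # direct + indirect energy embodied in demand

base = total_energy(A, f)

# Perturb each technical coefficient ("node") by 1% and record the elasticity of
# total energy use; off-diagonal responses capture the indirect effects.
sens = np.zeros_like(A)
for i in range(3):
    for j in range(3):
        Ap = A.copy()
        Ap[i, j] *= 1.01
        sens[i, j] = (total_energy(Ap, f) / base - 1.0) / 0.01
print(np.round(sens, 3))
```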

  2. Preliminary design analysis of the ALT-II limiter for TEXTOR

    International Nuclear Information System (INIS)

    Koski, J.A.; Boyd, R.D.; Kempka, S.M.; Romig, A.D. Jr.; Smith, M.F.; Watson, R.D.; Whitley, J.B.; Conn, R.W.; Grotz, S.P.

    1984-01-01

    Installation of a large toroidal belt pump limiter, Advanced Limiter Test II (ALT-II), on the TEXTOR tokamak at Juelich, FRG is anticipated for early 1986. This paper discusses the preliminary mechanical design and materials considerations undertaken as part of the feasibility study phase for ALT-II. Since the actively cooled limiter blade is the component in direct contact with the plasma edge, and thus subject to the severe plasma environment, most preliminary design efforts have concentrated on analysis of the blade. The screening process which led to the recommended preliminary design, consisting of a dispersion-strengthened copper or OFHC copper cover plate over an austenitic stainless steel base plate, is discussed. A 1 to 3 mm thick low atomic number coating consisting of a graded plasma-sprayed silicon carbide-aluminium composite is recommended, subject to further experiment and evaluation. Thermal-hydraulic and stress analyses of the limiter blade are also discussed. (orig.)

  3. ORNL: PWR-BDHT analysis procedure, a preliminary overview

    International Nuclear Information System (INIS)

    Cliff, S.B.

    1978-01-01

    The computer programs currently used in the analysis of the ORNL-PWR Blowdown Heat Transfer Separate-Effects Program are overviewed. The current linkages and relationships among the programs are given along with general comments about the future directions of some of these programs. The overview is strictly from the computer science point of view with only minimal information concerning the engineering aspects of the analysis procedure

  4. Nuclear data sensitivity/uncertainty analysis for XT-ADS

    International Nuclear Information System (INIS)

    Sugawara, Takanori; Sarotto, Massimo; Stankovskiy, Alexey; Van den Eynde, Gert

    2011-01-01

    Highlights: → The sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. → The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. → When the target accuracy of 0.3%Δk for the criticality was considered, the uncertainties did not satisfy it. → To achieve this accuracy, the uncertainties should be reduced by experiments under adequate conditions. - Abstract: The XT-ADS, an accelerator-driven system for an experimental demonstration, has been investigated in the framework of the IP EUROTRANS FP6 project. In this study, sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. The sensitivity analysis showed that the sensitivity coefficients differed significantly when the geometry models and calculation codes were changed. The uncertainty analysis confirmed that the uncertainties deduced from the covariance data varied significantly depending on the covariance library used. The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. When the target accuracy of 0.3%Δk for the criticality was considered, the uncertainties did not satisfy it. To achieve this accuracy, the uncertainties should be reduced by experiments under adequate conditions.
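
    Covariance-based uncertainty estimates of this kind typically follow the "sandwich rule"; a toy sketch with invented three-group sensitivities and covariances (not the XT-ADS data) is:

```python
import numpy as np

# Sandwich rule for the relative uncertainty of k_eff from group-wise
# sensitivity coefficients S and a relative covariance matrix C of the data:
#   (delta k / k)^2 = S^T C S.
# Three fictitious energy groups; all numbers are illustrative only.
S = np.array([0.15, 0.40, 0.25])          # dk/k per relative cross-section change
rel_std = np.array([0.02, 0.03, 0.05])    # 2%, 3%, 5% data uncertainties
corr = np.array([[1.0, 0.5, 0.2],
                 [0.5, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])
C = np.outer(rel_std, rel_std) * corr     # relative covariance matrix

dk_over_k = np.sqrt(S @ C @ S)
print(f"k_eff uncertainty: {100 * dk_over_k:.2f} %")
```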

  5. Uncertainty and sensitivity analysis of the nuclear fuel thermal behavior

    Energy Technology Data Exchange (ETDEWEB)

    Boulore, A., E-mail: antoine.boulore@cea.fr [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Struzik, C. [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Gaudier, F. [Commissariat a l' Energie Atomique (CEA), DEN, Systems and Structure Modeling Department, 91191 Gif-sur-Yvette (France)

    2012-12-15

    Highlights: ► A complete quantitative method for uncertainty propagation and sensitivity analysis is applied. ► The thermal conductivity of UO{sub 2} is modeled as a random variable. ► The first source of uncertainty is the linear heat rate. ► The second source of uncertainty is the thermal conductivity of the fuel. - Abstract: In the global framework of nuclear fuel behavior simulation, the response of the models describing the physical phenomena occurring during irradiation in the reactor is mainly conditioned by the confidence in the calculated temperature of the fuel. Amongst all parameters influencing the temperature calculation in our fuel rod simulation code (METEOR V2), several sources of uncertainty have been identified as being the most sensitive: thermal conductivity of UO{sub 2}, radial distribution of power in the fuel pellet, local linear heat rate in the fuel rod, geometry of the pellet and thermal transfer in the gap. Expert judgment and inverse methods have been used to model the uncertainty of these parameters using theoretical distributions and correlation matrices. Propagation of these uncertainties in the METEOR V2 code using the URANIE framework and a Monte-Carlo technique has been performed for different experimental irradiations of UO{sub 2} fuel. At every time step of the simulated experiments, we obtain a statistical distribution of temperature which results from the initial distributions of the uncertain parameters. We can then estimate confidence intervals for the calculated temperature. In order to quantify the sensitivity of the calculated temperature to each of the uncertain input parameters and data, we have also performed a sensitivity analysis using the first-order Sobol' indices.
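
    A schematic of the Monte Carlo propagation step, with a crude one-resistance temperature model standing in for METEOR V2/URANIE (the formula, distributions and parameter values are illustrative assumptions, not the code's models):

```python
import numpy as np

rng = np.random.default_rng(2)

# Crude stand-in for the fuel thermal problem: centreline temperature from a
# simple series-resistance model (gap + pellet conduction).
def centerline_T(q_lin, k_fuel, h_gap, T_cool=600.0, r=4.1e-3):
    return T_cool + q_lin / (2 * np.pi) * (1.0 / (r * h_gap) + 1.0 / (2.0 * k_fuel))

N = 100_000
q_lin = rng.normal(20e3, 0.05 * 20e3, N)     # linear heat rate, W/m (5% std, assumed)
k_fuel = rng.normal(3.0, 0.10 * 3.0, N)      # UO2 conductivity, W/m/K (10% std, assumed)
h_gap = rng.lognormal(np.log(5e3), 0.2, N)   # gap conductance, W/m^2/K (assumed)

T = centerline_T(q_lin, k_fuel, h_gap)
lo, hi = np.percentile(T, [2.5, 97.5])
print(f"mean {T.mean():.0f} K, 95% interval [{lo:.0f}, {hi:.0f}] K")

# Crude first-order sensitivity ranking via squared correlation with each input.
for name, x in [("q_lin", q_lin), ("k_fuel", k_fuel), ("h_gap", h_gap)]:
    print(name, round(np.corrcoef(x, T)[0, 1] ** 2, 2))
```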

  6. Failure mode analysis of preliminary design of ITER divertor impurity monitor

    International Nuclear Information System (INIS)

    Kitazawa, Sin-iti; Ogawa, Hiroaki

    2016-01-01

    Highlights: • The divertor impurity influx monitor for ITER (DIM) is procured by JADA. • The DIM is designed to observe light from the fusion plasma directly. • The DIM is in the preliminary design phase. • Failure modes of the DIM were prepared for RAMI analysis. • RAMI analysis of the DIM was performed to reduce technical risks. - Abstract: The objective of the divertor impurity influx monitor (DIM) for ITER is to measure the parameters of impurities and hydrogen isotopes (tritium, deuterium, and hydrogen) in the divertor plasma using visible and UV spectroscopic techniques in the 200–1000 nm wavelength range. In ITER, special provisions are required to ensure accuracy and full functionality of the diagnostic components under harsh conditions (high temperature, high magnetic field, high vacuum, and high radiation field). The Japan Domestic Agency is preparing the preliminary design of the ITER DIM system, which will be installed in the upper, equatorial and lower ports. The optical and mechanical designs of the DIM are developed to fit ITER’s requirements and meet the spatial resolution requirements. Some auxiliary systems were examined via prototyping. The preliminary design of the ITER DIM system was evaluated by RAMI analysis. The availability of the designed system is adequately high to satisfy the project requirements. However, some equipment does not yet have a settled design, which may pose technical risks. The preliminary design should be modified to reduce these technical risks and to prepare for the final design.

  7. Biosphere dose conversion Factor Importance and Sensitivity Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This report presents importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, the biosphere dose conversion factors (BDCFs), for the groundwater and the volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty

  8. Summary of the Preliminary Analysis of Savannah River Depleted Uranium Trioxide

    International Nuclear Information System (INIS)

    2010-01-01

    This report summarizes a preliminary special analysis of the Savannah River Depleted Uranium Trioxide waste stream (SVRSURANIUM03, Revision 2). The analysis is considered preliminary because a final waste profile has not been submitted for review. The special analysis is performed to determine the acceptability of the waste stream for shallow land burial at the Area 5 Radioactive Waste Management Site (RWMS) at the Nevada National Security Site (NNSS). The Savannah River Depleted Uranium Trioxide waste stream requires a special analysis because the waste stream's sum of fractions exceeds one. The 99Tc activity concentration is 98 percent of the NNSS Waste Acceptance Criteria and the largest single contributor to the sum of fractions.

  9. Bioelectrical impedance analysis for bovine milk: Preliminary results

    Science.gov (United States)

    Bertemes-Filho, P.; Valicheski, R.; Pereira, R. M.; Paterno, A. S.

    2010-04-01

    This work reports an investigation and analysis of bovine milk quality using electrical impedance spectroscopy (EIS). The samples were first characterized by chemical analysis using Fourier transform mid-infrared spectroscopy (FTIR) and flow cytometry. A set of milk samples (100 ml each) obtained from 17 different cows in lactation, with and without mastitis, was analyzed with the proposed EIS technique. The samples were adulterated by adding distilled water and hydrogen peroxide in a controlled manner. FTIR spectroscopy and flow cytometry were performed, and impedance measurements were made in a frequency range from 500 Hz up to 1 MHz with an implemented EIS system. The system's phase shift was compensated by measuring saline solutions. It was possible to show that the results obtained with the Bioelectrical Impedance Analysis (BIA) technique may detect changes in the milk caused by mastitis and by the presence of water and hydrogen peroxide in the bovine milk.

  10. A framework for sensitivity analysis of decision trees.

    Science.gov (United States)

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
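
    A toy illustration of the probability-perturbation idea on a two-outcome, two-strategy tree (the payoffs, baseline probabilities, and Dirichlet perturbation scheme are all assumptions for illustration, not the authors' tool):

```python
import numpy as np

# Two strategies, two chance outcomes each (hypothetical payoffs): check how often
# the expected-value-maximizing strategy survives perturbations of the probabilities.
payoff = {"A": np.array([100.0, -40.0]),   # outcomes under strategy A
          "B": np.array([60.0, 10.0])}     # outcomes under strategy B
p0 = np.array([0.6, 0.4])                  # baseline probabilities (primitives)

def best(p):
    ev = {s: float(p @ v) for s, v in payoff.items()}
    return max(ev, key=ev.get), ev

rng = np.random.default_rng(3)
baseline_choice, _ = best(p0)
flips = 0
for _ in range(10_000):
    # Dirichlet noise centred on p0; the concentration controls perturbation size.
    p = rng.dirichlet(p0 * 50)
    if best(p)[0] != baseline_choice:
        flips += 1
print(baseline_choice, f"switches in {flips / 100:.1f}% of perturbed trees")
```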

  11. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
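
    A compact sketch of the correlation-based part of such a study, with a made-up stand-in response in place of BISON/Dakota outputs (parameter names and the response formula are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Sample a few "input parameters", run a surrogate response, and rank the inputs
# by Pearson (linear) and Spearman (rank/monotonic) correlation with the output.
N = 300
inputs = {
    "lin_heat_rate": rng.normal(1.0, 0.05, N),
    "fuel_conductivity": rng.normal(1.0, 0.10, N),
    "gap_thickness": rng.normal(1.0, 0.08, N),
}
# Hypothetical monotonic, non-linear response (not the benchmark model itself).
Y = 1000 * inputs["lin_heat_rate"] / inputs["fuel_conductivity"] ** 0.7 \
    + 50 * np.exp(inputs["gap_thickness"]) + rng.normal(0, 5, N)

for name, x in inputs.items():
    r, _ = stats.pearsonr(x, Y)
    rho, _ = stats.spearmanr(x, Y)
    print(f"{name:18s} Pearson {r:+.2f}  Spearman {rho:+.2f}")
```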

  12. Isolation and preliminary function analysis of a Na+/H+ antiporter ...

    African Journals Online (AJOL)

    A full-length cDNA Na+/H+ antiporter gene (MzNHX1) was isolated from Malus zumi according to the homologous Na+/H+ antiporter gene region in plants. Sequence analysis indicated that the cDNA was 2062 bp in length, including an open reading frame (ORF) of 1629 bp, which encoded a predicted polypeptide of 542 ...

  13. A Preliminary Analysis of a Behavioral Classrooms Needs Assessment

    Science.gov (United States)

    Leaf, Justin B.; Leaf, Ronald; McCray, Cynthia; Lamkins, Carol; Taubman, Mitchell; McEachin, John; Cihon, Joseph H.

    2016-01-01

    Today many special education classrooms implement procedures based upon the principles of Applied Behavior Analysis (ABA) to establish educationally relevant skills and decrease aberrant behaviors. However, it is difficult for school staff and consultants to evaluate the implementation of various components of ABA and general classroom set up. In…

  14. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses that may be useful in determining the distribution of Tc-99 in the various SDUs over time and in determining flow balances for the SDUs.

  15. Sensitivity analysis and design optimization through automatic differentiation

    International Nuclear Information System (INIS)

    Hovland, Paul D; Norris, Boyana; Strout, Michelle Mills; Bhowmick, Sanjukta; Utke, Jean

    2005-01-01

    Automatic differentiation is a technique for transforming a program or subprogram that computes a function, including arbitrarily complex simulation codes, into one that computes the derivatives of that function. We describe the implementation and application of automatic differentiation tools. We highlight recent advances in the combinatorial algorithms and compiler technology that underlie successful implementation of automatic differentiation tools. We discuss applications of automatic differentiation in design optimization and sensitivity analysis. We also describe ongoing research in the design of language-independent source transformation infrastructures for automatic differentiation algorithms

  16. Analysis of Hydrological Sensitivity for Flood Risk Assessment

    Directory of Open Access Journals (Sweden)

    Sanjay Kumar Sharma

    2018-02-01

    Full Text Available In order for the Indian government to maximize Integrated Water Resource Management (IWRM), the Brahmaputra River has played an important role in the undertaking of the Pilot Basin Study (PBS) due to the Brahmaputra River’s annual regional flooding. The selected Kulsi River, a part of the Brahmaputra sub-basin, experienced severe floods in 2007 and 2008. In this study, the Rainfall-Runoff-Inundation (RRI) hydrological model was used to simulate the recent historical floods in order to understand and improve the integrated flood risk management plan. The ultimate objective was to evaluate the sensitivity of the hydrologic simulation to different Digital Elevation Model (DEM) sources, coupled with DEM smoothing techniques, with a particular focus on the comparison of river discharge and flood inundation extent. As a result, the sensitivity analysis showed that, among the input parameters, the RRI model is highly sensitive to Manning’s roughness coefficient values for flood plains, followed by the source of the DEM, and then soil depth. After optimizing its parameters, the simulated inundation extent proved more sensitive to the smoothing filter than the simulated discharge at the outlet was. Finally, the calibrated and validated RRI model simulations agreed well with the observed discharge and the Moderate Resolution Imaging Spectroradiometer (MODIS)-detected flood extents.

  17. Sensitivity analysis for the effects of multiple unmeasured confounders.

    Science.gov (United States)

    Groenwold, Rolf H H; Sterne, Jonathan A C; Lawlor, Debbie A; Moons, Karel G M; Hoes, Arno W; Tilling, Kate

    2016-09-01

    Observational studies are prone to (unmeasured) confounding. Sensitivity analysis of unmeasured confounding typically focuses on a single unmeasured confounder. The purpose of this study was to assess the impact of multiple (possibly weak) unmeasured confounders. Simulation studies were performed based on parameters estimated from the British Women's Heart and Health Study, including 28 measured confounders and assuming no effect of ascorbic acid intake on mortality. In addition, 25, 50, or 100 unmeasured confounders were simulated, with various mutual correlations and correlations with measured confounders. The correlated unmeasured confounders did not need to be strongly associated with exposure and outcome to substantially bias the exposure-outcome association of interest, provided that there are sufficiently many unmeasured confounders. Correlations between unmeasured confounders, in addition to the strength of their relationship with exposure and outcome, are key drivers of the magnitude of unmeasured confounding and should be considered in sensitivity analyses. However, if the unmeasured confounders are correlated with measured confounders, the bias yielded by unmeasured confounders is partly removed through adjustment for the measured confounders. Discussions of the potential impact of unmeasured confounding in observational studies, and sensitivity analyses to examine this, should focus on the potential for the joint effect of multiple unmeasured confounders to bias results. Copyright © 2016 Elsevier Inc. All rights reserved.
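
    The core simulation idea, many weak but mutually correlated unmeasured confounders biasing a null association, can be sketched as follows (a continuous outcome and ordinary least squares are used for simplicity; the coefficients and correlation are invented, not the BWHHS parameters):

```python
import numpy as np

rng = np.random.default_rng(5)

# 50 unmeasured confounders, each only weakly related to exposure and outcome,
# but mutually correlated; the true exposure effect on the outcome is zero.
n, m, rho = 5000, 50, 0.3
cov = np.full((m, m), rho) + (1 - rho) * np.eye(m)
U = rng.multivariate_normal(np.zeros(m), cov, size=n)

exposure = 0.05 * U.sum(axis=1) + rng.normal(0, 1, n)
outcome = 0.05 * U.sum(axis=1) + rng.normal(0, 1, n)

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print("crude estimate:", round(ols_slope(exposure, outcome), 3))     # biased away from 0

# Adjusting for the confounders (here: all of them) removes the bias.
resid_x = exposure - U @ np.linalg.lstsq(U, exposure, rcond=None)[0]
resid_y = outcome - U @ np.linalg.lstsq(U, outcome, rcond=None)[0]
print("adjusted estimate:", round(ols_slope(resid_x, resid_y), 3))   # close to 0
```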

  18. High order effects in cross section sensitivity analysis

    International Nuclear Information System (INIS)

    Greenspan, E.; Karni, Y.; Gilai, D.

    1978-01-01

    Two types of high order effects associated with perturbations in the flux shape are considered: Spectral Fine Structure Effects (SFSE) and non-linearity between changes in performance parameters and data uncertainties. SFSE are investigated in Part I using a simple single resonance model. Results obtained for each of the resolved and for representative unresolved resonances of 238U in a ZPR-6/7-like environment indicate that SFSE can have a significant contribution to the sensitivity of group constants to resonance parameters. Methods to account for SFSE both for the propagation of uncertainties and for the adjustment of nuclear data are discussed. A Second Order Sensitivity Theory (SOST) is presented, and its accuracy relative to that of the first order sensitivity theory and of the direct substitution method is investigated in Part II. The investigation is done for the non-linear problem of the effect of changes in the 297 keV sodium minimum cross section on the transport of neutrons in a deep-penetration problem. It is found that the SOST provides a satisfactory accuracy for cross section uncertainty analysis. For the same degree of accuracy, the SOST can be significantly more efficient than the direct substitution method

  19. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

    Science.gov (United States)

    Yan, Fuyong; Han, De-Hua

    2018-04-01

    There is significant uncertainty in measuring Thomsen’s parameter δ in the laboratory even though the dimensions and orientations of the rock samples are known. It is expected that more challenges will be encountered in estimating the seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model using the database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivities to the source-receiver offset, vertical interval velocity error and time picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread length. However, this method is extremely sensitive to the time picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible that an isotropic formation can be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.

  20. Preliminary analysis of a 1:4 scale prestressed concrete containment vessel model

    International Nuclear Information System (INIS)

    Dameron, R.A.; Rashid, Y.R.; Luk, V.K.; Hessheimer, M.F.

    1997-01-01

    Sandia National Laboratories is conducting a research program to investigate the integrity of nuclear containment structures. As part of the program Sandia will construct an instrumented 1:4 scale model of a prestressed concrete containment vessel (PCCV) for pressurized water reactors (PWR), which will be pressure tested up to its ultimate capacity. One of the key program objectives is to develop validated methods to predict the structural performance of containment vessels when subjected to beyond design basis loadings. Analytical prediction of structural performance requires a stepwise, systematic approach that addresses all potential failure modes. The analysis effort includes two and three-dimensional nonlinear finite element analyses of the PCCV test model to evaluate its structural performance under very high internal pressurization. Such analyses have been performed using the nonlinear concrete constitutive model, ANACAP-U, in conjunction with the ABAQUS general purpose finite element code. The analysis effort is carried out in three phases: preliminary analysis; pretest prediction; and post-test data interpretation and analysis evaluation. The preliminary analysis phase serves to provide instrumentation support and identify candidate failure modes. The associated tasks include the preliminary prediction of failure pressure and probable failure locations and the development of models to be used in the detailed failure analyses. This paper describes the modeling approaches and some of the results obtained in the first phase of the analysis effort

  1. A global sensitivity analysis of crop virtual water content

    Science.gov (United States)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at 5x5 arc minute resolution and it improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
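
    The per-cell sensitivity index defined above is an elasticity; a sketch with a placeholder VWC model (the daily soil water balance is not reproduced, and the reference values and 5% perturbation are invented):

```python
import numpy as np

# Placeholder VWC "model": seasonal evapotranspiration divided by yield,
# standing in for the full daily soil water balance.
def vwc(et_daily, season_len, yield_t_ha):
    return et_daily * season_len / yield_t_ha   # schematic water-per-unit-yield

ref = dict(et_daily=4.0, season_len=120, yield_t_ha=3.0)   # assumed reference values
base = vwc(**ref)

for name in ref:
    pert = dict(ref)
    pert[name] = ref[name] * 1.05               # +5% one-at-a-time perturbation
    S = ((vwc(**pert) - base) / base) / 0.05    # relative change ratio (elasticity)
    print(f"sensitivity to {name}: {S:+.2f}")   # direct (+) for ET and season, inverse (-) for yield
```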

  2. Pilot Workload and Speech Analysis: A Preliminary Investigation

    Science.gov (United States)

    Bittner, Rachel M.; Begault, Durand R.; Christopher, Bonny R.

    2013-01-01

    Prior research has questioned the effectiveness of speech analysis for measuring the stress, workload, truthfulness, or emotional state of a talker. The question remains regarding the utility of speech analysis for restricted vocabularies such as those used in aviation communications. A part-task experiment was conducted in which participants performed Air Traffic Control read-backs in different workload environments. Participants' subjective workload and the speech qualities of fundamental frequency (F0) and articulation rate were evaluated. A significant increase in subjective workload rating was found for high workload segments. F0 was found to be significantly higher during high workload while articulation rates were found to be significantly slower. No correlation was found to exist between subjective workload and F0 or articulation rate.

  3. Sensitivity Analysis on Elbow Piping Components in Seismically Isolated NPP under Seismic Loading

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Hee Kun; Hahm, Dae Gi; Kim, Min Kyu [KAERI, Daejeon (Korea, Republic of); Jeon, Bub Gyu; Kim, Nam Sik [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

    In this study, the FE model is verified using specimen test results, and simulations with parameter variations are conducted. Effective parameters will be randomly sampled and used as input values for the simulations to be applied to the fragility analysis. Interface pipelines are representative of such components because they can undergo larger displacements when they are supported on both isolated and non-isolated structures simultaneously. Elbows in particular are critical components of pipes under severe loading conditions such as earthquake action, because strain accumulates in them during repeated bending of the pipe. Therefore, the seismic performance of pipe elbow components should be examined thoroughly based on fragility analysis. Fragility assessment of interface piping should take different sources of uncertainty into account. However, selecting the important sources and repeating tests with many random input values is very time consuming and expensive, so numerical analysis is commonly used. In the present study, a finite element (FE) model of the elbow component is validated using the dynamic test results of elbow components. Using the verified model, a sensitivity analysis is implemented as a preliminary step toward the seismic fragility assessment of the piping system. Several important input parameters are selected, and how their uncertainty is apportioned to the uncertainty of the elbow response is studied. Piping elbows are critical components under cyclic loading conditions as they are subjected to large displacements. In a seismically isolated NPP, the seismic capacity of the piping system should be evaluated with caution. Seismic fragility assessment requires a preliminary parameter sensitivity analysis of the output of interest with respect to different input parameter values.

  4. Preliminary analysis of productivity of fruiting fungi on Strzeleckie meadows

    Directory of Open Access Journals (Sweden)

    Barbara Sadowska

    2014-11-01

    Full Text Available Analysis demonstrated that the fresh and dry weight as well as the ash content of fungal fruit bodies collected on a forest-surrounded unmown meadow (Stellario-Deschampsietum Freitag 1957 and Caricetum elatae W.Koch 1926) were lower than the same values for a plot of exploited mown meadow and higher than on an exploited unmown meadow (Arrhenatheretum medioeuropaeum (Br.-Bl.) Oberd. 1952).

  5. Preliminary analysis on incore performance of nuclear fuel: pt. 4

    International Nuclear Information System (INIS)

    Noh, S.K.; Chang, M.H.; Lee, C.C.; Chung, Y.H.; Kuk, K.Y.; Park, C.Y.; Lee, S.K.

    1981-01-01

    An analysis has been performed for the thermal hydraulic design parameters of the Wolsung-1 reactor core in steady state with the help of the computer code COBRA-IV-I. The design parameters are coolant enthalpy, flow velocity, coolant quality, pressure and fuel temperature distribution. The maximum power channel has been taken into account in this work. The results agree reasonably well with data from the PSR's, with the maximum difference between this work and the PSR's being 4.3%

  6. Job Search Success in Local Labour Markets - A Preliminary Analysis

    OpenAIRE

    Greig, Malcolm; McQuaid, Ronald W.

    2001-01-01

    This study tests the appropriateness of current government employment policies, in particular the New Deal, in targeting specific groups of unemployed jobseekers. A sample of 169 unemployed jobseekers is divided into those who were successful and unsuccessful in finding employment and each group is analysed in terms of their attributes. A factor analysis of these attributes is then carried out in order to develop typical profiles of unsuccessful jobseekers who are possibly in need of special ...

  7. Long-term gas and brine migration at the Waste Isolation Pilot Plant: Preliminary sensitivity analyses for post-closure 40 CFR 268 (RCRA), May 1992

    International Nuclear Information System (INIS)

    1992-12-01

    This report describes preliminary probabilistic sensitivity analyses of long term gas and brine migration at the Waste Isolation Pilot Plant (WIPP). Because gas and brine are potential transport media for organic compounds and heavy metals, understanding two-phase flow in the repository and the surrounding Salado Formation is essential to evaluating long-term compliance with 40 CFR 268.6, which is the portion of the Land Disposal Restrictions of the Hazardous and Solid Waste Amendments to the Resource Conservation and Recovery Act that states the conditions for disposal of specified hazardous wastes. Calculations described here are designed to provide guidance to the WIPP Project by identifying important parameters and helping to recognize processes not yet modeled that may affect compliance. Based on these analyses, performance is sensitive to shaft-seal permeabilities, parameters affecting gas generation, and the conceptual model used for the disturbed rock zone surrounding the excavation. Brine migration is less likely to affect compliance with 40 CFR 268.6 than gas migration. However, results are preliminary, and additional iterations of uncertainty and sensitivity analyses will be required to provide the confidence needed for a defensible compliance evaluation. Specifically, subsequent analyses will explicitly include effects of salt creep and, when conceptual and computational models are available, pressure-dependent fracturing of anhydrite marker beds

  8. Preliminary analysis of knee stress in Full Extension Landing

    Directory of Open Access Journals (Sweden)

    Majid Davoodi Makinejad

    2013-09-01

    Full Text Available OBJECTIVE: This study provides an experimental and finite element analysis of knee-joint structure during extended-knee landing based on the extracted impact force, and it numerically identifies the contact pressure, stress distribution and possibility of bone-to-bone contact when a subject lands from a safe height. METHODS: The impact time and loads were measured via inverse dynamic analysis of free landing without knee flexion from three different heights (25, 50 and 75 cm), using five subjects with an average body mass index of 18.8. Three-dimensional data were developed from computed tomography scans and were reprocessed with modeling software before being imported and analyzed by finite element analysis software. The whole leg was considered to be a fixed middle-hinged structure, while impact loads were applied to the femur in an upward direction. RESULTS: Straight landing exerted an enormous amount of pressure on the knee joint as a result of the body's inability to utilize the lower extremity muscles, thereby maximizing the threat of injury when the load exceeds the height-safety threshold. CONCLUSIONS: The researchers conclude that extended-knee landing results in serious deformation of the meniscus and cartilage and increases the risk of bone-to-bone contact and serious knee injury when the load exceeds the threshold safety height. This risk is considerably greater than the risk of injury associated with walking downhill or flexion landing activities.

  9. Analysis for preliminary evaluation of discrete fracture flow and large-scale permeability in sedimentary rocks

    International Nuclear Information System (INIS)

    Kanehiro, B.Y.; Lai, C.H.; Stow, S.H.

    1987-05-01

    Conceptual models for sedimentary rock settings that could be used in future evaluation and suitability studies are being examined through the DOE Repository Technology Program. One area of concern for the hydrologic aspects of these models is discrete fracture flow analysis as related to the estimation of the size of the representative elementary volume, evaluation of the appropriateness of continuum assumptions and estimation of the large-scale permeabilities of sedimentary rocks. A basis for preliminary analysis of flow in fracture systems of the types that might be expected to occur in low permeability sedimentary rocks is presented. The approach used involves numerical modeling of discrete fracture flow for the configuration of a large-scale hydrologic field test directed at estimation of the size of the representative elementary volume and large-scale permeability. Analysis of fracture data on the basis of this configuration is expected to provide a preliminary indication of the scale at which continuum assumptions can be made

  10. Preliminary RAMI analysis of DFLL TBS for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dagui [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); University of Science and Technology of China, Hefei, Anhui, 230031 (China); Yuan, Run [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); Wang, Jiaqun, E-mail: jiaqun.wang@fds.org.cn [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); Wang, Fang; Wang, Jin [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China)

    2016-11-15

    Highlights: • We performed a functional analysis of the DFLL TBS. • We performed a failure mode analysis of the DFLL TBS. • We estimated the reliability and availability of the DFLL TBS. • The ITER RAMI approach was applied to the DFLL TBS for technical risk control in the design phase. - Abstract: ITER is the first fusion machine fully designed to prove the physics and technological basis for the next fusion power plants. Among the main technical objectives of ITER is to test and validate design concepts of tritium breeding blankets relevant to future fusion power plants. To achieve this goal, China has proposed the dual functional lithium-lead test blanket module (DFLL TBM) concept design. The DFLL TBM and its associated ancillary systems are called the DFLL TBS. The DFLL TBS plays a key role in the next fusion reactor. In order to ensure that the DFLL TBS is reliable and available, a risk control project for the DFLL TBS has been put on the schedule. As part of the ITER technical risk control policy, the RAMI (Reliability, Availability, Maintainability, Inspectability) approach was used to control the technical risk of ITER. In this paper, the RAMI approach was applied to the conceptual design of the DFLL TBS. A functional breakdown was prepared for the DFLL TBS, and the system was divided into 3 main functions and 72 basic functions. Based on the result of the functional breakdown of the DFLL TBS, reliability block diagrams were prepared to estimate the reliability and availability of each function under the stipulated operating conditions. The inherent availability of the DFLL TBS expected after implementation of mitigation actions was calculated to be 98.57% over 2 years based on the ITER reliability database. A Failure Modes, Effects and Criticality Analysis (FMECA) was performed with criticality charts highlighting the risk level of the different failure modes with regard to their probability of occurrence and their effects on availability.

  11. Preliminary safety analysis for key design features of KALIMER

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, D. H.; Kwon, Y. M.; Chang, W. P.; Suk, S. D.; Lee, S. O.; Lee, Y. B.; Jeong, K. S

    2000-07-01

    KAERI is currently developing the conceptual design of a liquid metal reactor, KALIMER (Korea Advanced Liquid Metal Reactor), under the long-term nuclear R and D program. In this report, descriptions of the KALIMER safety design features and safety analysis results for selected ATWS accidents are presented. First, the basic approach to achieving the safety goal is introduced in chapter 1, and the safety evaluation procedure for the KALIMER design is described in chapter 2. It includes event selection, event categorization, description of design basis events, and beyond design basis events. In chapter 3, results of inherent safety evaluations for the KALIMER conceptual design are presented. The KALIMER core and plant system are designed to assure design performance during a selected set of events without either reactor control or protection system intervention. Safety analyses for the postulated anticipated transients without scram (ATWS) have been performed to investigate the KALIMER system response to the events. They are categorized as bounding events (BEs) because of their low probability of occurrence. In chapter 4, the design of the KALIMER containment dome and the results of its performance analysis are presented. The designs of existing LMR containments and the KALIMER containment dome are compared in this chapter. The procedure of the containment performance analysis and the analysis results are described along with the accident scenario and source terms. Finally, a simple methodology is introduced to investigate the core kinetics and hydraulic behavior during an HCDA in chapter 5. Mathematical formulations have been developed in the framework of the modified Bethe-Tait method, and scoping analyses have been performed for the KALIMER core behavior during super-prompt critical excursions.

  12. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and operating conditions different from those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the results highlight the relevant role played by a modelling approach for MBRs that simultaneously takes into account biological and physical processes. © 2013.

  13. DDASAC, Double-Precision Differential or Algebraic Sensitivity Analysis

    International Nuclear Information System (INIS)

    Caracotsios, M.; Stewart, W.E.; Petzold, L.

    1997-01-01

    1 - Description of program or function: DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request. 2 - Method of solution: Reconciliation of initial conditions is done with a damped Newton algorithm adapted from Bain and Stewart (1991). Initial step selection is done by the first-order algorithm of Shampine (1987), extended here to differential-algebraic equation systems. The solution is continued with the DASSL predictor-corrector algorithm (Petzold 1983, Brenan et al. 1989) with the initial acceleration phase detected and with row scaling of the Jacobian added. The backward-difference formulas for the predictor and corrector are expressed in divided-difference form, and the fixed-leading-coefficient form of the corrector (Jackson and Sacks-Davis 1980, Brenan et al. 1989) is used. Weights for error tests are updated in each step with the user's tolerances at the predicted state. Sensitivity analysis is performed directly on the corrector equations as given by Caracotsios and Stewart (1985) and is extended here to the initialization when needed. 3 - Restrictions on the complexity of the problem: This algorithm, like DASSL, performs well on differential-algebraic systems of index 0 and 1 but not on higher-index systems; see Brenan et al. (1989). The user assigns the work array lengths and the output unit. The machine number range and precision are determined at run time by a

  14. Computer content analysis of schizophrenic speech: a preliminary report.

    Science.gov (United States)

    Tucker, G J; Rosenberg, S D

    1975-06-01

    Computer analysis significantly differentiated the thematic content of the free speech of 10 schizophrenic patients from that of 10 nonschizophrenic patients and from the content of transcripts of dream material from 10 normal subjects. Schizophrenic patients used the thematic categories in factor 1 (the "schizophrenic factor") 3 times more frequently than the nonschizophrenics and 10 times more frequently than the normal subjects (p < .01). In general, the language content of the schizophrenic patients mirrored an almost agitated attempt to locate oneself in time and space and to defend against internal discomfort and confusion. The authors discuss the implications of this study for future research.

  15. Preliminary report on the PIXE analysis of the squid statoliths

    International Nuclear Information System (INIS)

    Ikeda, Yuzuru; Arai, Nobuaki; Sakamoto, Wataru; Murayama, Tatsuro; Maeda, Kuniko; Yoshida, Koji.

    1996-01-01

    Micro trace elements in the squid statolith, a calcareous stone which serves as an organ of balance and hearing, were analyzed with Particle Induced X-ray Emission (PIXE) for the Japanese common squid for the first time. Calcium is the main component of the squid statolith, which means that the squid statolith is a purely calcified structure similar to the fish otolith. Besides Ca, Sr was detected at high levels, and some other elements such as Mn, Fe, Cu, Zn and As were also detected. Possible pathways of uptake of trace elements into the statoliths and the suitability of PIXE for statolith analysis are discussed. (author)

  16. Macroalgae as a Biomass Feedstock: A Preliminary Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Roesijadi, Guritno; Jones, Susanne B.; Snowden-Swan, Lesley J.; Zhu, Yunhua

    2010-09-26

    A thorough analysis of macroalgae as a biofuels feedstock is warranted due to the size of this biomass resource and the need to consider all potential sources of feedstock to meet current biomass production goals. Understanding how to harness this untapped biomass resource will require additional research and development. A detailed assessment of environmental resources, cultivation and harvesting technology, conversion to fuels, connectivity with existing energy supply chains, and the associated economic and life cycle analyses will facilitate evaluation of this potentially important biomass resource.

  17. Preliminary safety analysis report for the Waste Characterization Facility

    International Nuclear Information System (INIS)

    1994-10-01

    This safety analysis report outlines the safety concerns associated with the Waste Characterization Facility located in the Radioactive Waste Management Complex at the Idaho National Engineering Laboratory. The three main objectives of the report are to: define and document a safety basis for the Waste Characterization Facility activities; demonstrate how the activities will be carried out to adequately protect the workers, public, and environment; and provide a basis for review and acceptance of the identified risk that the managers, operators, and owners will assume. 142 refs., 38 figs., 39 tabs

  18. Uncertainty and sensitivity analysis of environmental transport models

    International Nuclear Information System (INIS)

    Margulies, T.S.; Lancaster, L.E.

    1985-01-01

    An uncertainty and sensitivity analysis has been made of the CRAC-2 (Calculations of Reactor Accident Consequences) atmospheric transport and deposition models. Robustness and uncertainty aspects of air and ground deposited material and the relative contributions of input and model parameters were systematically studied. The underlying data structures were investigated using a multiway layout of factors over specified ranges generated via a Latin hypercube sampling scheme. The variables selected in our analysis include: weather bin, dry deposition velocity, rain washout coefficient/rain intensity, duration of release, heat content, sigma-z (vertical) plume dispersion parameter, sigma-y (crosswind) plume dispersion parameter, and mixing height. To determine the contributors to the output variability (versus distance from the site), step-wise regression analyses were performed on transformations of the simulated spatial concentration patterns. 27 references, 2 figures, 3 tables
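
    The sampling-plus-regression workflow can be sketched as follows; the four factors, their ranges, and the toy response are invented for illustration, and ordinary least squares stands in for the stepwise procedure (this is not CRAC-2):

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(6)

# Latin hypercube over four dispersion/deposition-like factors, a toy
# ground-concentration response, and standardized regression coefficients (SRCs)
# as importance measures.
names = ["dep_velocity", "washout", "sigma_z", "mixing_height"]
lower = np.array([1e-3, 1e-5, 50.0, 300.0])
upper = np.array([1e-2, 1e-4, 500.0, 2000.0])
X = qmc.scale(qmc.LatinHypercube(d=4, seed=rng).random(n=200), lower, upper)

# Hypothetical ground concentration at a fixed downwind distance (toy model).
Y = (X[:, 0] * 1e4) / (X[:, 2] * X[:, 3]) * np.exp(-1e3 * X[:, 1])
Y *= np.exp(rng.normal(0.0, 0.1, 200))          # multiplicative "weather" noise

Z = (X - X.mean(0)) / X.std(0)                  # standardized inputs
y = np.log(Y)
y = (y - y.mean()) / y.std()                    # standardized (log-transformed) output
src, *_ = np.linalg.lstsq(Z, y, rcond=None)
for name, s in zip(names, src):
    print(f"{name:14s} SRC = {s:+.2f}")
```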

  19. Cross-covariance based global dynamic sensitivity analysis

    Science.gov (United States)

    Shi, Yan; Lu, Zhenzhou; Li, Zhao; Wu, Mengmeng

    2018-02-01

    For identifying the cross-covariance source of the dynamic output at each time instant for structural systems involving both input random variables and stochastic processes, a global dynamic sensitivity (GDS) technique is proposed. The GDS considers the effect of time-history inputs on the dynamic output. In the GDS, a cross-covariance decomposition is first developed to measure the contribution of the inputs to the output at different time instants, and an integration of the cross-covariance change over a specific time interval is employed to measure the whole contribution of the input to the cross-covariance of the output. Then, the GDS main effect indices and the GDS total effect indices can be easily defined after the integration, and they are effective in identifying the important inputs and the non-influential inputs on the cross-covariance of the output at each time instant, respectively. The established GDS analysis model has the same form as the classical ANOVA when it degenerates to the static case. After degeneration, the first order partial effect reflects the individual effects of inputs on the output variance, and the second order partial effect reflects the interaction effects on the output variance, which illustrates the consistency of the proposed GDS indices and the classical variance-based sensitivity indices. An MCS procedure and a Kriging surrogate method are developed to compute the proposed GDS indices. Several examples are introduced to illustrate the significance of the proposed GDS analysis technique and the effectiveness of the proposed solution.

  20. Complex finite element sensitivity method for creep analysis

    International Nuclear Information System (INIS)

    Gomez-Farias, Armando; Montoya, Arturo; Millwater, Harry

    2015-01-01

    The complex finite element method (ZFEM) has been extended to perform sensitivity analysis for mechanical and structural systems undergoing creep deformation. ZFEM uses a complex finite element formulation to provide shape, material, and loading derivatives of the system response, providing an insight into the essential factors which control the behavior of the system as a function of time. A complex variable-based quadrilateral user element (UEL) subroutine implementing the power law creep constitutive formulation was incorporated within the Abaqus commercial finite element software. The results of the complex finite element computations were verified by comparing them to the reference solution for the steady-state creep problem of a thick-walled cylinder in the power law creep range. A practical application of the ZFEM implementation to creep deformation analysis is the calculation of the skeletal point of a notched bar test from a single ZFEM run. In contrast, the standard finite element procedure requires multiple runs. The value of the skeletal point is that it identifies the location where the stress state is accurate, regardless of the certainty of the creep material properties. - Highlights: • A novel finite element sensitivity method (ZFEM) for creep was introduced. • ZFEM has the capability to calculate accurate partial derivatives. • ZFEM can be used for identification of the skeletal point of creep structures. • ZFEM can be easily implemented in a commercial software, e.g. Abaqus. • ZFEM results were shown to be in excellent agreement with analytical solutions
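
    The complex-variable differentiation idea underlying ZFEM can be illustrated with a few lines of Python (this is a sketch of the complex-step derivative, not the Abaqus UEL): the sensitivity of a power-law creep strain rate with respect to stress is recovered from the imaginary part of a complex-perturbed evaluation. The constants A and n are hypothetical.

```python
# Complex-step derivative of a power-law creep rate, eps_dot = A * sigma**n.
import numpy as np

A, n = 1.0e-20, 5.0          # power-law creep constants (illustrative)
sigma = 150.0                # stress, MPa
h = 1.0e-30                  # complex-step size; no subtractive cancellation

def creep_rate(s):
    return A * s**n

d_exact = A * n * sigma**(n - 1)                       # analytical derivative
d_cstep = np.imag(creep_rate(sigma + 1j * h)) / h      # complex-step derivative
print(f"exact: {d_exact:.6e}  complex-step: {d_cstep:.6e}")
```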

  1. Overview of hybrid subspace methods for uncertainty quantification, sensitivity analysis

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Bang, Youngsuk; Wang, Congjian

    2013-01-01

    Highlights: ► We overview the state-of-the-art in uncertainty quantification and sensitivity analysis. ► We overview new developments in above areas using hybrid methods. ► We give a tutorial introduction to above areas and the new developments. ► Hybrid methods address the explosion in dimensionality in nonlinear models. ► Representative numerical experiments are given. -- Abstract: The role of modeling and simulation has been heavily promoted in recent years to improve understanding of complex engineering systems. To realize the benefits of modeling and simulation, concerted efforts in the areas of uncertainty quantification and sensitivity analysis are required. The manuscript intends to serve as a pedagogical presentation of the material to young researchers and practitioners with little background on the subjects. We believe this is important as the role of these subjects is expected to be integral to the design, safety, and operation of existing as well as next generation reactors. In addition to covering the basics, an overview of the current state-of-the-art will be given with particular emphasis on the challenges pertaining to nuclear reactor modeling. The second objective will focus on presenting our own development of hybrid subspace methods intended to address the explosion in the computational overhead required when handling real-world complex engineering systems.

  2. Control strategies and sensitivity analysis of anthroponotic visceral leishmaniasis model.

    Science.gov (United States)

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2017-12-01

    This study proposes a mathematical model of an anthroponotic visceral leishmaniasis epidemic with a saturated infection rate and recommends different control strategies to manage the spread of this disease in the community. To do this, a model formulation is first presented to support these strategies, with quantification of transmission and intervention parameters. To understand the nature of the initial transmission of the disease, the reproduction number R0 is obtained by using the next-generation method. On the basis of sensitivity analysis of the reproduction number R0, four different control strategies are proposed for managing disease transmission. For quantification of the prevalence period of the disease, a numerical simulation for each strategy is performed and a detailed summary is presented. A disease-free state is obtained with the help of the control strategies. The threshold condition for global asymptotic stability of the disease-free state is found, and it is ascertained that the state is globally stable. On the basis of sensitivity analysis of the reproduction number, it is shown that the disease can be eradicated by using the proposed strategies.

  3. Sensitivity Analysis in Observational Research: Introducing the E-Value.

    Science.gov (United States)

    VanderWeele, Tyler J; Ding, Peng

    2017-08-15

    Sensitivity analysis is useful in assessing how robust an association is to potential unmeasured or uncontrolled confounding. This article introduces a new measure called the "E-value," which is related to the evidence for causality in observational studies that are potentially subject to confounding. The E-value is defined as the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need to have with both the treatment and the outcome to fully explain away a specific treatment-outcome association, conditional on the measured covariates. A large E-value implies that considerable unmeasured confounding would be needed to explain away an effect estimate. A small E-value implies little unmeasured confounding would be needed to explain away an effect estimate. The authors propose that in all observational studies intended to produce evidence for causality, the E-value be reported or some other sensitivity analysis be used. They suggest calculating the E-value for both the observed association estimate (after adjustments for measured confounders) and the limit of the confidence interval closest to the null. If this were to become standard practice, the ability of the scientific community to assess evidence from observational studies would improve considerably, and ultimately, science would be strengthened.
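
    A small worked example of the E-value computation, using the published formula E = RR + sqrt(RR × (RR − 1)) for a risk ratio above the null (the reciprocal is taken first for RR < 1); the numerical estimates below are illustrative only.

```python
# E-value for an observed risk ratio and for the confidence limit closest to the null.
import math

def e_value(rr):
    rr = 1.0 / rr if rr < 1.0 else rr      # work on the side above the null
    return rr + math.sqrt(rr * (rr - 1.0))

observed_rr = 1.8      # hypothetical adjusted estimate
ci_lower = 1.2         # hypothetical confidence limit closest to the null

print(f"E-value for estimate: {e_value(observed_rr):.2f}")   # 3.00 for RR = 1.8
# For the confidence limit closest to the null (set to 1 if the CI crosses 1):
print(f"E-value for CI limit: {(e_value(ci_lower) if ci_lower > 1 else 1.0):.2f}")
```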

  4. Nordic reference study on uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Hirschberg, S.; Jacobsson, P.; Pulkkinen, U.; Porn, K.

    1989-01-01

    This paper provides a review of the first phase of the Nordic reference study on uncertainty and sensitivity analysis. The main objective of this study is to use experiences from previous Nordic Benchmark Exercises and reference studies concerning critical modeling issues such as common cause failures and human interactions, and to demonstrate the impact of the associated uncertainties on the uncertainty of the investigated accident sequence. This has been done independently by three working groups which used different approaches to modeling and to uncertainty analysis. The estimated uncertainty interval for the analyzed accident sequence is large. The discrepancies between the groups are also substantial but can be explained. The sensitivity analyses that have been carried out concern, e.g., the use of different CCF quantification models, alternative handling of CCF data, time windows for operator actions and time dependences in phased mission operation, the impact of state-of-knowledge dependences, and the ranking of dominating uncertainty contributors. Specific findings with respect to these issues are summarized in the paper.

  5. Simple Sensitivity Analysis for Orion Guidance Navigation and Control

    Science.gov (United States)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
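
    A hedged sketch of the kind of screening such a tool performs (this is not the NASA Critical Factors Tool): bin one dispersed Monte Carlo input into quantiles and estimate the conditional probability of meeting a requirement in each bin; the inputs, threshold, and toy model are hypothetical.

```python
# Conditional success probability versus one dispersed input variable.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
thrust_dispersion = rng.normal(0.0, 1.0, n)      # hypothetical dispersed input
other_noise = rng.normal(0.0, 1.0, n)

touchdown_miss = 2.0 + 0.8 * thrust_dispersion + other_noise   # toy metric, km
requirement_met = touchdown_miss < 3.5                          # toy requirement

edges = np.quantile(thrust_dispersion, np.linspace(0.0, 1.0, 6))  # 5 bins
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (thrust_dispersion >= lo) & (thrust_dispersion <= hi)
    p = requirement_met[mask].mean()
    print(f"thrust dispersion in [{lo:+.2f}, {hi:+.2f}]: P(success) ~ {p:.2f}")
# A strong trend of P(success) across bins flags the input as a driving factor.
```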

  6. Deterministic sensitivity analysis for the numerical simulation of contaminants transport

    International Nuclear Information System (INIS)

    Marchand, E.

    2007-12-01

    The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
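
    A minimal sketch of the deterministic idea described above (not the ANDRA data or the Caml platform): build a local Jacobian of a toy contaminant-flux model by finite differences and take its singular value decomposition to find the input directions that dominate the response. The model and parameter names are hypothetical.

```python
# Local sensitivity via SVD of a finite-difference Jacobian.
import numpy as np

def toy_flux(p):
    # p = [permeability, dispersivity, sorption_kd]; returns two outlet fluxes
    k, alpha, kd = p
    f1 = k * np.exp(-kd) / (1.0 + alpha)
    f2 = 0.5 * k / (1.0 + 2.0 * alpha + kd)
    return np.array([f1, f2])

p0 = np.array([1.0e-6, 10.0, 0.5])
h = 1.0e-6 * np.abs(p0)

# Forward-difference Jacobian d(flux)/d(parameter), built column by column
J = np.column_stack([
    (toy_flux(p0 + dp) - toy_flux(p0)) / dp[i]
    for i, dp in enumerate(np.diag(h))
])

U, s, Vt = np.linalg.svd(J, full_matrices=False)
print("singular values:", s)
print("dominant input direction (first row of Vt):", Vt[0])
```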

  7. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    Full Text Available The paper presents a sensitivity analysis of two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences for large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows a quite good capability of the model in representing the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.

  8. Procedures for uncertainty and sensitivity analysis in repository performance assessment

    International Nuclear Information System (INIS)

    Poern, K.; Aakerlund, O.

    1985-10-01

    The objective of the project was mainly a literature study of available methods for the treatment of parameter uncertainty propagation and sensitivity aspects in complete models such as those concerning geologic disposal of radioactive waste. The study, which has run parallel with the development of a code package (PROPER) for computer-assisted analysis of function, also aims at the choice of accurate, cost-effective methods for uncertainty and sensitivity analysis. Such a choice depends on several factors, like the number of input parameters, the capacity of the model and the computer resources required to use the model. Two basic approaches are addressed in the report. In one of these the model of interest is directly simulated by an efficient sampling technique to generate an output distribution. In the other basic method the model is replaced by an approximating analytical response surface, which is then used in the sampling phase or in moment matching to generate the output distribution. Both approaches are illustrated by simple examples in the report. (author)
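
    The second approach mentioned above can be illustrated as follows (this is not the PROPER package): an expensive model is replaced by a fitted polynomial response surface, which is then sampled cheaply to approximate the output distribution. The stand-in model and input distributions are hypothetical.

```python
# Response-surface surrogate: fit on a few runs, then sample cheaply.
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(x1, x2):
    return 10.0 + 3.0 * x1 - 2.0 * x2 + 0.5 * x1 * x2   # stand-in function

# A small design of "expensive" model runs
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
y = expensive_model(x1, x2)

# Fit a quadratic-with-interaction response surface by least squares
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Cheap sampling phase on the surrogate
s1 = rng.normal(0.0, 0.5, 100_000)
s2 = rng.normal(0.0, 0.5, 100_000)
S = np.column_stack([np.ones_like(s1), s1, s2, s1 * s2, s1**2, s2**2])
y_surrogate = S @ beta
print(f"surrogate mean {y_surrogate.mean():.3f}, std {y_surrogate.std():.3f}")
```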

  9. Global sensitivity analysis for models with spatially dependent outputs

    International Nuclear Information System (INIS)

    Iooss, B.; Marrel, A.; Jullien, M.; Laurent, B.

    2011-01-01

    The global sensitivity analysis of a complex numerical model often calls for the estimation of variance-based importance measures, named Sobol' indices. Meta-model-based techniques have been developed in order to replace the CPU time-expensive computer code with an inexpensive mathematical function, which predicts the computer code output. The common meta-model-based sensitivity analysis methods are well suited for computer codes with scalar outputs. However, in the environmental domain, as in many areas of application, the numerical model outputs are often spatial maps, which may also vary with time. In this paper, we introduce an innovative method to obtain a spatial map of Sobol' indices with a minimal number of numerical model computations. It is based upon the functional decomposition of the spatial output onto a wavelet basis and the meta-modeling of the wavelet coefficients by the Gaussian process. An analytical example is presented to clarify the various steps of our methodology. This technique is then applied to a real hydrogeological case: for each model input variable, a spatial map of Sobol' indices is thus obtained. (authors)
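
    For readers unfamiliar with Sobol' indices, the sketch below shows a standard pick-freeze estimate of first-order indices for a toy scalar function; the paper's actual contribution (wavelet decomposition of the spatial output plus Gaussian-process meta-modeling of the coefficients) is not reproduced here.

```python
# Pick-freeze estimation of first-order Sobol' indices for a toy model.
import numpy as np

rng = np.random.default_rng(3)
n, d = 100_000, 3

def model(x):
    # Toy model with unequal input contributions
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 1.0 * x[:, 2]

A = rng.uniform(0.0, 1.0, (n, d))
B = rng.uniform(0.0, 1.0, (n, d))
yA, yB = model(A), model(B)
var_y = yA.var()

for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]            # "freeze" input i, resample the others
    Si = np.mean(yA * (model(ABi) - yB)) / var_y   # Saltelli-style estimator
    print(f"first-order Sobol index for x{i + 1}: {Si:.3f}")
```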

  10. Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion

    Science.gov (United States)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-09-01

    Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in various areas such as manufacturing and quality control, security and surveillance, or user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real-time and overcome shortcomings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high resolution video sensors. Time-of-Flight Sensor Fusion is a highly active field of research. Over the recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating and running a sophisticated Time-of-Flight sensor fusion capture system.

  11. Hydrocoin level 3 - Testing methods for sensitivity/uncertainty analysis

    International Nuclear Information System (INIS)

    Grundfelt, B.; Lindbom, B.; Larsson, A.; Andersson, K.

    1991-01-01

    The HYDROCOIN study is an international cooperative project for testing groundwater hydrology modelling strategies for performance assessment of nuclear waste disposal. The study was initiated in 1984 by the Swedish Nuclear Power Inspectorate and the technical work was finalised in 1987. The participating organisations are regulatory authorities as well as implementing organisations in 10 countries. The study has been performed at three levels aimed at studying computer code verification, model validation and sensitivity/uncertainty analysis respectively. The results from the first two levels, code verification and model validation, have been published in reports in 1988 and 1990 respectively. This paper focuses on some aspects of the results from Level 3, sensitivity/uncertainty analysis, for which a final report is planned to be published during 1990. For Level 3, seven test cases were defined. Some of these aimed at exploring the uncertainty associated with the modelling results by simply varying parameter values and conceptual assumptions. In other test cases statistical sampling methods were applied. One of the test cases dealt with particle tracking and the uncertainty introduced by this type of post processing. The amount of results available is substantial although unevenly spread over the test cases. It has not been possible to cover all aspects of the results in this paper. Instead, the different methods applied will be illustrated by some typical analyses. 4 figs., 9 refs

  12. Preliminary uranium enrichment analysis results using cadmium zinc telluride detectors

    International Nuclear Information System (INIS)

    Lavietes, A.D.; McQuaid, J.H.; Paulus, T.J.

    1995-01-01

    Lawrence Livermore National Laboratory (LLNL) and EG&G ORTEC have jointly developed a portable ambient-temperature detection system that can be used in a number of application scenarios. The detection system uses a planar cadmium zinc telluride (CZT) detector with custom-designed detector support electronics developed at LLNL and is based on the recently released MicroNOMAD multichannel analyzer (MCA) produced by ORTEC. Spectral analysis is performed using software developed at LLNL that was originally designed for use with high-purity germanium (HPGe) detector systems. In one application, the CZT detection system determines uranium enrichments ranging from less than 3% to over 75% to within accuracies of 20%. The analysis was performed using sample sizes of 200 g or larger and acquisition times of 30 min. The authors have demonstrated the capabilities of this system by analyzing the spectra gathered by the CZT detection system from uranium sources of several enrichments. These experiments demonstrate that current CZT detectors can, in some cases, approach performance criteria that were previously the exclusive domain of larger HPGe detector systems

  13. Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.

    Science.gov (United States)

    Kiparissides, A; Hatzimanikatis, V

    2017-01-01

    The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight into the regulatory functions and how to manipulate them. Constraint-based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based Flux Analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint-based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks the metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements.

  14. Yucca Mountain transportation routes: Preliminary characterization and risk analysis

    International Nuclear Information System (INIS)

    Souleyrette, R.R. II; Sathisan, S.K.; di Bartolo, R.

    1991-01-01

    In this study, rail and highway routes which may be used for shipments of high-level nuclear waste to a proposed repository at Yucca Mountain, Nevada are characterized. This characterization facilitates three types of impact analysis: comparative study, limited worst-case assessment, and more sophisticated probabilistic risk assessment techniques. Data for relative and absolute impact measures are provided to support comparisons of routes based on selected characteristics. A worst-case scenario assessment is included to determine potentially critical and most likely places for accidents or incidents to occur. The assessment facilitated by the data in this study is limited because impact measures are restricted to the identification of potential areas or persons affected. No attempt is made to quantify the magnitude of these impacts. Most likely locations for accidents to occur are determined relative to other locations within the scope of this study. Independent factors and historical trends used to identify these likely locations are only proxies for accident probability

  15. City of Hoboken Energy Surety Analysis: Preliminary Design Summary

    Energy Technology Data Exchange (ETDEWEB)

    Stamp, Jason Edwin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Military and Energy Systems Analysis Dept.; Baca, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Military and Energy Systems Analysis Dept.; Munoz-Ramos, Karina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Military and Energy Systems Analysis Dept.; Schenkman, Benjamin L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Military and Energy Systems Analysis Dept.; Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Systems Readiness and Sustainment Technology Dept.; Smith, Mark A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Systems Readiness and Sustainment Technology Dept.; Guttromson, Ross [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Electric Power Systems Research Dept.; Henry, Jordan M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Critical Infrastructure Systems Dept.; Jensen, Richard Pearson [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Geomechanics Dept.

    2014-09-01

    In 2012, Hurricane Sandy devastated much of the U.S. northeast coastal areas. Among those hardest hit was the small community of Hoboken, New Jersey, located on the banks of the Hudson River across from Manhattan. This report describes a city-wide electrical infrastructure design that uses microgrids and other infrastructure to ensure the city retains functionality should such an event occur in the future. The designs ensure that up to 55 critical buildings will retain power during blackout or flooded conditions and include analysis for microgrid architectures, performance parameters, system control, renewable energy integration, and financial opportunities (while grid connected). The results presented here are not binding and are subject to change based on input from the Hoboken stakeholders, the integrator selected to manage and implement the microgrid, or other subject matter experts during the detailed (final) phase of the design effort.

  16. The Σ − D relation for planetary nebulae: Preliminary analysis

    Directory of Open Access Journals (Sweden)

    Urošević D.

    2007-01-01

    Full Text Available An analysis of the relation between radio surface brightness and diameter, the so-called Σ − D relation, for planetary nebulae (PNe) is presented: (i) the theoretical Σ − D relation for the evolution of bremsstrahlung surface brightness is derived; (ii) contrary to the results obtained earlier for the Galactic supernova remnant (SNR) samples, our results show that the updated sample of Galactic PNe does not severely suffer from the volume selection effect - the Malmquist bias (same as for the extragalactic SNR samples); and (iii) we conclude that the empirical Σ − D relation for PNe derived in this paper is not useful for a valid determination of distances for all observed PNe with unknown distances.

  17. Sensitivity analysis for modules for various biosphere types

    International Nuclear Information System (INIS)

    Karlsson, Sara; Bergstroem, U.; Rosen, K.

    2000-09-01

    This study presents the results of a sensitivity analysis for the modules developed earlier for the calculation of ecosystem-specific dose conversion factors (EDFs). The report also includes a comparison between the probabilistically calculated mean values of the EDFs and values gained in deterministic calculations. An overview of the distribution of radionuclides between different environmental parts in the models is also presented. The radionuclides included in the study were 36Cl, 59Ni, 93Mo, 129I, 135Cs, 237Np and 239Pu, selected to represent various behaviour in the biosphere; some are of particular importance from the dose point of view. The deterministic and probabilistic EDFs showed a good agreement for most nuclides and modules. Exceptions from this occurred if very skew distributions were used for parameters of importance for the results. Only a minor amount of the released radionuclides was present in the model compartments for all modules, except for the agricultural land module. The differences between the radionuclides were not pronounced, which indicates that nuclide-specific parameters were of minor importance for the retention of radionuclides for the simulated time period of 10 000 years in those modules. The results from the agricultural land module showed a different pattern. Large amounts of the radionuclides were present in the solid fraction of the saturated soil zone. The high retention within this compartment makes the zone a potential source for future exposure. Differences between the nuclides due to element-specific Kd values could be seen. The amount of radionuclides present in the upper soil layer, which is the most critical zone for exposure to humans, was less than 1% for all studied radionuclides. The sensitivity analysis showed that the physical/chemical parameters were the most important in most modules, in contrast to the dominance of biological parameters in the uncertainty analysis. The only exception was the well module where

  18. A Preliminary Tsunami Vulnerability Analysis for Yenikapi Region in Istanbul

    Science.gov (United States)

    Ceren Cankaya, Zeynep; Suzen, Lutfi; Cevdet Yalciner, Ahmet; Kolat, Cagil; Aytore, Betul; Zaytsev, Andrey

    2015-04-01

    One of the main requirements during post-disaster recovery operations is to maintain proper transportation and fluent communication in the disaster areas. Ports and harbors are the main transportation hubs, which must work with proper performance at all times, especially after disasters. The resilience of coastal utilities after earthquakes and tsunamis has major importance for efficient and proper rescue and recovery operations soon after the disaster. Istanbul is a mega city with its various coastal utilities located on the north coast of the Sea of Marmara. In the Yenikapi region of Istanbul, there are critical coastal utilities and vulnerable coastal structures, and critical activities occur daily. Fishery ports, commercial ports, small craft harbors, passenger terminals of intercity maritime transportation, and waterfront commercial and/or recreational structures are some examples of coastal utilization which are vulnerable to marine disasters. Therefore, the vulnerability of the Yenikapi region of Istanbul to tsunamis or any other marine hazard is an important issue. In this study, a methodology for vulnerability analysis under tsunami attack is proposed, with applications to the Yenikapi region. In the study, the high resolution (1 m) GIS database of the Istanbul Metropolitan Municipality (IMM) is used and analyzed by using GIS implementation. The bathymetry and topography database and the vector dataset containing all buildings/structures/infrastructures in the study area are obtained for tsunami numerical modeling of the study area. GIS-based tsunami vulnerability assessment is conducted by applying Multi-criteria Decision Making Analysis (MCDA). The tsunami parameters from deterministically defined worst-case scenarios are computed from simulations using the tsunami numerical model NAMI DANCE. The vulnerability parameters in the region due to two different classifications, i) vulnerability of buildings/structures and ii) vulnerability of (human) evacuation

  19. Preliminary analysis of Psoroptes ovis transcriptome in different developmental stages

    Directory of Open Access Journals (Sweden)

    Man-Li He

    2016-11-01

    Full Text Available Background: Psoroptic mange is a chronic, refractory, contagious and infectious disease mainly caused by the mange mite Psoroptes ovis, which can infect horses, sheep, buffaloes, rabbits, other domestic animals, deer, wild camels, foxes, minks, lemurs, alpacas, elks and other wild animals. Features of the disease include intense pruritus and dermatitis, depilation and hyperkeratosis, which ultimately result in emaciation or death caused by secondary bacterial infections. The infestation is usually transmitted by close contact between animals. Psoroptic mange is widespread in the world. In this paper, the transcriptome of P. ovis is described following sequencing and analysis of transcripts from samples of larvae (i.e. the Pso_L group) and of nymphs and adults (i.e. the Pso_N_A group). The study describes differentially expressed genes (DEGs) and genes encoding allergens, which helps in understanding the biology of P. ovis and lays foundations for the development of vaccine antigens and drug target screening. Methods: The transcriptome of P. ovis was assembled and analyzed using bioinformatic tools. The unigenes of P. ovis from each developmental stage and the unigenes differentially expressed between developmental stages were compared with allergen protein sequences contained in the allergen database website to predict potential allergens. Results: We identified 38,836 unigenes, whose mean length was 825 bp. On the basis of sequence similarity with seven databases, a total of 17,366 unigenes were annotated. A total of 1,316 DEGs were identified, including 496 upregulated and 820 downregulated in the Pso_L group compared with the Pso_N_A group. We predicted 205 allergen genes in the two developmental stages similar to genes from other mites and ticks; of these, 14 were among the upregulated DEGs and 26 among the downregulated DEGs. Conclusion: This study provides a reference transcriptome of P. ovis in the absence of a reference genome. The analysis of DEGs and

  20. Preliminary analysis of a new IAEA lichen AQCS material

    International Nuclear Information System (INIS)

    Grass, F.; Bichler, M.; Dorner, J.; Ismail, S.; Kregshammer, P.; Zamini, S.; Gwozdz, R.

    2000-01-01

    Lichen with a higher content of interesting trace elements were analyzed by activation analysis and by X-RF measurements on pressed lichen samples. The activation analyses were performed in three different ways. Short-time AA in the Fast Irradiation and Measurement System: up to 580 mg of lichen were irradiated for 5-300 s in polyethylene containers; single spectra and spectra of 6 samples were summed up and evaluated. Longer irradiation at the ASTRA reactor: 2 h at 8E13/(s cm2); 100-150 mg of lichen were irradiated in quartz Suprasil vials. Longer irradiation at the Institute's TRIGA reactor: 6-7 h at 1.8E12/(s cm2); sample sizes of 7-48 g of lichen were irradiated in polyethylene containers, transferred after irradiation to new measurement containers and measured in a device constructed by Gwozdz. The X-RF analysis was performed with a Spectrace 5000 energy-dispersive X-ray fluorescence analyzer with a rhodium anode tube for excitation. From the activation analyses, the following elements were determined: Ag, Al, As, Au, Ba, Br, Ca, Cd, Ce, Cl, Co, Cr, Cs, Cu, Dy, Eu, Fe, Hf, Hg, I, K, La, Lu, Mg, Mn, Mo, Na, Nd, Ni, Rb, Sb, Sc, Se, Sr, Ta, Tb, Th, Ti, U, V, Yb, Zn. From the X-RF measurements, the elements Ag, Al, Ba, Br, Ca, Cd, Cu, Fe, I, K, Mg, Mn, P, Pb, Rb, S, Sb, Si, Sn, Sr, Ti, Y, Zn, and Zr were evaluated. From the X-RF data as well as from the AA data for samples of different weight, it is apparent that milling to a particle size of 200 μm is not sufficient for all elements, especially not for gold, cadmium, and cobalt, which may be present as nuggets or accessory heavy minerals. It is therefore advisable to mill the sample to a particle size which is an order of magnitude smaller and remove the non-adhering dust, even if this lowers the content of these elements. (author)

  1. Preliminary results of standard quantitative analysis by ED-XRF

    Energy Technology Data Exchange (ETDEWEB)

    Lara, Alessandro L. de; Franca, Alana C.; Neto, Manoel R.M.; Paschuk, Sergei A., E-mail: alellara@hotmail.com [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil). Dept. de Fisica; Denyak, Valeriy, E-mail: denyak@gmail.com [Instituto de Pesquisa Pele Pequeno Principe (IPPP), Curitiba, PR (Brazil)

    2013-07-01

    A comparison was performed between the elemental concentrations proposed by the XRS-FP software, using data obtained by the ED-XRF technique, and those obtained by stoichiometric calculation. For this purpose, five standard samples of known compounds were produced: two lead-oxide samples, plus magnesium chloride and iodine in controlled amounts. The compounds were subsequently mixed and compressed to form tablets. The samples were irradiated at three points, according to an orientation. The measurements were performed at the Radiological Laboratory of UTFPR using the Amptek X-123SDD detector and an X-ray tube with a silver target from the same manufacturer. The tube operating conditions were a 05 μA current at a 40 kV voltage. Finally, the 15 spectra were analyzed with the software to determine the concentrations of chlorine, iodine and lead. Data from this analysis were compared with the results expected from the stoichiometric calculations. The data provided by the program showed a convergence of results, indicating homogeneity of the samples. Compared to the stoichiometric calculation, a considerable discrepancy was found, which may be the result of a misconfiguration or contamination of the samples. Finally, a proposal was made for continuation of the work, in which an auxiliary calculation should be developed in the next step.
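
    The stoichiometric reference calculation against which the software results are compared can be illustrated as follows (magnesium chloride is used here; the "measured" value is a made-up placeholder, not data from the paper).

```python
# Expected elemental mass fractions from stoichiometry, compared to a
# hypothetical measured concentration.
ATOMIC_MASS = {"Mg": 24.305, "Cl": 35.453}   # g/mol, standard atomic weights

def mass_fractions(formula):
    # formula given as {element: count}, e.g. MgCl2 -> {"Mg": 1, "Cl": 2}
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return {el: ATOMIC_MASS[el] * n / total for el, n in formula.items()}

expected = mass_fractions({"Mg": 1, "Cl": 2})
measured_cl = 0.72          # hypothetical concentration reported by the software

print(f"expected Cl fraction in MgCl2: {expected['Cl']:.3f}")      # ~0.745
print(f"relative discrepancy: {abs(measured_cl - expected['Cl']) / expected['Cl']:.1%}")
```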

  2. A preliminary study of DTI Fingerprinting on stroke analysis.

    Science.gov (United States)

    Ma, Heather T; Ye, Chenfei; Wu, Jun; Yang, Pengfei; Chen, Xuhui; Yang, Zhengyi; Ma, Jingbo

    2014-01-01

    DTI (Diffusion Tensor Imaging) is a well-known MRI (Magnetic Resonance Imaging) technique which provides useful structural information about the human brain. However, a quantitative measure of the physiological variation among subtypes of ischemic stroke is not available. An automatic quantitative method for DTI analysis would enhance the application of DTI in clinics. In this study, we proposed a DTI Fingerprinting technology to quantitatively analyze white matter tissue, which was applied to stroke classification. The TBSS (Tract Based Spatial Statistics) method was employed to generate masks automatically. To evaluate the clustering performance of the automatic method, lesion ROIs (Regions of Interest) were manually drawn on the DWI images as a reference. The results from DTI Fingerprinting were compared with those obtained from the reference ROIs. They indicate that DTI Fingerprinting could identify different states of ischemic stroke and has promising potential to provide a more comprehensive measure of DTI data. Further development should be carried out to improve DTI Fingerprinting technology for clinical use.

  3. FFTF vertical sodium storage tank preliminary thermal analysis

    International Nuclear Information System (INIS)

    Irwin, J.J.

    1995-01-01

    In the FFTF Shutdown Program, sodium from the primary and secondary heat transport loops, Interim Decay Storage (IDS), and Fuel Storage Facility (FSF) will be transferred to four large storage tanks for temporary storage. Three of the storage tanks will be cylindrical vertical tanks having a diameter of 28 feet, height of 22 feet and fabricated from carbon steel. The fourth tank is a horizontal cylindrical tank but is not the subject of this report. The storage tanks will be located near the FFTF in the 400 Area and rest on a steel-lined concrete slab in an enclosed building. The purpose of this work is to document the thermal analyses that were performed to ensure that the vertical FFTF sodium storage tank design is feasible from a thermal standpoint. The key criterion for this analysis is the time to heat up the storage tank containing frozen sodium at ambient temperature to 400 F. Normal operating conditions include an ambient temperature range of 32 F to 120 F. A key parameter in the evaluation of the sodium storage tank is the type of insulation. The baseline case assumed six inches of calcium silicate insulation. An alternate case assumed refractory fiber (Cerablanket) insulation also with a thickness of six inches. Both cases assumed a total electrical trace heat load of 60 kW, with 24 kW evenly distributed on the bottom head and 36 kW evenly distributed on the tank side wall

  4. Social network analysis in identifying influential webloggers: A preliminary study

    Science.gov (United States)

    Hasmuni, Noraini; Sulaiman, Nor Intan Saniah; Zaibidi, Nerda Zura

    2014-12-01

    In recent years, the second generation of internet-based services, such as weblogs, has become an effective communication tool for publishing information on the Web. Weblogs have unique characteristics that deserve users' attention. Some webloggers have seen weblogs as an appropriate medium to initiate and expand a business. These webloggers, also known as direct profit-oriented webloggers (DPOWs), communicate and share knowledge with each other through social interaction. However, survivability is the main issue among DPOWs. Frequent communication with influential webloggers is one of the ways to survive as a DPOW. This paper aims to understand the network structure and identify influential webloggers within the network. Proper understanding of the network structure can assist us in knowing how information is exchanged among members and enhance survivability among DPOWs. Thirty DPOWs were involved in this study. The degree centrality and betweenness centrality measurements in Social Network Analysis (SNA) were used to examine the strength of relations and identify influential webloggers within the network. Thus, webloggers with the highest values of these measurements are considered the most influential webloggers in the network.
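
    The two SNA measures used in the study can be computed with networkx, as in the sketch below on a small made-up weblogger interaction network (the actual data set of 30 DPOWs is not reproduced here).

```python
# Degree and betweenness centrality on a toy weblogger interaction network.
import networkx as nx

# Hypothetical "who links to / comments on whom" interactions
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
         ("D", "E"), ("C", "E"), ("E", "F")]
G = nx.Graph(edges)

degree = nx.degree_centrality(G)            # normalized number of direct ties
betweenness = nx.betweenness_centrality(G)  # brokerage between other pairs

for blogger in sorted(G, key=lambda b: -betweenness[b]):
    print(f"{blogger}: degree {degree[blogger]:.2f}, "
          f"betweenness {betweenness[blogger]:.2f}")
# Webloggers scoring high on both measures are flagged as influential.
```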

  5. Preliminary results of standard quantitative analysis by ED-XRF

    International Nuclear Information System (INIS)

    Lara, Alessandro L. de; Franca, Alana C.; Neto, Manoel R.M.; Paschuk, Sergei A.

    2013-01-01

    A comparison was performed between the elemental concentrations proposed by the XRS-FP software, using data obtained by the ED-XRF technique, and those obtained by stoichiometric calculation. For this purpose, five standard samples of known compounds were produced: two lead-oxide samples, plus magnesium chloride and iodine in controlled amounts. The compounds were subsequently mixed and compressed to form tablets. The samples were irradiated at three points, according to an orientation. The measurements were performed at the Radiological Laboratory of UTFPR using the Amptek X-123SDD detector and an X-ray tube with a silver target from the same manufacturer. The tube operating conditions were a 05 μA current at a 40 kV voltage. Finally, the 15 spectra were analyzed with the software to determine the concentrations of chlorine, iodine and lead. Data from this analysis were compared with the results expected from the stoichiometric calculations. The data provided by the program showed a convergence of results, indicating homogeneity of the samples. Compared to the stoichiometric calculation, a considerable discrepancy was found, which may be the result of a misconfiguration or contamination of the samples. Finally, a proposal was made for continuation of the work, in which an auxiliary calculation should be developed in the next step

  6. Preliminary analysis of space mission applications for electromagnetic launchers

    Science.gov (United States)

    Miller, L. A.; Rice, E. E.; Earhart, R. W.; Conlon, R. J.

    1984-01-01

    The technical and economic feasibility of using electromagnetically launched (EML) payloads propelled from the Earth's surface to LEO, GEO, lunar orbit, or interplanetary space was assessed. Analyses of the designs of rail accelerators and coaxial magnetic accelerators show that each is capable of launching payloads of 800 kg or more to space. A hybrid launcher in which EML is used for the first 2 km/s followed by chemical rocket stages was also examined. A cost estimate study shows that one to two EML launches per day are needed to break even compared to a four-stage rocket. Development models are discussed for: (1) Earth orbital missions; (2) a lunar base supply mission; (3) a solar system escape mission; (4) Earth escape missions; (5) suborbital missions; (6) electromagnetic boost missions; and (7) space-based missions. Safety factors, environmental impacts, and EML systems analysis are discussed. Alternate systems examined include electrothermal thrusters, an EML rocket gun, an EML theta gun, and Soviet electromagnetic accelerators.

  7. A simplified procedure of linear regression in a preliminary analysis

    Directory of Open Access Journals (Sweden)

    Silvia Facchinetti

    2013-05-01

    Full Text Available The analysis of a large statistical data-set can be led by the study of a particularly interesting variable Y – the regressed variable – and an explicative variable X, chosen among the remaining variables, conjointly observed. The study gives a simplified procedure to obtain the functional link of the variables, y = y(x), by a partition of the data-set into m subsets, in which the observations are synthesized by location indices (mean or median) of X and Y. Polynomial models for y(x) of order r are considered to verify the characteristics of the given procedure; in particular we assume r = 1 and 2. The distributions of the parameter estimators are obtained by simulation, when the fitting is done for m = r + 1. Comparisons of the results, in terms of distribution and efficiency, are made with the results obtained by the ordinary least squares method. The study also gives some considerations on the consistency of the estimated parameters obtained by the given procedure.
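
    A sketch of the simplified procedure, under one reading of the record: sort by X, split the data into m = r + 1 subsets, summarize each subset by the medians of X and Y, and fit the order-r polynomial through those points. The data are simulated, and the OLS fit is shown only for comparison.

```python
# Partition-and-median fit versus ordinary least squares on simulated data.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 10.0, 3000)
y = 2.0 + 1.5 * x + rng.normal(0.0, 2.0, x.size)   # true line: 2 + 1.5 x

r = 1                     # polynomial order (straight line)
m = r + 1                 # number of subsets

order = np.argsort(x)
x_groups = np.array_split(x[order], m)
y_groups = np.array_split(y[order], m)
x_med = np.array([np.median(g) for g in x_groups])
y_med = np.array([np.median(g) for g in y_groups])

simplified = np.polyfit(x_med, y_med, r)     # exact fit through the m points
ols = np.polyfit(x, y, r)                    # ordinary least squares on all data
print("simplified (slope, intercept):", simplified)
print("OLS        (slope, intercept):", ols)
```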

  8. Preliminary Analysis of Slope Stability in Kuok and Surrounding Areas

    Directory of Open Access Journals (Sweden)

    Dewandra Bagus Eka Putra

    2016-12-01

    Full Text Available The degree of a slope is influenced by the condition of the rocks beneath the surface. On steep slopes, the amount of surface runoff and the energy of water transport are also increased. This is caused by greater gravity, in line with the tilt of the surface from the horizontal plane. In other words, more and more topsoil is eroded. When the slope becomes twice as steep, the amount of erosion per unit area becomes 2.0 - 2.5 times greater. Kuok and the surrounding area provide the road access between West Sumatra and Riau, which plays an important role in the economies of both provinces. The purpose of this study is to map the locations that have fairly steep slopes and the potential modes of landslides. Based on the SRTM data obtained, the roads in the Kuok area have a minimum elevation of +33 m and a maximum of +217.329 m. Rugged road conditions, with slopes ranging from 24.08° to 44.68°, cause this area to have frequent landslides. The result of a slope stability analysis of a slope near the Koto Panjang Water Power Plant indicates that the mode of active failure is toppling failure or rock fall, and that the potential zone of failure is in the central part of the slope.

  9. Electrical field of electrical appliances versus distance: A preliminary analysis

    International Nuclear Information System (INIS)

    Mustafa, Nur Badariah Ahmad; Nordin, Farah Hani; Ismail, Fakaruddin Ali Ahmad; Alkahtani, Ammar Ahmed; Balasubramaniam, Nagaletchumi; Hock, Goh Chin; Shariff, Z A M

    2013-01-01

    Every household electrical appliance that is plugged in emits an electric field even if it is not operating. The source into which the appliance is plugged and the components of the appliance both contribute to the electric field emission. The electric field may cause unknown disturbances to the environment and may also affect human health, and the effect might depend on the strength of the electric field emitted by the appliance. This paper investigates the strength of the electric field emitted by four different electrical appliances using a spectrum analyser. The strength is captured at three different distances, (i) 1 m, (ii) 2 m and (iii) 3 m, and the analysis of the electric field strength is based on these three distances. The measurement results show that the strength of the electric field is strongest when captured at 1 m and weakest at 3 m from the electrical appliance. The results show that the farther an object is located from the electrical appliance, the weaker the effect of the emitted field.

  10. Gas cooled fast reactor 2400 MWTh, status on the conceptual design studies and preliminary safety analysis

    International Nuclear Information System (INIS)

    Malo, J.Y.; Alpy, N.; Bentivoglio, F.

    2009-01-01

    The Gas cooled Fast Reactor (GFR) is considered by the French Commissariat a l'Energie Atomique as a promising concept, combining the benefits of a fast spectrum and high temperature, using helium as coolant. A status report on the preliminary viability of the GFR was issued at the end of 2007, ending the pre-conceptual design phase. A consistent overall systems arrangement was proposed, and a preliminary safety analysis based on operating transient calculations and a simplified PSA established global confidence in the feasibility and safety of this baseline concept. Its potential for attractive performance was pointed out. Compared to the more mature Sodium Fast Reactor technology, no demonstrator has ever been built, and the feasibility demonstration will require a longer lead time. The next main project milestone relates to the GFR viability, scheduled in 2012. The current studies consist of revisiting the reactor reference design options as selected at the end of 2007. Most of them are being consolidated by going into more depth in the analysis, and some possible alternatives are assessed. The paper gives a status on the latest studies performed on the core design and the corresponding neutronics and cycle performance, the Decay Heat Removal strategy and preliminary safety analysis, systems design and balance of plant. This paper is complementary to ICAPP'09 paper 9062, dealing with the Gas cooled Fast Reactor demonstrator ALLEGRO, and paper 9378, related to GFR transient analysis. (author)

  11. Sensitivity analysis of the terrestrial food chain model FOOD III

    International Nuclear Information System (INIS)

    Zach, Reto.

    1980-10-01

    As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis involved a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways. However, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathway are important. However, this is true for only a few of the parameters of the animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)

  12. Uncertainty and sensitivity analysis in nuclear accident consequence assessment

    International Nuclear Information System (INIS)

    Karlberg, Olof.

    1989-01-01

    This report contains the results of a four-year project carried out under research contracts with the Nordic Cooperation in Nuclear Safety and the National Institute for Radiation Protection. An uncertainty/sensitivity analysis methodology consisting of Latin hypercube sampling and regression analysis was applied to an accident consequence model. A number of input parameters were selected and the uncertainties related to these parameters were estimated within a Nordic group of experts. Individual doses, collective dose, health effects and their related uncertainties were then calculated for three release scenarios and for a representative sample of meteorological situations. For two of the scenarios the acute phase after an accident was simulated, and for one the long-term consequences. The most significant parameters were identified. The outer limits of the calculated uncertainty distributions are large and grow to several orders of magnitude for the low-probability consequences. The uncertainty in the expectation values is typically a factor of 2-5 (1 sigma). The variation in the model responses due to the variation of the weather parameters is roughly equal to the variation induced by parameter uncertainty. The most important parameters turned out to be different for each pathway of exposure, which could be expected. However, the overall most important parameters are the wet deposition coefficient and the shielding factors. A general discussion of the usefulness of uncertainty analysis in consequence analysis is also given. (au)

  13. Robust and sensitive analysis of mouse knockout phenotypes.

    Directory of Open Access Journals (Sweden)

    Natasha A Karp

    Full Text Available A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high throughput phenotyping programs. In addition, in a high throughput environment operational issues lead to controls not being measured on the same day as knockouts. We highlight how application of traditional methods, such as a Student's t-test or a 2-way ANOVA, in these situations gives flawed results and should not be used. We explore the use of mixed models using worked examples from the Sanger Mouse Genome Project, focusing on Dual-Energy X-Ray Absorptiometry data for the analysis of mouse knockout data, and compare them to a reference range approach. We show that mixed model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is a common phenotype; accounting for it will enhance the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models with in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically, as a method suitable for small batches which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained.
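
    A hedged sketch of the mixed-model analysis advocated above, using statsmodels: genotype and body weight enter as fixed effects and batch (assay date) as a random effect. The column names and simulated data are hypothetical, not the Sanger DEXA data set.

```python
# Mixed model with a random intercept per batch and body weight as covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_batches, per_batch = 12, 10
batch = np.repeat(np.arange(n_batches), per_batch)
genotype = rng.choice(["WT", "KO"], size=batch.size)
weight = rng.normal(25.0, 3.0, batch.size)
batch_effect = rng.normal(0.0, 0.5, n_batches)[batch]
bone_density = (0.05 + 0.002 * weight + 0.03 * (genotype == "KO")
                + batch_effect + rng.normal(0.0, 0.02, batch.size))

df = pd.DataFrame({"bone_density": bone_density, "genotype": genotype,
                   "weight": weight, "batch": batch})

# The random intercept per batch separates day-to-day variation from the
# genotype effect; weight is included as a fixed-effect covariate.
model = smf.mixedlm("bone_density ~ genotype + weight", df, groups=df["batch"])
result = model.fit()
print(result.summary())
```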

  14. Clinical registry for rheumatoid arthritis; a preliminary analysis

    International Nuclear Information System (INIS)

    Fakhr, A.; Hakim, F.; Zaidi, S.K.; Sharif, A.

    2017-01-01

    To establish a clinical registry for rheumatoid arthritis and delineate the most common symptoms that rheumatoid arthritis (RA) patients experience in our setup. Study Design: Cross-sectional study. Place and Duration of Study: The study was carried out at the Rheumatology Department of Military Hospital (MH) Rawalpindi during the period of Jan 2013 to Jun 2015. Material and Methods: A clinical registry for rheumatoid arthritis was developed as per the criteria jointly developed in 2010 by the American College of Rheumatology (ACR) and the European League Against Rheumatism (EULAR). Fifty-eight patients were registered after their informed consent and approval by the Military Hospital (MH) Rawalpindi ethical committee. Age, gender and relevant clinical parameters of RA patients were recorded on case report forms and stored for analysis in the RA registry in Excel 2010. The figures are reported as frequencies and percentages. Results: Multiple joint pains (48.28%), fever (24.14%) and morning stiffness of joints (22.41%) were the most common symptoms in RA patients. Other clinical manifestations included painful bilateral swollen joints (13.79%), pain in different parts of the body (10.34%), Raynaud's phenomenon (10.34%), malaise (8.62%), swollen body parts (8.62%), ulcers (8.62%), fatigue (6.90%), nodules on skin/elbow/interphalangeal joints (6.90%), deformities of fingers/hand (3.45%), redness of eyes (3.45%), body rash (3.45%), inability to walk (3.45%), cervical lymphadenopathy (1.72%), stiffness of spine (1.72%) and myalgias (1.72%). Conclusion: It is concluded that multiple joint pains, fever and morning stiffness of joints are the most common symptoms of RA patients. (author)

  15. Advanced analysis of finger-tapping performance: a preliminary study.

    Science.gov (United States)

    Barut, Cağatay; Kızıltan, Erhan; Gelir, Ethem; Köktürk, Fürüzan

    2013-06-01

    The finger-tapping test is a commonly employed quantitative assessment tool used to measure motor performance in the upper extremities. This task is a complex motion that is affected by external stimuli, mood and health status. The complexity of this task is difficult to explain with a single average intertap-interval value (the time difference between successive taps), which only provides general information and neglects the temporal effects of the aforementioned factors. This study evaluated the time course of average intertap-interval values and the patterns of variation in both the right and left hands of right-handed subjects using a computer-based finger-tapping system. Cross sectional study. Thirty-eight male individuals aged between 20 and 28 years (Mean±SD = 22.24±1.65) participated in the study. Participants were asked to perform a single-finger-tapping test over a 10-second test period. Only the results of the 35 right-handed (RH) participants were considered in this study. The test records the time of each tap and saves the data as the time differences between successive taps for further analysis. The average number of taps and the temporal fluctuation patterns of the intertap-intervals were calculated and compared. The variations in the intertap-interval were evaluated with the best curve fit method. An average tapping speed or tapping rate can reliably be defined for a single-finger tapping test by analysing the graphically presented data of the number of taps within the test period. However, a different presentation of the same data, namely the intertap-interval values, shows temporal variation as the number of taps increases. Curve fitting applications indicate that the variation has a biphasic nature. The measures obtained in this study reflect the complex nature of the finger-tapping task and are suggested to provide reliable information regarding hand performance. Moreover, the equation reflects both the variations in and the general
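
    A minimal sketch of the kind of curve fitting described above: a biphasic (two-exponential) model is fitted to a synthetic series of intertap intervals with SciPy. The functional form and the synthetic data are assumptions for illustration only; the original study selected its own best-fit curve.

```python
# Sketch: fit a biphasic curve to intertap-interval data (synthetic example).
import numpy as np
from scipy.optimize import curve_fit

def biphasic(n, a, tau1, b, tau2, c):
    """Two-exponential model of the intertap interval vs. tap number n."""
    return a * np.exp(-n / tau1) + b * (1 - np.exp(-n / tau2)) + c

# Synthetic 10-second tapping run: intervals around 170 ms with an early drift.
taps = np.arange(1, 60)
intervals = (biphasic(taps, 30, 5.0, 15, 25.0, 160)
             + np.random.default_rng(1).normal(0, 4, taps.size))

p0 = (20, 5, 10, 20, 150)                     # rough initial guesses (ms)
params, cov = curve_fit(biphasic, taps, intervals, p0=p0, maxfev=10000)
print(f"mean intertap interval: {intervals.mean():.1f} ms")
print("fitted biphasic parameters:", np.round(params, 2))
```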

  16. Diisocyanate emission from a paint product: a preliminary analysis.

    Science.gov (United States)

    Jarand, Curtis W; Akapo, Samuel O; Swenson, Lonie J; Kelman, Bruce J

    2002-07-01

    Exposure of workers to diisocyanates in the polyurethane foam manufacturing industry is well documented. However, very little quantitative data have been published on exposure to diisocyanates from the use of paints and coatings. The purpose of this study was to evaluate emission of 2,4-toluene diisocyanate, 2,6-toluene diisocyanate (2,6-TDI), and isophorone diisocyanate from a commercially available two-stage concrete coating and sealant. A laboratory model of an outdoor deck coating process was developed and diisocyanate concentrations determined by derivatization with 1-(2-methoxyphenyl)-piperazine and subsequent high performance liquid chromatographic analysis with UV detection. The detection limit for 2,4-toluene diisocyanate and 2,6-toluene diisocyanate urea derivatives was 0.6 microg TDI/gm wet product, and 0.54 microg IPDI/gm wet product for the isophorone diisocyanate urea derivative. No 2,4-toluene diisocyanate or isophorone diisocyanate was detected in the mixed product. A maximum mean 2,6-TDI emission rate of 0.32 microg of 2,6-TDI/gram of wet product applied/hour was observed for the 1-hour sampling time, 0.38 microg of 2,6-TDI/gram of wet product applied/hour was observed for the 5-hour sampling time, and 0.02 microg of 2,6-TDI/gram of wet product applied/hour was observed for the 15-hour sampling time. The decrease in rate of 2,6-TDI emission over the 15-hour period indicates that emission of 2,6-TDI is virtually complete after 5 hours. These emission rates should allow industrial hygienists to calculate exposures to isocyanates emitted from at least one curing sealant.

  17. Cognitive Task Analysis of Business Jet Pilots' Weather Flying Behaviors: Preliminary Results

    Science.gov (United States)

    Latorella, Kara; Pliske, Rebecca; Hutton, Robert; Chrenka, Jason

    2001-01-01

    This report presents preliminary findings from a cognitive task analysis (CTA) of business aviation piloting. Results describe challenging weather-related aviation decisions and the information and cues used to support these decisions. Further, these results demonstrate the role of expertise in business aviation decision-making in weather flying, and how weather information is acquired and assessed for reliability. The challenging weather scenarios and novice errors identified in the results provide the basis for experimental scenarios and dependent measures to be used in future flight simulation evaluations of candidate aviation weather information systems. Finally, we analyzed these preliminary results to recommend design and training interventions to improve business aviation decision-making with weather information. The primary objective of this report is to present these preliminary findings and to document the extended CTA methodology used to elicit and represent expert business aviator decision-making with weather information. These preliminary findings will be augmented with results from additional subjects using this methodology. A summary of the complete results, absent the detailed treatment of methodology provided in this report, will be documented in a separate publication.

  18. Overview of methods for uncertainty analysis and sensitivity analysis in probabilistic risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.; Helton, J.C.

    1985-01-01

    Probabilistic Risk Assessment (PRA) is playing an increasingly important role in the nuclear reactor regulatory process. The assessment of uncertainties associated with PRA results is widely recognized as an important part of the analysis process. One of the major criticisms of the Reactor Safety Study was that its representation of uncertainty was inadequate. The desire for the capability to treat uncertainties with the MELCOR risk code being developed at Sandia National Laboratories is indicative of the current interest in this topic. However, as yet, uncertainty analysis and sensitivity analysis in the context of PRA is a relatively immature field. In this paper, available methods for uncertainty analysis and sensitivity analysis in a PRA are reviewed. This review first treats methods for use with individual components of a PRA and then considers how these methods could be combined in the performance of a complete PRA. In the context of this paper, the goal of uncertainty analysis is to measure the imprecision in PRA outcomes of interest, and the goal of sensitivity analysis is to identify the major contributors to this imprecision. There are a number of areas that must be considered in uncertainty analysis and sensitivity analysis for a PRA: (1) information, (2) systems analysis, (3) thermal-hydraulic phenomena/fission product behavior, (4) health and economic consequences, and (5) display of results. Each of these areas and the synthesis of them into a complete PRA are discussed

  19. Sensitivity Analysis Of Financing Demand In Syariah Banking

    Directory of Open Access Journals (Sweden)

    DR. HJ. ROSYETTI

    2017-11-01

    Full Text Available This study aims to analyze the sensitivity of demand for financing in syariah banking, focusing on the price elasticity of financing demand, income elasticity and cross elasticity. The data used in this study are secondary, quantitative time-series data obtained from publications of BPS, BI and OJK. The data analysis begins by estimating multiple linear regression equations using the Eviews application, and then measures sensitivity using elasticities. The research variables consist of the revenue share (profit sharing), gross domestic product and the conventional bank interest rate as independent variables, and demand for financing as the dependent variable. The results show that the revenue share, gross domestic product and the interest rate of conventional banks simultaneously affect the demand for financing in Islamic banking at the 5% significance level (the probability value of the F statistic is below α = 0.05). Partially, the revenue share and gross domestic product have a significant effect on demand for financing, while the conventional bank interest rate does not have a significant effect on demand for financing in Islamic banking. The three independent variables are able to explain 99.06 of the variation in the dependent variable, with the remaining 0.04 influenced by other factors outside this study. The price elasticity of demand for financing in syariah banking during the observation period was 3.94 (ƐP > 1), so demand for financing in syariah banking can be said to be elastic. The income elasticity of demand for financing in syariah banking during the observation period was 3.08 (ƐI > 1), so financing is categorized as a luxury good. The cross elasticity of financing demand in syariah banking during the observation period was 0.52, i.e. positive (ƐC > 0), so the interest rate of a conventional bank can be categorized as a substitute for profit sharing.
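
    As a sketch of how such elasticities can be computed, the example below estimates a linear demand regression on invented data and evaluates point elasticities at the sample means. Variable names follow the abstract, but the data and coefficients are illustrative assumptions, not the study's estimates.

```python
# Sketch: point elasticities from a linear demand regression at sample means.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 60
profit_share = rng.uniform(5, 15, n)   # "price" of syariah financing (%)
gdp = rng.uniform(800, 1200, n)        # income proxy
conv_rate = rng.uniform(4, 10, n)      # conventional bank interest rate (%)
financing = (2.0 * profit_share + 0.8 * gdp + 1.5 * conv_rate
             + rng.normal(0, 20, n))

X = sm.add_constant(np.column_stack([profit_share, gdp, conv_rate]))
beta = sm.OLS(financing, X).fit().params   # [const, b_price, b_income, b_cross]

def elasticity(b, x, y):
    """Point elasticity of demand y with respect to x for a linear model."""
    return b * x.mean() / y.mean()

print("price elasticity :", round(elasticity(beta[1], profit_share, financing), 2))
print("income elasticity:", round(elasticity(beta[2], gdp, financing), 2))
print("cross elasticity :", round(elasticity(beta[3], conv_rate, financing), 2))
```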

  20. Crystallization and preliminary X-ray diffraction analysis of rat autotaxin

    International Nuclear Information System (INIS)

    Day, Jacqueline E.; Hall, Troii; Pegg, Lyle E.; Benson, Timothy E.; Hausmann, Jens; Kamtekar, Satwik

    2010-01-01

    Autotaxin (ATX), a pyrophosphatase/phosphodiesterase enzyme, is a promising drug target for many indications and is only distantly related to enzymes of previously determined structure. Here, the cloning, expression, purification, crystallization and preliminary diffraction analysis of ATX are reported. Rat autotaxin has been cloned, expressed, purified to homogeneity and crystallized via hanging-drop vapour diffusion using PEG 3350 as precipitant and ammonium iodide and sodium thiocyanate as salts. The crystals diffracted to a maximum resolution of 2.05 Å and belonged to space group P1, with unit-cell parameters a = 53.8, b = 63.3, c = 70.5 Å, α = 98.8, β = 106.2, γ = 99.8°. Preliminary X-ray diffraction analysis indicated the presence of one molecule per asymmetric unit, with a solvent content of 47%

  1. Relative risk analysis in regulating the use of radiation-emitting medical devices. A preliminary application

    Energy Technology Data Exchange (ETDEWEB)

    Jones, E.D.; Banks, W.W.; Altenbach, T.J.; Fischer, L.E. [Lawrence Livermore National Lab., CA (United States)

    1995-09-01

    This report describes a preliminary application of an analysis approach for assessing relative risks in the use of radiation- emitting medical devices. Results are presented on human-initiated actions and failure modes that are most likely to occur in the use of the Gamma Knife, a gamma irradiation therapy device. This effort represents an initial step in a US Nuclear Regulatory Commission (NRC) plan to evaluate the potential role of risk analysis in regulating the use of nuclear medical devices. For this preliminary application of risk assessment, the focus was to develop a basic process using existing techniques for identifying the most likely risk contributors and their relative importance. The approach taken developed relative risk rankings and profiles that incorporated the type and quality of data available and could present results in an easily understood form. This work was performed by the Lawrence Livermore National Laboratory for the NRC.

  2. Crystallization and preliminary X-ray diffraction analysis of West Nile virus

    International Nuclear Information System (INIS)

    Kaufmann, Bärbel; Plevka, Pavel; Kuhn, Richard J.; Rossmann, Michael G.

    2010-01-01

    Crystals of infectious West Nile virus were obtained and diffracted at best to about 25 Å resolution. Preliminary analysis of the diffraction pattern suggested tight hexagonal packing of the intact virus. West Nile virus, a human pathogen, is closely related to other medically important flaviviruses of global impact such as dengue virus. The infectious virus was purified from cell culture using polyethylene glycol (PEG) precipitation and density-gradient centrifugation. Thin amorphously shaped crystals of the lipid-enveloped virus were grown in quartz capillaries equilibrated by vapor diffusion. Crystal diffraction extended at best to a resolution of about 25 Å using synchrotron radiation. A preliminary analysis of the diffraction images indicated that the crystals had unit-cell parameters a ≃ b ≃ 480 Å, γ = 120°, suggesting a tight hexagonal packing of one virus particle per unit cell

  3. Preliminary Dynamic Feasibility and Analysis of a Spherical, Wind-Driven (Tumbleweed), Martian Rover

    Science.gov (United States)

    Flick, John J.; Toniolo, Matthew D.

    2005-01-01

    The process and findings are presented from a preliminary feasibility study examining the dynamics characteristics of a spherical wind-driven (or Tumbleweed) rover, which is intended for exploration of the Martian surface. The results of an initial feasibility study involving several worst-case mobility situations that a Tumbleweed rover might encounter on the surface of Mars are discussed. Additional topics include the evaluation of several commercially available analysis software packages that were examined as possible platforms for the development of a Monte Carlo Tumbleweed mission simulation tool. This evaluation led to the development of the Mars Tumbleweed Monte Carlo Simulator (or Tumbleweed Simulator) using the Vortex physics software package from CM-Labs, Inc. Discussions regarding the development and evaluation of the Tumbleweed Simulator, as well as the results of a preliminary analysis using the tool, are also presented. Finally, a brief conclusions section is presented.

  4. Preliminary X-ray analysis of twinned crystals of sarcosine dimethylglycine methyltransferase from Halorhodospira halochoris

    International Nuclear Information System (INIS)

    Kallio, Juha Pekka; Jänis, Janne; Nyyssölä, Antti; Hakulinen, Nina; Rouvinen, Juha

    2009-01-01

    The crystallization and preliminary X-ray diffraction analysis of sarcosine dimethylglycine methyltransferase from H. halochoris is reported. Sarcosine dimethylglycine methyltransferase (EC 2.1.1.157) is an enzyme from the extremely halophilic anaerobic bacterium Halorhodospira halochoris. This enzyme catalyzes the twofold methylation of sarcosine to betaine, with S-adenosylmethionine (AdoMet) as the methyl-group donor. This study presents the crystallization and preliminary X-ray analysis of recombinant sarcosine dimethylglycine methyltransferase produced in Escherichia coli. Mass spectroscopy was used to determine the purity and homogeneity of the enzyme material. Two different crystal forms, which initially appeared to be hexagonal and tetragonal, were obtained. However, on analyzing the diffraction data it was discovered that both crystal forms were pseudo-merohedrally twinned. The true crystal systems were monoclinic and orthorhombic. The monoclinic crystal diffracted to a maximum of 2.15 Å resolution and the orthorhombic crystal diffracted to 1.8 Å resolution

  5. Preliminary Disposal Analysis for Selected Accelerator Production of Tritium Waste Streams

    International Nuclear Information System (INIS)

    Ades, M.J.; England, J.L.

    1998-06-01

    A preliminary analysis was performed for two selected Accelerator Production of Tritium (APT) generated mixed and low-level waste streams to determine if one mixed low-level waste (MLLW) stream, which includes the Mixed Waste Lead (MWL), can be disposed of at the Nevada Test Site (NTS) and at the Hanford Site, and if one low-level radioactive waste (LLW) stream, which includes the Tungsten waste stream (TWS) generated by the Tungsten Neutron Source modules and used in the Target/Blanket cavity vessel, can be disposed of in the LLW Vaults at the Savannah River Plant (SRP). The preliminary disposal analysis showed that the radionuclide concentrations of the two selected APT waste streams are not in full compliance with the Waste Acceptance Criteria (WAC) and the Performance Assessment (PA) radionuclide limits of the disposal sites considered

  6. Relative risk analysis in regulating the use of radiation-emitting medical devices. A preliminary application

    International Nuclear Information System (INIS)

    Jones, E.D.; Banks, W.W.; Altenbach, T.J.; Fischer, L.E.

    1995-09-01

    This report describes a preliminary application of an analysis approach for assessing relative risks in the use of radiation- emitting medical devices. Results are presented on human-initiated actions and failure modes that are most likely to occur in the use of the Gamma Knife, a gamma irradiation therapy device. This effort represents an initial step in a US Nuclear Regulatory Commission (NRC) plan to evaluate the potential role of risk analysis in regulating the use of nuclear medical devices. For this preliminary application of risk assessment, the focus was to develop a basic process using existing techniques for identifying the most likely risk contributors and their relative importance. The approach taken developed relative risk rankings and profiles that incorporated the type and quality of data available and could present results in an easily understood form. This work was performed by the Lawrence Livermore National Laboratory for the NRC

  7. National Data Center Preparedness Exercise 2015 (NPE 2015): MY-NDC Preliminary Analysis Result

    International Nuclear Information System (INIS)

    Faisal Izwan Abdul Rashid; Muhammed Zulfakar Zolkaffly

    2016-01-01

    Malaysia established the CTBT National Data Centre (MY-NDC) in December 2005. MY-NDC is tasked to perform Comprehensive Nuclear-Test-Ban Treaty (CTBT) data management as well as to provide information on Treaty-related events to Nuclear Malaysia as the CTBT National Authority. In 2015, MY-NDC participated in the National Data Centre Preparedness Exercise 2015 (NPE 2015). This paper presents the MY-NDC preliminary analysis results for NPE 2015. In NPE 2015, MY-NDC performed five different analyses, namely radionuclide analysis, atmospheric transport modelling (ATM), data fusion, seismic analysis and site forensics. The preliminary findings show that the hypothetical scenario in NPE 2015 was most probably an uncontained event that resulted in a high release of radionuclides to the air. (author)

  8. Preliminary safety analysis of the HTTR-IS nuclear hydrogen production system

    International Nuclear Information System (INIS)

    Sato, Hiroyuki; Ohashi, Hirofumi; Tazawa, Yujiro; Tachibana, Yukio; Sakaba, Nariaki

    2010-06-01

    Japan Atomic Energy Agency is planning to demonstrate hydrogen production by thermochemical water-splitting IS process utilizing heat from the high-temperature gas-cooled reactor HTTR (HTTR-IS system). The previous study identified that the HTTR modification due to the coupling of hydrogen production plant requires an additional safety review since the scenario and quantitative values of the evaluation items would be altered from the original HTTR safety review. Hence, preliminary safety analyses are conducted by using the system analysis code. Calculation results showed that evaluation items such as a coolant pressure, temperatures of heat transfer tubes at the pressure boundary, etc., did not exceed allowable values. Also, the peak fuel temperature did not exceed allowable value and therefore the reactor core was not damaged and cooled sufficiently. This report compiles calculation conditions, event scenarios and the calculation results of the preliminary safety analysis. (author)

  9. Preliminary Hazards Analysis of K-Basin Fuel Encapsulation and Storage

    International Nuclear Information System (INIS)

    Strickland, G.C.

    1994-01-01

    This Preliminary Hazards Analysis (PHA) systematically examines the K-Basin facilities and their supporting systems for hazards created by abnormal operating conditions and external events (e.g., earthquakes) which have the potential for causing undesirable consequences to the facility worker, the onsite individual, or the public. The operational activities examined are fuel encapsulation, fuel storage and cooling. Encapsulation of sludges in the basins is not examined. A team of individuals from Westinghouse produced a set of Hazards and Operability (HAZOP) tables documenting their examination of abnormal process conditions in the systems and activities examined in K-Basins. The purpose of this report is to reevaluate and update the HAZOP in the original Preliminary Hazard Analysis of K-Basin Fuel Encapsulation and Storage originally developed in 1991

  10. Simplified procedures for fast reactor fuel cycle and sensitivity analysis

    International Nuclear Information System (INIS)

    Badruzzaman, A.

    1979-01-01

    The Continuous Slowing Down-Integral Transport Theory has been extended to perform criticality calculations in a Fast Reactor core-blanket system, achieving excellent prediction of the spectrum and the eigenvalue. The integral transport parameters did not need recalculation with source iteration and were found to be relatively constant with exposure. Fuel cycle parameters were accurately predicted when these were not varied, thus reducing a principal potential penalty of the Integral Transport approach, where considerable effort may be required to calculate transport parameters in more complicated geometries. The small variation of the spectrum in the central core region, and its weak dependence on exposure for this region, the core-blanket interface and the blanket region, led to the extension and development of inexpensive simplified procedures to complement exact methods. These procedures gave accurate predictions of the key fuel cycle parameters, such as cost, and their sensitivity to variation in spectrum-averaged and multigroup cross sections. They also predicted the implications of design variation on these parameters very well. The accuracy of these procedures and their use in analyzing a wide variety of sensitivities demonstrate the potential utility of survey calculations in Fast Reactor analysis and fuel management

  11. Relative sensitivity analysis of the predictive properties of sloppy models.

    Science.gov (United States)

    Myasnikova, Ekaterina; Spirov, Alexander

    2018-01-25

    Among the model parameters characterizing complex biological systems there are commonly some that do not significantly influence the quality of the fit to experimental data, the so-called "sloppy" parameters. The sloppiness can be mathematically expressed through saturating response functions (Hill's, sigmoid), thereby embodying biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used for the prediction of the system behavior at an altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to the ambiguity in the parameter estimates. We introduce a method for evaluating predictive power under parameter estimation uncertainty, Relative Sensitivity Analysis. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select among the existing solutions those providing the correct system behavior at any reasonable input. In general, the method allows the sources of incorrect predictions to be uncovered and suggests a way to overcome the estimation uncertainties.
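
    A toy sketch of the idea of relative (logarithmic) sensitivities for a saturating response: the same Hill-type function is nearly insensitive to its half-saturation and cooperativity parameters at saturating input but strongly sensitive at low input. This is purely illustrative and is not the authors' gene-circuit model.

```python
# Sketch: normalized local sensitivities d(ln y)/d(ln p) of a Hill response,
# evaluated at two input levels to show saturation-induced "sloppiness".
import numpy as np

def hill(x, vmax, k, n):
    return vmax * x**n / (k**n + x**n)

def rel_sensitivities(x, params, eps=1e-6):
    """Relative sensitivities d(ln y)/d(ln p) by central differences."""
    y0 = hill(x, *params)
    out = {}
    for i, name in enumerate(("vmax", "k", "n")):
        p_hi, p_lo = list(params), list(params)
        p_hi[i] *= 1 + eps
        p_lo[i] *= 1 - eps
        dy = hill(x, *p_hi) - hill(x, *p_lo)
        out[name] = (dy / y0) / (2 * eps)
    return out

params = (1.0, 0.5, 4.0)   # vmax, half-saturation k, Hill coefficient n
print("low input  (x=0.2):",
      {k: round(v, 3) for k, v in rel_sensitivities(0.2, params).items()})
print("high input (x=5.0):",
      {k: round(v, 3) for k, v in rel_sensitivities(5.0, params).items()})
```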

  12. Sensitivity analysis of energy demands on performance of CCHP system

    International Nuclear Information System (INIS)

    Li, C.Z.; Shi, Y.M.; Huang, X.H.

    2008-01-01

    A sensitivity analysis of energy demands is carried out in this paper to study their influence on the performance of a CCHP system. Energy demand is a very important and complex factor in the optimization model of a CCHP system. Average, uncertainty and historical peaks are adopted to describe energy demands. A mixed-integer nonlinear programming (MINLP) model which can reflect these three aspects of energy demands is established. Numerical studies are carried out based on the energy demands of a hotel and a hospital. The influence of the average, uncertainty and peaks of energy demands on the optimal facility scheme and the economic advantages of the CCHP system is investigated. The optimization results show that the optimal GT capacity and the economy of the CCHP system depend mainly on the average energy demands. The sum of the capacities of GB and HE is equal to the historical heating demand peak, and the sum of the capacities of AR and ER is equal to the historical cooling demand peak. The maximum of PG is sensitive to the historical peaks of energy demands and is not influenced by the uncertainty of energy demands, while the corresponding influence on DH is the reverse

  13. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of the solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
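
    For reference, a minimal sketch of the NRMSE metric cited above, applied to an invented forecast/baseline pair. Normalizing by plant capacity is an assumption here; other conventions divide by the mean or the range of the observations.

```python
# Sketch: normalized root mean squared error (NRMSE) of forecast power against
# a theoretical (baseline) power output, normalized by plant capacity.
import numpy as np

def nrmse(forecast, baseline, capacity_kw):
    """RMSE of the forecast error, normalized by plant capacity."""
    rmse = np.sqrt(np.mean((forecast - baseline) ** 2))
    return rmse / capacity_kw

baseline = np.array([0.0, 5.2, 18.7, 33.4, 41.0, 38.2, 22.5, 6.1])   # kW
forecast = np.array([0.0, 6.0, 16.9, 35.8, 44.1, 36.0, 20.3, 5.0])   # kW
print(f"NRMSE = {nrmse(forecast, baseline, capacity_kw=51.0):.3%}")
```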

  14. A Sensitivity Analysis Approach to Identify Key Environmental Performance Factors

    Directory of Open Access Journals (Sweden)

    Xi Yu

    2014-01-01

    Full Text Available Life cycle assessment (LCA) has been widely used in the design phase over the last two decades to reduce a product's environmental impacts across the whole product life cycle (PLC). Traditional LCA is restricted to assessing the environmental impacts of a product, and its results cannot reflect the effects of changes within the life cycle. In order to improve the quality of ecodesign, there is a growing need to develop an approach which can reflect the relationship between the design parameters and a product's environmental impacts. A sensitivity analysis approach based on LCA and ecodesign is proposed in this paper. The key environmental performance factors which have a significant influence on the product's environmental impacts can be identified by analyzing the relationship between environmental impacts and the design parameters. Users without much environmental knowledge can use this approach to determine which design parameter should be considered first when (re)designing a product. A printed circuit board (PCB) case study is conducted; eight design parameters are chosen to be analyzed by our approach. The result shows that the carbon dioxide emission during PCB manufacture is highly sensitive to the area of the PCB panel.

  15. Sludge Treatment Project Engineered Container Retrieval And Transfer System Preliminary Design Hazard Analysis Supplement 1

    International Nuclear Information System (INIS)

    Franz, G.R.; Meichle, R.H.

    2011-01-01

    This 'What/If' Hazards Analysis addresses natural phenomena hazards (NPH) and external events affecting the Sludge Treatment Project Engineered Container Retrieval and Transfer System (ECRTS) at the preliminary design stage. In addition, the hazards of the operation sequence steps for the mechanical handling operations are addressed: preparation of the Sludge Transport and Storage Container (STSC), disconnection of the STSC, and preparation of the STSC and the Sludge Transport System (STS) for shipping.

  16. Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module

    Energy Technology Data Exchange (ETDEWEB)

    Lee C. Cadwallader

    2010-06-01

    This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with “generic” component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance.

  17. Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module

    Energy Technology Data Exchange (ETDEWEB)

    Lee C. Cadwallader

    2007-08-01

    This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with “generic” component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance.

  18. Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module

    International Nuclear Information System (INIS)

    Lee C. Cadwallader

    2007-01-01

    This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with 'generic' component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance

  19. Sensitivity Study on Analysis of Reactor Containment Response to LOCA

    International Nuclear Information System (INIS)

    Chung, Ku Young; Sung, Key Yong

    2010-01-01

    As a reactor containment vessel is the final barrier to the release of radioactive material during design basis accidents (DBAs), its structural integrity must be maintained by withstanding the high pressure conditions resulting from DBAs. To verify the structural integrity of the containment, response analyses are performed to get the pressure transient inside the containment after DBAs, including loss of coolant accidents (LOCAs). The purpose of this study is to give regulative insights into the importance of input variables in the analysis of containment responses to a large break LOCA (LBLOCA). For the sensitivity study, a LBLOCA in Kori 3 and 4 nuclear power plant (NPP) is analyzed by CONTEMPT-LT computer code

  20. Sensitivity Study on Analysis of Reactor Containment Response to LOCA

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ku Young; Sung, Key Yong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2010-10-15

    As a reactor containment vessel is the final barrier to the release of radioactive material during design basis accidents (DBAs), its structural integrity must be maintained by withstanding the high pressure conditions resulting from DBAs. To verify the structural integrity of the containment, response analyses are performed to get the pressure transient inside the containment after DBAs, including loss of coolant accidents (LOCAs). The purpose of this study is to give regulative insights into the importance of input variables in the analysis of containment responses to a large break LOCA (LBLOCA). For the sensitivity study, a LBLOCA in Kori 3 and 4 nuclear power plant (NPP) is analyzed by CONTEMPT-LT computer code

  1. Emissivity compensated spectral pyrometry—algorithm and sensitivity analysis

    International Nuclear Information System (INIS)

    Hagqvist, Petter; Sikström, Fredrik; Christiansson, Anna-Karin; Lennartson, Bengt

    2014-01-01

    In order to solve the problem of non-contact temperature measurements on an object with varying emissivity, a new method is herein described and evaluated. The method uses spectral radiance measurements and converts them to temperature readings. It proves to be resilient towards changes in spectral emissivity and tolerates noisy spectral measurements. It is based on an assumption of smooth changes in emissivity and uses historical values of spectral emissivity and temperature for estimating current spectral emissivity. The algorithm, its constituent steps and accompanying parameters are described and discussed. A thorough sensitivity analysis of the method is carried out through simulations. No rigorous instrument calibration is needed for the presented method and it is therefore industrially tractable. (paper)
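
    The core step of such an emissivity-compensated scheme can be sketched as follows: invert Planck's law per wavelength using the previous emissivity estimate, combine the per-wavelength temperatures robustly, and update the emissivity from the new temperature. This is a simplified illustration of the idea, not the authors' algorithm, and the synthetic spectrum is an assumption.

```python
# Sketch of emissivity-compensated spectral pyrometry: Planck inversion with a
# previous emissivity estimate, followed by an emissivity update.
import numpy as np

C1 = 1.191042e-16   # 2*h*c^2  [W m^2 / sr]
C2 = 1.438777e-2    # h*c/k    [m K]

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam [m], temperature T [K]."""
    return C1 / lam**5 / (np.exp(C2 / (lam * T)) - 1.0)

def temperature_from_radiance(lam, radiance, emissivity):
    """Invert Planck's law given an emissivity estimate (per wavelength)."""
    return C2 / (lam * np.log(1.0 + emissivity * C1 / (lam**5 * radiance)))

# Synthetic measurement: a 1500 K surface with slowly varying emissivity.
lam = np.linspace(1.0e-6, 2.0e-6, 50)              # near-IR band [m]
true_eps = 0.8 + 0.05 * np.sin(lam * 2e6)
measured = true_eps * planck(lam, 1500.0)

eps_est = np.full_like(lam, 0.8)                   # emissivity from previous step
T_per_wavelength = temperature_from_radiance(lam, measured, eps_est)
T_est = np.median(T_per_wavelength)                # robust combination
eps_next = measured / planck(lam, T_est)           # updated emissivity estimate
print(f"estimated T = {T_est:.1f} K")
```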

  2. Sensitive Spectroscopic Analysis of Biomarkers in Exhaled Breath

    Science.gov (United States)

    Bicer, A.; Bounds, J.; Zhu, F.; Kolomenskii, A. A.; Kaya, N.; Aluauee, E.; Amani, M.; Schuessler, H. A.

    2018-06-01

    We have developed a novel optical setup which is based on a high-finesse cavity and absorption laser spectroscopy in the near-IR spectral region. In pilot experiments, spectrally resolved absorption measurements of biomarkers in exhaled breath, such as methane and acetone, were carried out using cavity ring-down spectroscopy (CRDS). With a 172-cm-long cavity, an effective optical path of 132 km was achieved. The CRDS technique is well suited for such measurements due to its high sensitivity and good spectral resolution. Detection limits of 8 ppbv for methane and 2.1 ppbv for acetone with a spectral sampling of 0.005 cm-1 were achieved, which allowed multicomponent gas mixtures to be analyzed and absorption peaks of 12CH4 and 13CH4 to be observed. Further improvements of the technique have the potential to realize diagnostics of health conditions based on a multicomponent analysis of breath samples.
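
    The cavity figures quoted above can be related by simple arithmetic: the effective optical path of a ring-down cavity is c·τ (τ being the ring-down time), equivalently L/(1 - R) for cavity length L and mirror reflectivity R. The sketch below merely works backwards from the stated 172 cm and 132 km; the implied reflectivity and ring-down time are rough estimates, not reported values.

```python
# Back-of-the-envelope arithmetic for the quoted CRDS cavity figures.
C = 2.998e8          # speed of light [m/s]
L = 1.72             # cavity length [m]
L_eff = 132e3        # stated effective optical path [m]

n_passes = L_eff / L                 # approximate number of cavity passes
R = 1.0 - 1.0 / n_passes             # implied mirror reflectivity
tau = L_eff / C                      # implied ring-down time [s]
print(f"passes ~ {n_passes:,.0f}, R ~ {R:.6f}, tau ~ {tau * 1e6:.0f} us")
```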

  3. Sequence length variation, indel costs, and congruence in sensitivity analysis

    DEFF Research Database (Denmark)

    Aagesen, Lone; Petersen, Gitte; Seberg, Ole

    2005-01-01

    The behavior of two topological and four character-based congruence measures was explored using different indel treatments in three empirical data sets, each with different alignment difficulties. The analyses were done using direct optimization within a sensitivity analysis framework in which the cost of indels was varied. Indels were treated either as a fifth character state, or strings of contiguous gaps were considered single events by using linear affine gap cost. Congruence consistently improved when indels were treated as single events, but no congruence measure appeared as the obviously preferable one. However, when combining enough data, all congruence measures clearly tended to select the same alignment cost set as the optimal one. Disagreement among congruence measures was mostly caused by a dominant fragment or a data partition that included all or most of the length variation...

  4. Preliminary crystallographic analysis of a possible transcription factor encoded by the mimivirus L544 gene

    International Nuclear Information System (INIS)

    Ciaccafava, Alexandre; Lartigue, Audrey; Mansuelle, Pascal; Jeudy, Sandra; Abergel, Chantal

    2011-01-01

    The mimivirus L544 gene product was expressed in E. coli and crystallized; preliminary phasing of a MAD data set was performed using the selenium signal present in a crystal of recombinant selenomethionine-substituted protein. Mimivirus is the prototype of a new family (the Mimiviridae) of nucleocytoplasmic large DNA viruses (NCLDVs), which already include the Poxviridae, Iridoviridae, Phycodnaviridae and Asfarviridae. Mimivirus specifically replicates in cells from the genus Acanthamoeba. Proteomic analysis of purified mimivirus particles revealed the presence of many subunits of the DNA-directed RNA polymerase II complex. A fully functional pre-transcriptional complex appears to be loaded in the virions, allowing mimivirus to initiate transcription within the host cytoplasm immediately upon infection independently of the host nuclear apparatus. To fully understand this process, a systematic study of mimivirus proteins that are predicted (by bioinformatics) or suspected (by proteomic analysis) to be involved in transcription was initiated by cloning and expressing them in Escherichia coli in order to determine their three-dimensional structures. Here, preliminary crystallographic analysis of the recombinant L544 protein is reported. The crystals belonged to the orthorhombic space group C222₁ with one monomer per asymmetric unit. A MAD data set was used for preliminary phasing using the selenium signal present in a selenomethionine-substituted protein crystal

  5. Global sensitivity analysis using low-rank tensor approximations

    International Nuclear Information System (INIS)

    Konakli, Katerina; Sudret, Bruno

    2016-01-01

    In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte-Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are confronted to the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model. - Highlights: • A new method is proposed for global sensitivity analysis of high-dimensional models. • Low-rank tensor approximations (LRA) are used as a meta-modeling technique. • Analytical formulas for the Sobol' indices in terms of LRA coefficients are derived. • The accuracy and efficiency of the approach is illustrated in application examples. • LRA-based indices are compared to indices based on polynomial chaos expansions.
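
    For context, the brute-force Monte Carlo reference that meta-model-based Sobol' indices are typically checked against can be sketched with a Saltelli-type pick-and-freeze estimator; the example below uses the standard Ishigami test function rather than the models of the study.

```python
# Sketch: Monte Carlo estimation of first-order and total Sobol' indices
# (Saltelli pick-and-freeze estimators) on the Ishigami test function.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(0)
N, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                             # vary only input x_i
    fAB = ishigami(AB)
    S1 = np.mean(fB * (fAB - fA)) / var_y          # first-order index
    ST = 0.5 * np.mean((fA - fAB) ** 2) / var_y    # total-effect index
    print(f"x{i + 1}: S1 ~ {S1:.3f}, ST ~ {ST:.3f}")
```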

  6. EH AND S ANALYSIS OF DYE-SENSITIZED PHOTOVOLTAIC SOLAR CELL PRODUCTION

    International Nuclear Information System (INIS)

    BOWERMAN, B.; FTHENAKIS, V.

    2001-01-01

    Photovoltaic solar cells based on a dye-sensitized nanocrystalline titanium dioxide photoelectrode have been researched and reported since the early 1990's. Commercial production of dye-sensitized photovoltaic solar cells has recently been reported in Australia. In this report, current manufacturing methods are described, and estimates are made of annual chemical use and emissions during production. Environmental, health and safety considerations for handling these materials are discussed. This preliminary EH and S evaluation of dye-sensitized titanium dioxide solar cells indicates that some precautions will be necessary to mitigate hazards that could result in worker exposure. Additional information required for a more complete assessment is identified

  7. Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    The code WIMS-AECL has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to the choice of a number of input parameters, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and to burnup analysis. We have studied this sensitivity with respect to these parameters and recommend proper values for them, which are necessary for carrying out the lattice analysis of DUPIC fuel.

  8. Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    The code WIMS-AECL has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to the choice of a number of input parameters, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and to burnup analysis. We have studied this sensitivity with respect to these parameters and recommend proper values for them, which are necessary for carrying out the lattice analysis of DUPIC fuel.

  9. An Overview of the Design and Analysis of Simulation Experiments for Sensitivity Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2004-01-01

    Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models.This review surveys classic and modern designs for experiments with simulation models.Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc.These designs assume a

  10. Preliminary CFD Analysis for HVAC System Design of a Containment Building

    Energy Technology Data Exchange (ETDEWEB)

    Son, Sung Man; Choi, Choengryul [ELSOLTEC, Yongin (Korea, Republic of); Choo, Jae Ho; Hong, Moonpyo; Kim, Hyungseok [KEPCO Engineering and Construction, Gimcheon (Korea, Republic of)

    2016-10-15

    HVAC (Heating, Ventilation, Air Conditioning) systems have mainly been designed based on overall heat balance and averaging concepts, which are simple and useful for designing the overall system. However, such a method has the disadvantage that it cannot predict the local flow and temperature distributions in a containment building. In this study, a preliminary CFD (Computational Fluid Dynamics) analysis is carried out to obtain detailed flow and temperature distributions in a containment building and to ensure that such information can be obtained via CFD analysis. This approach can also be useful for hydrogen analysis in an accident in which hydrogen is released into a containment building. We confirmed that CFD analysis can offer sufficiently detailed information about flow patterns and the temperature field, and that the CFD technique is a useful tool for the HVAC design of nuclear power plants.

  11. Relative performance of academic departments using DEA with sensitivity analysis.

    Science.gov (United States)

    Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P

    2009-05-01

    The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor, including education. Educational institutions have to adopt new strategies to make the best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper attempts to evaluate the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries such as the USA, UK and Australia, but to the best of our knowledge this is the first time it has been used in the Indian context. Applying DEA models, we calculate technical, pure technical and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance and teaching performance are assessed separately using sensitivity analysis.
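
    The envelopment form of the input-oriented CCR model used in such DEA studies can be sketched as a small linear program. The department inputs and outputs below are invented solely to show the mechanics; pure technical and scale efficiencies would additionally require the BCC (variable returns to scale) model, which is not shown here.

```python
# Sketch: input-oriented CCR (constant returns to scale) DEA efficiency
# computed with scipy's linear programming solver.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (departments); inputs: staff, budget; outputs: papers, graduates
X = np.array([[20, 1.2], [35, 2.0], [15, 0.8], [40, 2.5], [25, 1.5]], float)
Y = np.array([[60, 30], [80, 55], [40, 28], [70, 60], [65, 45]], float)
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(k):
    """Technical efficiency of DMU k (envelopment form): min theta."""
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # decision vars: [theta, lambdas]
    A_ub, b_ub = [], []
    for i in range(m):                            # sum_j lam_j x_ij <= theta * x_ik
        A_ub.append(np.concatenate(([-X[k, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):                            # sum_j lam_j y_rj >= y_rk
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

for k in range(n):
    print(f"department {k}: efficiency = {ccr_efficiency(k):.3f}")
```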

  12. Preliminary hazard analysis for the Brayton Isotope Ground Demonstration System (including vacuum test chamber)

    International Nuclear Information System (INIS)

    Miller, L.G.

    1975-01-01

    The Preliminary Hazard Analysis (PHA) of the BIPS-GDS is a tabular summary of hazards and undesired events which may lead to system damage or failure and/or hazard to personnel. The PHA reviews the GDS as it is envisioned to operate in the Vacuum Test Chamber (VTC) of the GDS Test Facility. The VTC and other equipment which will comprise the test facility are presently in an early stage of preliminary design and will undoubtedly undergo numerous changes before the design is frozen. The PHA and the FMECA to follow are intended to aid the design effort by identifying areas of concern which are critical to the safety and reliability of the BIPS-GDS and test facility

  13. Expression, purification, crystallization and preliminary crystallographic analysis of the proliferation-associated protein Ebp1

    International Nuclear Information System (INIS)

    Kowalinski, Eva; Bange, Gert; Wild, Klemens; Sinning, Irmgard

    2007-01-01

    Preliminary X-ray analysis of the proliferation-associated protein Ebp1 from Homo sapiens is provided. ErbB-3-binding protein 1 (Ebp1) is a member of the family of proliferation-associated 2G4 proteins (PA2G4s) and plays a role in cellular growth and differentiation. Ligand-induced activation of the transmembrane receptor ErbB3 leads to dissociation of Ebp1 from the receptor in a phosphorylation-dependent manner. The non-associated protein is involved in transcriptional and translational regulation in the cell. Here, the overexpression, purification, crystallization and preliminary crystallographic studies of Ebp1 from Homo sapiens are reported. Initially observed crystals were improved by serial seeding to single crystals suitable for data collection. The optimized crystals belong to the tetragonal space group P4₁2₁2 or P4₃2₁2 and diffracted to a resolution of 1.6 Å

  14. Most significant preliminary results of the probabilistic safety analysis on the Juragua nuclear power plant

    International Nuclear Information System (INIS)

    Perdomo, Manuel

    1995-01-01

    Since 1990 the Group for PSA Development and Applications (GDA/APS) has been working on the Level-1 PSA for the Juragua-1 NPP, as part of an IAEA Technical Assistance Project. The main objective of this study, which is still under way, is to assess, in a preliminary way, the safety of the reactor design and to find its potential 'weak points' at the construction stage, using a generic data base. At the same time, the study allows the PSA team to become familiar with the plant design and with analysis techniques for the future operational PSA of the plant. This paper presents the most significant preliminary results of the study, which reveal some advantages of the safety characteristics of the plant design in comparison with homologous VVER-440 reactors, and some areas where including slight modifications would improve the plant safety, considering the level of detail at which the study is carried out. (author). 13 refs, 1 fig, 2 tabs

  15. PDASAC, Partial Differential Sensitivity Analysis of Stiff System

    International Nuclear Information System (INIS)

    Caracotsios, M.; Stewart, W.E.

    2001-01-01

    1 - Description of program or function: PDASAC solves stiff, nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Plane, circular cylindrical or spherical boundaries can be handled. Mixed-order systems of partial differential and algebraic equations can be analyzed with members of order 0 or 1 in t and 0, 1 or 2 in x. Parametric sensitivities of the calculated states are computed simultaneously on request, via the Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the parametric sensitivities if desired. 2 - Method of solution: The method of lines is used, with a user-selected x-grid and minimum-bandwidth finite-difference approximations of the x-derivatives. Starting conditions are reconciled with a damped Newton algorithm adapted from Bain and Stewart (1991). Initial step selection is done by the first-order algorithms of Shampine (1987), extended here to differential-algebraic equation systems. The solution is continued with the DASSL predictor-corrector algorithm (Petzold 1983, Brenan et al. 1989) with the initial acceleration phase deleted and with row scaling of the Jacobian added. The predictor and corrector are expressed in divided-difference form, with the fixed-leading-coefficient form of corrector (Jackson and Sacks-Davis 1989; Brenan et al. 1989). Weights for the error tests are updated in each step with the user's tolerances at the predicted state. Sensitivity analysis is performed directly on the corrector equations of Caracotsios and Stewart (1985) and is extended here to the initialization when needed. 3 - Restrictions on the complexity of the problem: This algorithm, like DASSL, performs well on differential-algebraic equation systems of index 0 and 1 but not on higher-index systems; see Brenan et al. (1989). The user assigned the work array lengths and the output

  16. Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators

    Science.gov (United States)

    Sykes, J. F.; Wilson, J. L.; Andrews, R. W.

    1985-03-01

    Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Alternatively, local velocity related performance measures are more sensitive to hydraulic conductivities.
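
    The discrete analogue of the adjoint calculation described above can be sketched for a one-dimensional steady confined-flow model: one extra adjoint solve yields the sensitivity of a head-based performance measure to every cell conductivity. The grid, boundary heads and conductivities below are illustrative assumptions, not the Paradox Basin model.

```python
# Sketch: discrete adjoint sensitivity of a head-based performance measure
# for a 1-D steady confined-flow model (A h = b, A and b linear in K).
import numpy as np

def assemble(K, dx, h_left, h_right):
    """Tridiagonal system A h = b for interior heads, with face conductances K/dx."""
    n = K.size - 1                       # number of interior nodes
    c = K / dx                           # conductance of each cell face
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = c[i] + c[i + 1]
        if i > 0:
            A[i, i - 1] = -c[i]
        if i < n - 1:
            A[i, i + 1] = -c[i + 1]
    b[0] += c[0] * h_left
    b[-1] += c[-1] * h_right
    return A, b

dx, h_left, h_right = 100.0, 10.0, 2.0
K = np.array([5.0, 3.0, 8.0, 2.0, 6.0])          # face conductivities
A, b = assemble(K, dx, h_left, h_right)
h = np.linalg.solve(A, b)

# Performance measure: head at the middle interior node, P = g^T h
g = np.zeros(h.size); g[h.size // 2] = 1.0
lam = np.linalg.solve(A.T, g)                    # adjoint state

# dP/dK_j = lam^T (db/dK_j - dA/dK_j h); assemble() is linear in K, so its
# derivatives are obtained by assembling with a unit vector in place of K.
dP = np.empty(K.size)
for j in range(K.size):
    dK = np.zeros(K.size); dK[j] = 1.0
    dA, db = assemble(dK, dx, h_left, h_right)
    dP[j] = lam @ (db - dA @ h)
print("adjoint sensitivities dP/dK:", np.round(dP, 4))
```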

  17. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance

  18. An Application of Monte-Carlo-Based Sensitivity Analysis on the Overlap in Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    S. Razmyan

    2012-01-01

    Full Text Available Discriminant analysis (DA is used for the measurement of estimates of a discriminant function by minimizing their group misclassifications to predict group membership of newly sampled data. A major source of misclassification in DA is due to the overlapping of groups. The uncertainty in the input variables and model parameters needs to be properly characterized in decision making. This study combines DEA-DA with a sensitivity analysis approach to an assessment of the influence of banks’ variables on the overall variance in overlap in a DA in order to determine which variables are most significant. A Monte-Carlo-based sensitivity analysis is considered for computing the set of first-order sensitivity indices of the variables to estimate the contribution of each uncertain variable. The results show that the uncertainties in the loans granted and different deposit variables are more significant than uncertainties in other banks’ variables in decision making.

  19. Sensitivity analysis of alkaline plume modelling: influence of mineralogy

    International Nuclear Information System (INIS)

    Gaboreau, S.; Claret, F.; Marty, N.; Burnol, A.; Tournassat, C.; Gaucher, E.C.; Munier, I.; Michau, N.; Cochepin, B.

    2010-01-01

    Document available in extended abstract form only. In the context of a disposal facility for radioactive waste in a clayey geological formation, an important modelling effort has been carried out to predict the time evolution of interacting cement-based (concrete or cement) and clay (argillites and bentonite) materials. The large number of modelling input parameters, associated with non-negligible uncertainties, often makes the interpretation of modelling results difficult. It is therefore necessary to carry out sensitivity analyses on the main modelling parameters. In a recent study, Marty et al. (2009) demonstrated that numerical mesh refinement and consideration of dissolution/precipitation kinetics have a marked effect on (i) the time necessary to numerically clog the initial porosity and (ii) the final mineral assemblage at the interface. In contrast, these input parameters have little effect on the extension of the alkaline pH plume. In the present study, we investigate the effects of the assumed initial mineralogy on the principal simulation outputs: (1) the extension of the high-pH plume, (2) the time to clog the porosity and (3) the alteration front in the clay barrier (extension and nature of the mineralogical changes). This was done through a sensitivity analysis on both the concrete composition and the clay mineralogical assemblages, since most published studies consider either a single composition per material or a simplified mineralogy in order to reduce calculation times. 1D Cartesian reactive transport models were run to point out the importance of (1) the crystallinity of the concrete phases, (2) the type of clayey material and (3) the choice of secondary phases allowed to precipitate during the calculations. Two concrete materials with either nanocrystalline or crystalline phases were simulated in contact with two clayey materials (smectite MX80 or Callovo-Oxfordian argillites). Both
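
    The factorial design implied by this sensitivity study (two concrete crystallinity states, two clayey materials, and two choices of allowed secondary phases) can be sketched as a simple scenario loop. The run_reactive_transport wrapper and its outputs below are hypothetical placeholders for an actual 1D reactive transport code, not the authors' setup.

```python
from itertools import product

concretes = ["nanocrystalline", "crystalline"]
clays = ["MX80_smectite", "Callovo-Oxfordian_argillite"]
secondary_phase_sets = ["restricted", "extended"]

def run_reactive_transport(concrete, clay, phases):
    """Hypothetical wrapper around one 1D Cartesian reactive transport run,
    returning the three summary outputs discussed in the abstract."""
    return {"pH_plume_extension_m": float("nan"),
            "clogging_time_yr": float("nan"),
            "alteration_front_m": float("nan")}

# Enumerate every combination of concrete crystallinity, clayey material and
# allowed secondary phases, and collect the simulation outputs per scenario.
results = {case: run_reactive_transport(*case)
           for case in product(concretes, clays, secondary_phase_sets)}
for case, outputs in results.items():
    print(case, outputs)
```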

  20. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important for estimating the safety margins of an engineering design application. This paper presents a system analysis and optimization toolkit developed by the Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of sensitivity analysis and uncertainty quantification algorithms. To reduce the computing demand, multiple compute resources, including multiprocessor computers and networks of workstations, are used simultaneously. A Graphical User Interface (GUI) was also developed within the parallel computing framework so that users can readily employ the toolkit for engineering design and optimization problems. The goal of this work is to provide a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. The available approaches include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has six modules of system analysis methodologies: deterministic and probabilistic approaches to data assimilation, uncertainty propagation, the Chi-square linearity test, sensitivity analysis, and FFTBM.
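
    A minimal sketch of the kind of workflow such a toolkit automates is shown below: uncertain inputs are sampled from assumed distributions and propagated through an external simulation code, with ensemble members dispatched in parallel across local processors. The run_simulation wrapper, the parameter names and the output units are hypothetical stand-ins for the interface to an engineering code, not KAERI's toolkit.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_simulation(params):
    """Hypothetical wrapper around one run of an external engineering code."""
    heat_transfer_mult, gap_conductance_mult = params
    return 600.0 + 40.0 / heat_transfer_mult + 15.0 / gap_conductance_mult

def main():
    rng = np.random.default_rng(7)
    # Uncertainty propagation: sample the uncertain inputs from assumed
    # distributions and push each ensemble member through the wrapper.
    samples = np.column_stack([
        rng.normal(1.0, 0.10, 200),   # multiplier on heat transfer coefficient
        rng.normal(1.0, 0.15, 200),   # multiplier on gap conductance
    ])
    # Dispatch ensemble members across local processors, analogous to the
    # toolkit's use of multiprocessor computers or networks of workstations.
    with ProcessPoolExecutor() as pool:
        outputs = list(pool.map(run_simulation, samples.tolist()))
    print(f"mean = {np.mean(outputs):.1f} K, "
          f"95th percentile = {np.percentile(outputs, 95):.1f} K")

if __name__ == "__main__":
    main()
```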